So what’s web scraping anyway? It involves automating away the laborious task of collecting information from websites. There are a lot of use cases for web scraping: you might want to collect prices from various e-commerce sites for a price comparison site. Or perhaps you need flight times and hotel/Airbnb listings for a travel site.
Web Scraping with JavaScript
Sometimes you need to scrape content from a website and a fancy scraping setup would be overkill.
Maybe you only need to extract a list of items on a single page, for example.
In these cases you can just manipulate the DOM right in the Chrome developer tools.
Extract List Items From a Wikipedia Page
Let's say you need this list of baked goods in a format that's easy to consume: https://en.wikipedia.org/wiki/List_of_baked_goods
Open Chrome DevTools and copy the following into the console:
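A minimal sketch of such a snippet. The `.mw-parser-output ul li` selector is an assumption about Wikipedia's markup (it may also catch navigation lists), so adjust it as needed:

```javascript
// Helper: turn a NodeList (or any iterable of elements) into an
// array of trimmed, non-empty text strings.
function extractText(elements) {
  return [...elements].map(el => el.textContent.trim()).filter(Boolean);
}

// In the DevTools console on the Wikipedia page:
//   const items = extractText(document.querySelectorAll('.mw-parser-output ul li'));
//   JSON.stringify(items, null, 2);
```

The console echoes the JSON string, and DevTools' built-in `copy()` helper can put it straight on the clipboard.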
Now you can select the JSON output and copy it to your clipboard.
A More Complicated Example
Let's try to get a list of companies from AngelList (https://angel.co/companies?company_types=Startup&locations=1688-United+States).
This case is slightly less straightforward, because we need to click 'More' at the bottom of the page to fetch more search results.
Open Chrome DevTools and copy:
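A sketch of such a script. The `.startup-link` and `.more` selectors are hypothetical — inspect the live page and substitute the real ones:

```javascript
const arr = [];

// Collect the visible text of every matching element under `root`.
function collectNames(root) {
  return [...root.querySelectorAll('.startup-link')].map(el => el.textContent.trim());
}

function scrapePage() {
  arr.push(...collectNames(document));
  // Persist after every pass so a crash or disconnect doesn't lose progress.
  window.localStorage.setItem('__companies__', JSON.stringify(arr));
  const more = document.querySelector('.more');
  if (more) {
    more.click();
    // Wait before scraping the next batch so we don't hammer the site.
    setTimeout(scrapePage, 2000);
  }
}

// In the DevTools console: scrapePage();
```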
You can access the results with:
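Since localStorage stores strings, getting the results back is just a `getItem` plus a `JSON.parse`. A small helper, written to take the storage object as a parameter:

```javascript
// Read the saved results back out of storage, or return an empty
// array if nothing has been saved yet.
function loadCompanies(storage) {
  const raw = storage.getItem('__companies__');
  return raw ? JSON.parse(raw) : [];
}

// In the DevTools console: loadCompanies(window.localStorage)
```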
- Chrome natively supports ES6 so we can use things like the spread operator
- We spread `[...document.querySelectorAll(...)]` because it returns a NodeList and we want a plain old array.
- We wrap everything in a `setTimeout` loop so that we don't overwhelm Angel.co with requests.
- We save our results in localStorage with `window.localStorage.setItem('__companies__', JSON.stringify(arr))` so that if we disconnect or the browser crashes, we can go back to Angel.co and our results will be saved.
- We must serialize data before saving it to localStorage.
Scraping With Node
These examples are fun but what about scraping entire websites?
We can use node-fetch and JSDOM to do something similar.
Just like before, we're not using any fancy scraping API, we're 'just' using the DOM API. But since this is node we need JSDOM to emulate a browser.
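A sketch, assuming `npm install node-fetch jsdom` (on Node 18+ the built-in fetch works too); the `.mw-parser-output ul li` selector is the same guess about Wikipedia's markup:

```javascript
const fetch = require('node-fetch');
const { JSDOM } = require('jsdom');

// Download a page, let JSDOM build a DOM from it, then query it
// exactly as we would in the browser.
async function scrapeList(url, selector) {
  const html = await (await fetch(url)).text();
  const { document } = new JSDOM(html).window;
  return [...document.querySelectorAll(selector)].map(el => el.textContent.trim());
}

scrapeList('https://en.wikipedia.org/wiki/List_of_baked_goods', '.mw-parser-output ul li')
  .then(items => console.log(JSON.stringify(items, null, 2)))
  .catch(console.error);
```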
Scraping With NightmareJs
Nightmare is a browser automation library that uses Electron under the hood.
The idea is that you can spin up an Electron instance, go to a webpage, and use Nightmare methods like `type` and `click` to programmatically interact with the page.
For example, you'd do something like the following to log in to a WordPress site programmatically with Nightmare:
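A sketch, assuming `npm install nightmare` and a hypothetical WordPress site at example.com; the element IDs are the ones used by WordPress's standard login form:

```javascript
const Nightmare = require('nightmare');
const nightmare = Nightmare({ show: true }); // show the Electron window while it works

nightmare
  .goto('https://example.com/wp-login.php') // hypothetical site URL
  .type('#user_login', 'admin')
  .type('#user_pass', 'hunter2')
  .click('#wp-submit')
  .wait('#wpadminbar') // the admin bar appearing confirms we're logged in
  .end()
  .then(() => console.log('logged in'))
  .catch(err => console.error('login failed:', err));
```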
Nightmare is a fun library and might seem like 'magic' at first.
But the Nightmare methods like `wait`, `type`, and `click` are just syntactic sugar over DOM (or virtual DOM) manipulation.
For example, here's the source for the nightmare method refresh:
Why Do We Need Electron?
Why is Nightmare built on Electron? Why not just use Chrome?
This brings us to the interesting alternative to nightmare, Chromeless.
Chromeless attempts to duplicate Nightmare's simple browser automation API using Chrome Canary instead of Electron.
This has a few interesting benefits, the most important of which is that Chromeless can be run on AWS Lambda. It turns out that the precompiled Electron binaries are simply too large to work with Lambda.
Here's the same example we started with (scraping companies from Angel.co), using Chromeless:
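A sketch, assuming `npm install chromeless`; the `.startup-link` selector is hypothetical — inspect the live page and adjust:

```javascript
const { Chromeless } = require('chromeless');

async function run() {
  const chromeless = new Chromeless();
  const names = await chromeless
    .goto('https://angel.co/companies?company_types=Startup&locations=1688-United+States')
    .wait('.startup-link')
    // The function passed to evaluate is serialized and run inside the page.
    .evaluate(() =>
      [...document.querySelectorAll('.startup-link')].map(el => el.textContent.trim())
    );
  await chromeless.end();
  return names;
}

run().then(console.log).catch(console.error);
```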
To run the above example, you'll need to install Chrome Canary locally.
Next, run the following two commands to start Chrome Canary headlessly.
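Something like the following, assuming the default macOS install path (adjust for your OS); port 9222 is the remote-debugging port Chromeless connects to by default:

```shell
# Alias the Canary binary so it's easy to invoke from the terminal.
alias canary="/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary"
# Start it headless, with remote debugging enabled for Chromeless.
canary --remote-debugging-port=9222 --headless
```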
Finally, install the npm package chromeless.
Posted by Soham Kamani on November 24, 2018
Almost all the information on the web exists in the form of HTML pages. The information in these pages is structured as paragraphs, headings, lists, or one of the many other HTML elements. These elements are organized in the browser as a hierarchical tree structure called the DOM (short for Document Object Model). Each element can have multiple child elements, which can also have their own children. This structure makes it convenient to extract specific information from the page.
The process of extracting this information is called 'scraping' the web, and it’s useful for a variety of applications. All search engines, for example, use web scraping to index web pages for their search results. We can also use web scraping in our own applications when we want to automate repetitive information gathering tasks.
Cheerio is a Node.js library that helps developers interpret and analyze web pages using a jQuery-like syntax. In this post, I will explain how to use Cheerio in your tech stack to scrape the web. We will use the API documentation for ButterCMS, a headless CMS, as an example and use Cheerio to extract all the API endpoint URLs from the web page.
There are many other web scraping libraries, and they run on most popular programming languages and platforms. What makes Cheerio unique, however, is its jQuery-based API.
Suppose a web page contains a paragraph whose text you want. You could use jQuery to get the text of the paragraph:
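For instance, given a hypothetical `<p id="example">Hello, world</p>` on the page:

```javascript
// $ is jQuery, available only in the browser. Wrapping the call in a
// function keeps the selector and the extraction easy to see.
function paragraphText($, selector) {
  return $(selector).text();
}

// In the browser console: paragraphText(jQuery, '#example')  // → "Hello, world"
```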
The above code uses a CSS selector to locate the paragraph element, and the `text()` method to extract its contents.
The jQuery API is useful because it uses standard CSS selectors to search for elements, and has a readable API to extract information from them. jQuery is, however, usable only inside the browser, and thus cannot be used for web scraping. Cheerio solves this problem by providing jQuery's functionality within the Node.js runtime, so that it can be used in server-side applications as well.
The Cheerio API
Unlike jQuery, Cheerio doesn't have access to the browser's DOM; instead, it parses the HTML source you give it and builds its own representation of the document.
Let's look at how we can implement the previous example using Cheerio:
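A sketch, assuming `npm install cheerio` and the same hypothetical paragraph markup:

```javascript
const cheerio = require('cheerio');

// cheerio.load parses an HTML string and returns a jQuery-like $
// bound to that document.
const $ = cheerio.load('<p id="example">Hello, world</p>');

console.log($('#example').text()); // "Hello, world"
```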
You can find more information on the Cheerio API in the official documentation.
Scraping the ButterCMS documentation page
The ButterCMS documentation page is filled with useful information on their APIs. For our application, we just want to extract the URLs of the API endpoints.
For example, the documentation for the API to get a single page includes a sample request, and what we want from it is just the endpoint URL.
In order to use Cheerio to extract all the URLs documented on the page, we need to:
- Download the source code of the webpage, and load it into a Cheerio instance
- Use the Cheerio API to filter out the HTML elements containing the URLs
To get started, make sure you have Node.js installed on your system. Create an empty folder as your project directory:
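For example (the directory name here is arbitrary):

```shell
mkdir cheerio-scraper
```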
Next, go inside the directory and start a new node project:
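For example, if the project directory is named cheerio-scraper:

```shell
cd cheerio-scraper
npm init
```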
Follow the prompts; this will create a package.json file in the directory.
Finally, create a file named index.js, which will hold our application code.
Obtaining the website source code
We can use the axios library to download the source code of the documentation page.
While in the project directory, install the axios library:
npm install axios
We can then use axios to download the page source:
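A sketch for index.js; `axios.get` resolves with a response object whose `data` field holds the page body:

```javascript
const axios = require('axios');

axios
  .get('https://buttercms.com/docs/api/')
  .then(response => {
    // response.data is the raw HTML source of the page
    console.log(response.data);
  })
  .catch(error => console.error('download failed:', error.message));
```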
Add the above code to index.js and run it with `node index.js`.
You should then see the HTML source code printed to your console. This can be quite large! Let’s explore the source code to find patterns we can use to extract the information we want. You can use your favorite browser to view the source code. Right-click on any page and click on the 'View Page Source' option in your browser.
Extracting information from the source code
After looking at the source code of the ButterCMS documentation page, it looks like all the API URLs are contained in span elements inside each endpoint's code sample.
We can use this pattern to extract the URLs from the source code. To get started, let's install the Cheerio library into our project:
npm install cheerio
Now, we can use the response data from earlier to create a Cheerio instance and scrape the webpage we downloaded:
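A sketch of the full program. The `span.api-path` selector is an assumption about the page's markup — inspect the source and substitute the real class:

```javascript
const axios = require('axios');
const cheerio = require('cheerio');

axios.get('https://buttercms.com/docs/api/').then(response => {
  // Load the downloaded HTML into a Cheerio instance.
  const $ = cheerio.load(response.data);
  // Collect the text of every matching span; .get() converts
  // Cheerio's wrapper into a plain array.
  const urls = $('span.api-path')
    .map((i, el) => $(el).text().trim())
    .get();
  console.log(urls);
});
```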
Run the program with `node index.js`, and you should see the list of endpoint URLs printed as output.
Cheerio makes it really easy for us to use the tried and tested jQuery API in a server-based environment. In fact, if you use the code we just wrote, barring the page download and loading, it would work perfectly in the browser as well. You can verify this by going to the ButterCMS documentation page and pasting the following jQuery code in the browser console:
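Assuming the same hypothetical `span.api-path` selector:

```javascript
// jQuery's .map((i, el) => ...) and .get() work exactly as in Cheerio.
$('span.api-path').map((i, el) => $(el).text().trim()).get()
```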
You’ll see the same output as the previous example.
You can even use the browser to play around with the DOM before finally writing your program with Node and Cheerio.
One important aspect to remember while web scraping is to find patterns in the elements you want to extract. For example, they could all be list items under a common ul element, or they could be rows in a table element. Inspecting the source code of a webpage is the best way to find such patterns, after which using Cheerio's API should be a piece of cake!
Soham is a full stack developer with experience in developing web applications at scale in a variety of technologies and frameworks.