Web Scraping in Python with BeautifulSoup

Sometimes we need to extract information from websites. When a site offers an API, we can pull the data through it, but many websites provide no API at all.

This is where web scraping comes into play!

Web scraping with Python is easy thanks to the many useful libraries available; a bare-bones installation, however, isn't enough. For this Python web scraping tutorial, we'll be using three important libraries: BeautifulSoup 4, urllib, and Selenium. Python is the most popular language for web scraping because it handles most of the process easily and has a variety of libraries created specifically for the task. Scrapy, for example, is a very popular open-source web crawling framework written in Python that is ideal for web scraping as well as for extracting data through APIs.

Python is widely used for web scraping because of the ease with which the core logic can be written. Whether you are a data scientist, developer, engineer, or someone who works with large amounts of data, web scraping with Python is of great help.

When there is no direct way to download the data, web scraping in Python can extract massive quantities of it without any hassle and within a short period of time.

In this tutorial, we will look at scraping with some very powerful Python-based libraries: BeautifulSoup and Selenium.

BeautifulSoup and urllib

Web scraping is the process of collecting and parsing raw data from the Web, and the Python community has come up with some pretty powerful web scraping tools. The Internet hosts perhaps the greatest source of information, and misinformation, on the planet.

BeautifulSoup is a Python library for pulling data out of HTML and XML files. It does not fetch data from a webpage by itself, so we will use the urllib library to download the page.

First we need to install the BeautifulSoup4 library (and the lxml parser) using the following commands:

$ sudo pip install beautifulsoup4

$ pip install lxml

OR

$ sudo apt-get install python3-bs4

$ sudo apt-get install python-lxml

Here I am going to scrape the case studies page of https://www.botreetechnologies.com.

from urllib.request import urlopen

from bs4 import BeautifulSoup

We import the packages we are going to use in our program. Now we can fetch our webpage using the following:

response = urlopen('https://www.botreetechnologies.com/case-studies')

The response we just fetched is raw HTML, which Beautiful Soup cannot use directly; we first need to parse it into an HTML/XML tree.

data = BeautifulSoup(response.read(), 'lxml')

Here we parsed the webpage's HTML content using the lxml parser.

As you can see, there are many case studies available on this web page, and I want to read all of them.

Each case study has a title at the top followed by some details related to that case, and I want to extract all of that information.

We can extract an element based on its tag, class, id, XPath, etc.

You can find the class of an element by right-clicking on it and selecting 'Inspect Element'.

case_studies = data.find('div', {'class': 'content-section'})

If there are multiple elements with this class on the page, find() returns only the first one. To get all elements with this class, use the find_all() method (spelled findAll() in older versions of Beautiful Soup):

case_studies = data.find_all('div', {'class': 'content-section'})

Now we have the divs with class 'content-section' and their child elements. For each case study (case_stud below), the <h2> tag holds the title and the <ul> tag holds the details as <li> elements. The title can be read like this:

case_stud.find('h2').find('a').text

The details can be collected in the same way:

case_stud_details = case_stud.find('ul').find_all('li')

Now we have the list of all the <li> children of the <ul> element.

To get the first element from this list, simply write:

case_stud_details[0]

We can also extract any attribute of an element; for example, we can get an element's text like this:

case_stud_details[2].text
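Putting these pieces together, here is a minimal sketch of the whole BeautifulSoup script. The markup structure ('content-section' divs containing an <h2> title and a <ul> of details) is assumed from the snippets above and may have changed on the live site:

from urllib.request import urlopen
from bs4 import BeautifulSoup

response = urlopen('https://www.botreetechnologies.com/case-studies')
data = BeautifulSoup(response.read(), 'lxml')

# Each case study is a div with class 'content-section'
for case_stud in data.find_all('div', {'class': 'content-section'}):
    title = case_stud.find('h2').find('a').text
    details = [li.text for li in case_stud.find('ul').find_all('li')]
    print(title)
    print(details)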

But here I want to click on the title of a case study and open its details page to get all of the information.

Since we want to interact with the website to get dynamic content, we need to imitate normal user interaction. Such behaviour cannot be achieved with BeautifulSoup or urllib, so we need a webdriver.

A webdriver creates a new browser window that we can control programmatically. It also lets us simulate user events like clicks and scrolling.

Selenium is one such webdriver.

Selenium Webdriver

Selenium WebDriver accepts commands, sends them to a browser, and retrieves the results.

You can install Selenium on your system using the following simple command:

$ sudo pip install selenium

In order to use it, we need to import Selenium in our Python script:

from selenium import webdriver

I am using the Firefox webdriver in this tutorial. Now we are ready to fetch our webpage, which we can do with the following (these snippets come from inside a class, hence the self. prefix):

self.url = 'https://www.botreetechnologies.com/'

self.browser = webdriver.Firefox()

Now we need to click on ‘CASE-STUDIES’ to open that page.

We can click on an element with Selenium using the following piece of code:

self.browser.find_element_by_xpath("//div[contains(@id, 'navbar')]/ul[2]/li[1]").click()

Now we are taken to the case studies page, where all the case studies are listed with some information.

Here, I want to click on each case study and open its details page to extract all the available information.

So I created a list of links to all the case studies and loaded them one after the other.

To go back to the previous page, you can use the following piece of code:

self.browser.execute_script('window.history.go(-1)')

Putting it all together, the final Selenium script looks as follows.
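This is a minimal sketch reassembling the snippets above. The CSS selector for the case study title links ('.content-section h2 a') is an assumption based on the classes and tags we used with BeautifulSoup, and newer Selenium versions replace find_element_by_xpath with find_element(By.XPATH, ...):

from selenium import webdriver

class CaseStudyScraper:
    def __init__(self):
        self.url = 'https://www.botreetechnologies.com/'
        self.browser = webdriver.Firefox()

    def scrape(self):
        self.browser.get(self.url)
        # Open the case studies page from the navbar
        self.browser.find_element_by_xpath("//div[contains(@id, 'navbar')]/ul[2]/li[1]").click()
        # Visit each case study in turn (selector is an assumption, see above)
        links = self.browser.find_elements_by_css_selector('.content-section h2 a')
        for i in range(len(links)):
            # Re-find the links on every pass; element references go stale after navigating
            self.browser.find_elements_by_css_selector('.content-section h2 a')[i].click()
            # ... extract whatever details you need from the details page here ...
            # Go back to the case studies listing
            self.browser.execute_script('window.history.go(-1)')
        self.browser.quit()

if __name__ == '__main__':
    CaseStudyScraper().scrape()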

And we are done! Now you can extract static webpages or interact with dynamic ones using the scripts above.

Conclusion: Web Scraping with Python is an Essential Skill

Today, more than ever, companies are working with huge amounts of data. Learning how to scrape data with Python will take you a long way. In this tutorial, you learned Python web scraping with Beautiful Soup.

Along with that, Python web scraping with Selenium is also a useful skill. Companies need data engineers who can extract data and deliver it to them for gathering useful insights, and Python web scraping projects give you a high chance of success in data extraction.

If you want to hire Python developers for web scraping, then contact BoTree Technologies. We have a team of engineers who are experts in web scraping. Give us a call today.

Consulting is free – let us help you grow!

The internet has an amazingly wide variety of information for human consumption. But this data is often difficult to access programmatically if it doesn't come in the form of a dedicated REST API. With Python tools like Beautiful Soup, you can scrape and parse this data directly from web pages to use for your projects and applications.

Let's use the example of scraping MIDI data from the internet to train a neural network with Magenta that can generate classic Nintendo-sounding music. In order to do this, we'll need a set of MIDI music from old Nintendo games. Using Beautiful Soup we can get this data from the Video Game Music Archive.

Getting started and setting up dependencies

Before moving on, you will need to make sure you have an up-to-date version of Python 3 and pip installed. Make sure you create and activate a virtual environment before installing any dependencies.

You'll need to install the Requests library for making HTTP requests to get data from the web page, and Beautiful Soup for parsing through the HTML.

With your virtual environment activated, run the following command in your terminal:
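$ pip install requests beautifulsoup4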

We're using Beautiful Soup 4 because it's the latest version and Beautiful Soup 3 is no longer being developed or supported.

Using Requests to scrape data for Beautiful Soup to parse

First let's write some code to grab the HTML from the web page, and look at how we can start parsing through it. The following code will send a GET request to the web page we want, and create a BeautifulSoup object with the HTML from that page:
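A sketch of that code, assuming the Video Game Music Archive's NES index page as the target URL:

import requests
from bs4 import BeautifulSoup

# Target URL assumed: the Video Game Music Archive's NES page
vgm_url = 'https://www.vgmusic.com/music/console/nintendo/nes/'
html_text = requests.get(vgm_url).text
soup = BeautifulSoup(html_text, 'html.parser')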

With this soup object, you can navigate and search through the HTML for data that you want. For example, if you run soup.title after the previous code in a Python shell you'll get the title of the web page. If you run print(soup.get_text()), you will see all of the text on the page.

Getting familiar with Beautiful Soup

The find() and find_all() methods are among the most powerful weapons in your arsenal. soup.find() is great for cases where you know there is only one element you're looking for, such as the body tag. On this page, soup.find(id='banner_ad').text will get you the text from the HTML element for the banner advertisement.

soup.find_all() is the most common method you will be using in your web scraping adventures. Using this you can iterate through all of the hyperlinks on the page and print their URLs:
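For example, using the soup object from before:

for link in soup.find_all('a'):
    print(link.get('href'))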

You can also provide different arguments to find_all, such as regular expressions or tag attributes to filter your search as specifically as you want. You can find lots of cool features in the documentation.

Parsing and navigating HTML with BeautifulSoup

Before writing more code to parse the content that we want, let’s first take a look at the HTML that’s rendered by the browser. Every web page is different, and sometimes getting the right data out of them requires a bit of creativity, pattern recognition, and experimentation.

Our goal is to download a bunch of MIDI files, but there are a lot of duplicate tracks on this webpage as well as remixes of songs. We only want one of each song, and because we ultimately want to use this data to train a neural network to generate accurate Nintendo music, we won't want to train it on user-created remixes.

When you're writing code to parse through a web page, it's usually helpful to use the developer tools available to you in most modern browsers. If you right-click on the element you're interested in, you can inspect the HTML behind that element to figure out how you can programmatically access the data you want.

Let's use the find_all method to go through all of the links on the page, but use regular expressions to filter through them so we are only getting links that contain MIDI files whose text has no parentheses, which will allow us to exclude all of the duplicates and remixes.

Create a file called nes_midi_scraper.py and add the following code to it:
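Here is a sketch of nes_midi_scraper.py under the same URL assumption as above; the regular expressions are one reasonable way to express "the href ends in .mid and the link text contains no parentheses":

import re
import requests
from bs4 import BeautifulSoup

vgm_url = 'https://www.vgmusic.com/music/console/nintendo/nes/'
html_text = requests.get(vgm_url).text
soup = BeautifulSoup(html_text, 'html.parser')

if __name__ == '__main__':
    # Keep only links whose href ends in .mid and whose text has no parentheses
    attrs = {'href': re.compile(r'\.mid$')}
    tracks = soup.find_all('a', attrs=attrs, string=re.compile(r'^((?!\().)*$'))

    count = 0
    for track in tracks:
        print(track)
        count += 1
    print('{} tracks found'.format(count))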

This will filter through all of the MIDI links that we want on the page, print the link tag for each one, and then print how many files were matched.

Run the code in your terminal with the command python nes_midi_scraper.py.

Downloading the MIDI files we want from the webpage

Now that we have working code to iterate through every MIDI file that we want, we have to write code to download all of them.

In nes_midi_scraper.py, add a function to your code called download_track, and call that function for each track in the loop iterating through them:
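A sketch of that function, under the same assumptions as before (the href values are taken to be bare filenames relative to vgm_url):

def download_track(count, track_element):
    # Build the full download URL from the relative href
    download_url = '{}{}'.format(vgm_url, track_element['href'])
    # Prefix the filename with a unique number to avoid naming collisions
    # (assumes the href is a bare filename with no path separators)
    file_name = '{}_{}'.format(count, track_element['href'])

    # Download the MIDI file and save it to disk
    r = requests.get(download_url, allow_redirects=True)
    with open(file_name, 'wb') as f:
        f.write(r.content)

The loop at the bottom of the script then becomes:

    for i, track in enumerate(tracks):
        download_track(i, track)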

In this download_track function, we're passing the Beautiful Soup object representing the HTML element of the link to the MIDI file, along with a unique number to use in the filename to avoid possible naming collisions.

Run this code from the directory where you want to save all of the MIDI files, and watch your terminal display all 2230 MIDIs you downloaded (the count at the time of writing). This is just one specific practical example of what you can do with Beautiful Soup.

The vast expanse of the World Wide Web

Now that you can programmatically grab things from web pages, you have access to a huge source of data for whatever your projects need. One thing to keep in mind is that changes to a web page’s HTML might break your code, so make sure to keep everything up to date if you're building applications on top of this.

If you're looking for something to do with the data you just grabbed from the Video Game Music Archive, you can try using Python libraries like Mido to work with MIDI data to clean it up, or use Magenta to train a neural network with it or have fun building a phone number people can call to hear Nintendo music.

I’m looking forward to seeing what you build. Feel free to reach out and share your experiences or ask any questions.

  • Email: [email protected]
  • Twitter: @Sagnewshreds
  • Github: Sagnew
  • Twitch (streaming live code): Sagnewshreds