Corey Schafer Web Scraping

Quick Start
Running the package from the Python interpreter
Understanding the API
For more control
General Overview
Future Features
Technical Specifications

In this article, I will guide you through collecting information from two different website formats and explain a few blocks of code and how we came up with them. If you need a more detailed tutorial of every component in the code, I highly recommend watching Corey Schafer's YouTube tutorial.

Requirements: Python.

Other channels worth following:

  • Sentdex (Python, Machine Learning & Web Dev)
  • StatQuest with Josh Starmer (Statistics & Mathematics behind ML Algorithms)
  • Tech with Tim (Python, ML & Cool Projects)
  • Corey Schafer (Web Development with Python)
  • Chai Time Data Science (Data Science & ML Practitioner Interviews)
  • 3Blue1Brown (Mathematics & Statistics)

Quick Start


This package uses f-strings (more here) and as such requires Python 3.6+. If you have an older version of Python, you can download the Python 3.8.2 macOS 64-bit installer, Windows x86-64 executable installer, Windows x86 executable installer, or the Gzipped source tarball (most useful for Linux) and follow the instructions to set up Python for your machine.

It's recommended to install the latest version if you don't have existing projects that depend on a specific older version of Python, but if you want to install a different version, visit the Python Downloads page and select the version you want. Once you do that, enter the following in your command line:
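The install command itself was lost in this copy of the page. Assuming the package name on PyPI matches the `yt_videos_list` import used later in this document (check PyPI if it differs), installation would look like:

```shell
# Assumed package name, inferred from the `yt_videos_list` import
# shown later in this document.
python3 -m pip install yt-videos-list
```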

NOTE: You do need to have a Selenium driver installed to run this package, but you do not need to download all the Selenium drivers for your OS if you only want to run this program with a specific driver. If you want a specific driver, just copy and paste the corresponding command for the relevant driver from below. Otherwise, download the Selenium dependencies for all the drivers that are supported on your OS to play around with them and see how they differ :)

Copy and paste the code block from here that's relevant to your machine's OS for the Selenium driver(s) you want.

NOTE that you also need the corresponding browser installed to properly run the Selenium driver.

  • To download the most recent version of the browser, go to the page for:

Running the package from the Python interpreter

Understanding the API

There are two types of YouTube channels: one type is a user channel and the other is a channel channel.

  • The URL for a user channel consists of youtube.com/ followed by user/ followed by the name. For example:
    • Disney:
    • sentdex:
    • Marvel:
    • Apple:
  • The URL for a channel channel consists of youtube.com/ followed by channel/ followed by a string of rather unpredictable characters. For example:
    • Tasty:
    • Billie Eilish:
    • Gordon Ramsay:
    • PBS Space Time:
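The example links above were stripped out of this copy of the page. Assuming the standard YouTube URL patterns, the two shapes look like this (the channel id below is a placeholder, not a real id):

```
https://www.youtube.com/user/sentdex                       <- user channel
https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxxxxxx   <- channel channel (placeholder id)
```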

To scrape the video titles along with the links to the videos, run the create_list_for(channel, channel_type) method on the ListCreator object you just created, substituting the name of the channel for the channel argument and the type of channel for the channel_type argument. By default, the file produced will be named channelVideosList.ext, where .ext will be .csv or .txt depending on the file type(s) you specified.
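Putting the two steps together, a minimal session might look like the sketch below. This assumes yt-videos-list and a Selenium driver are installed, and uses 'sentdex' (listed above as a user channel) as the example; the import is guarded so the snippet still runs where the package is absent.

```python
# Minimal sketch of the API described above (guarded import: the block is a
# no-op where the yt-videos-list package is not installed).
try:
    from yt_videos_list import ListCreator
    package_available = True
except ImportError:
    package_available = False

if package_available:
    lc = ListCreator()                     # all optional arguments at their defaults
    lc.create_list_for('sentdex', 'user')  # writes sentdexVideosList.csv / .txt
```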

For more control

NOTE that you can also access all the information below in the python3 interpreter by entering
from yt_videos_list import ListCreator


There are a number of optional arguments you can specify when instantiating the ListCreator object. Each falls back to a default if left unspecified, but in case you want more flexibility, you can specify:

  • Options for the driver argument are
    • Firefox (default)
    • Opera
    • Safari
    • Chrome
      • driver='firefox'
      • driver='opera'
      • driver='safari'
      • driver='chrome'
  • Options for the file type arguments (csv, txt) are
    • True (default) - create a file for the specified type
    • False - do not create a file for the specified type.
      • txt=True (default) OR txt=False
      • csv=True (default) OR csv=False
  • Options for the write format arguments (csv_write_format, txt_write_format) are
    • 'x' (default) - does not overwrite an existing file with the same name
    • 'w' - if an existing file with the same name exists, it will be overwritten
    • NOTE: if you specify the file type argument to be False, you don't need to touch this - the program will automatically skip this step.
      • txt_write_format='x' (default) OR txt_write_format='w'
      • csv_write_format='x' (default) OR csv_write_format='w'
  • Options for the chronological argument are
    • False (default) - write the files in order from most recent video to the oldest video
    • True - write the files in order from oldest video to the most recent video
      • chronological=False (default) OR chronological=True
  • Options for the headless argument are
    • False (default) - run the driver with an open Selenium instance for viewing
    • True - run the driver in 'invisible' mode.
      • headless=False (default) OR headless=True
  • Options for the scroll_pause_time argument are any float values greater than 0 (default 0.8). The value you provide will be how long the program waits before trying to scroll the videos list page down for the channel you want to scrape. For fast internet connections, you may want to reduce the value, and for slow connections you may want to increase the value.
    • scroll_pause_time=0.8 (default)
    • CAUTION: reducing this value too much will result in the program not capturing all the videos, so be careful! Experiment :)
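As a sketch, here is an instantiation with every optional argument from the list above spelled out at its documented default. The keyword names are taken from this document and should be checked against the installed version; the import is guarded so the snippet runs even where the package is absent.

```python
# Every optional argument at its documented default (guarded import: the
# ListCreator call is skipped where yt-videos-list is not installed).
try:
    from yt_videos_list import ListCreator
except ImportError:
    ListCreator = None

options = dict(
    driver='firefox',       # or 'opera', 'safari', 'chrome'
    csv=True, txt=True,     # which file types to create
    csv_write_format='x',   # 'x' = never overwrite an existing file, 'w' = overwrite
    txt_write_format='x',
    chronological=False,    # False = newest first, True = oldest first
    headless=False,         # True = run the browser invisibly
    scroll_pause_time=0.8,  # seconds to wait before each scroll attempt
)

if ListCreator is not None:
    lc = ListCreator(**options)  # ready for lc.create_list_for(...)
```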

Running the package from the CLI as a script using -m (coming in yt-videos-list 2.0!)

General Overview

This repo is intended to provide a quick, simple way to create a list of all videos posted to any YouTube channel by providing just the URL to that user's channel videos. The general format for this is

Technical Specifications


Please see /extra/


Shared January 6, 2017


Web scraping is a very powerful tool for any data professional to learn. With web scraping, the entire internet becomes your database. In this tutorial, we show you how to parse a web page into a data file (CSV) using a Python package called BeautifulSoup.
In this example, we web scrape graphics cards from
Python Code:
JavaScript beautifier:
If you are not seeing the command line, follow this tutorial:
Table of Contents:
0:00 - Introduction
1:28 - Setting up Anaconda
3:00 - Installing Beautiful Soup
3:43 - Setting up urllib
6:07 - Retrieving the Web Page
10:47 - Evaluating Web Page
11:27 - Converting Listings into Line Items
16:13 - Using jsbeautiful
16:31 - Reading Raw HTML for Items to Scrape
18:34 - Building the Scraper
22:11 - Using the 'findAll' Function
27:26 - Testing the Scraper
29:07 - Creating the .csv File
32:18 - End Result
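The scraping steps in the chapter list above can be sketched roughly as follows. The HTML snippet and the tag/class names are invented for illustration (a real retailer page will differ), and the CSV is written to an in-memory buffer rather than a file so the snippet is self-contained:

```python
import csv
import io

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Invented stand-in for the retailer page retrieved in the video.
html = """
<div class="item"><h2 class="item-name">Example GPU A</h2><span class="price">$199.99</span></div>
<div class="item"><h2 class="item-name">Example GPU B</h2><span class="price">$299.99</span></div>
"""

soup = BeautifulSoup(html, 'html.parser')

# The video uses findAll; find_all is the modern name for the same method.
rows = []
for item in soup.find_all('div', class_='item'):
    name = item.find('h2', class_='item-name').get_text(strip=True)
    price = item.find('span', class_='price').get_text(strip=True)
    rows.append([name, price])

# Swap io.StringIO() for open('graphics_cards.csv', 'w', newline='') to write a real file.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(['name', 'price'])
writer.writerows(rows)
print(buffer.getvalue())
```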