Lesson 1 (00:01:57): Software for web scraping in JavaScript
Lesson 2 (00:05:22): (Optional) Note about the deprecation of Request/Request-Promise
Lesson 3 (00:06:27): This could save you A LOT of time and effort!
Lesson 4 (00:00:38): Intro to section
Lesson 5 (00:03:13): Using Chrome Developer Tools
Lesson 6 (00:03:49): Selecting our element
Lesson 7 (00:08:34): Building our first scraper!
Lesson 8 (00:04:53): Selecting multiple elements
Lesson 9 (00:02:48): Selecting using a CSS ID
Lesson 10 (00:03:10): Selecting using CSS classes
Lesson 11 (00:02:25): Selecting using HTML attributes
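A minimal sketch of the selector work lessons 4-11 cover, assuming the Cheerio + Request-Promise stack named later in the course (Request is deprecated, so any HTTP client that returns the page HTML works the same way). The URL and selectors below are placeholders, not the ones used in the videos.

```js
// Sketch: fetching a page and selecting elements with Cheerio.
// 'https://example.com' and the selectors are placeholders.
const rp = require('request-promise');
const cheerio = require('cheerio');

async function main() {
  const html = await rp('https://example.com');
  const $ = cheerio.load(html);

  // By tag
  console.log($('h1').text());

  // By CSS id
  console.log($('#main-title').text());

  // By CSS class (multiple elements)
  $('.result').each((i, el) => {
    console.log(i, $(el).text().trim());
  });

  // By HTML attribute
  $('a[target="_blank"]').each((i, el) => {
    console.log($(el).attr('href'));
  });
}

main().catch(console.error);
```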
Lesson 12 (00:01:02): Intro to section
Lesson 13 (00:02:07): Structure of an HTML table
Lesson 14 (00:00:59): Data Structure in JavaScript
Lesson 15 (00:03:40): Creating a selector in Chrome Tools
Lesson 16 (00:04:17): Scraping all table cells in Chrome Tools
Lesson 17 (00:06:25): Scraping data in Node.js with Cheerio/Request
Lesson 18 (00:07:01): Scraping Company Names in Node.js
Lesson 19 (00:03:53): Scraping all table columns
Lesson 20 (00:07:07): BONUS - dynamic table headers when scraping tables
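The table lessons (12-20) boil down to iterating over the table's rows and cells; here is a hedged sketch that also builds object keys dynamically from the header row, as the bonus lesson does. The URL and the '#companies' selector are invented for illustration.

```js
// Sketch: scraping an HTML table into an array of objects,
// using the header row to build keys dynamically.
// The URL and '#companies' selector are placeholders.
const rp = require('request-promise');
const cheerio = require('cheerio');

async function scrapeTable() {
  const html = await rp('https://example.com/companies');
  const $ = cheerio.load(html);

  // Read column names from the header row.
  const headers = [];
  $('#companies thead th').each((i, th) => {
    headers.push($(th).text().trim());
  });

  // Map each body row to an object keyed by the headers.
  const rows = [];
  $('#companies tbody tr').each((i, tr) => {
    const row = {};
    $(tr).find('td').each((j, td) => {
      row[headers[j]] = $(td).text().trim();
    });
    rows.push(row);
  });

  return rows;
}

scrapeTable().then(console.log).catch(console.error);
```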
Lesson 21 (00:01:17): Intro to project
Lesson 22 (00:01:30): Why are we using Puppeteer instead of Node.js Request?
Lesson 23 (00:01:11): Initialising the project
Lesson 24 (00:03:33): Opening a URL with Puppeteer
Lesson 25 (00:01:53): What data are we scraping?
Lesson 26 (00:03:26): Data Structure
Lesson 27 (00:04:49): Job Title CSS Selector
Lesson 28 (00:03:39): Scraping the job title using Cheerio
Lesson 29 (00:06:43): Scraping the description URL
Lesson 30 (00:05:48): Creating an array of scraping objects
Lesson 31 (00:05:46): Scraping the job post date
Lesson 32 (00:04:02): Scraping Neighborhood data
Lesson 33 (00:06:53): Scraping a List of Pages with Puppeteer
Lesson 34 (00:03:38): Limiting Scraping Requests per Second
Lesson 35 (00:03:37): Scraping job descriptions from different pages
Lesson 36 (00:04:32): Scraping compensation from job listings
Lesson 37 (00:03:49): Setting up a MongoDB database with mLab
Lesson 38 (00:03:54): Connecting to the MongoDB database with Mongoose
Lesson 39 (00:02:59): Creating the Listing Mongoose schema
Lesson 40 (00:04:23): Saving listing data to MongoDB
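Lessons 21-40 build a Craigslist-style job scraper; the sketch below compresses the main moving parts (Puppeteer for page HTML, Cheerio for parsing, a delay to limit request rate, Mongoose for persistence) into one file. The URL, selectors, schema fields, and connection string are stand-ins, not the course's exact values.

```js
// Sketch of the overall pipeline: Puppeteer -> Cheerio -> delay -> MongoDB.
// URL, selectors, schema fields and connection string are placeholders.
const puppeteer = require('puppeteer');
const cheerio = require('cheerio');
const mongoose = require('mongoose');

const listingSchema = new mongoose.Schema({
  title: String,
  url: String,
  datePosted: Date,
  hood: String,
  compensation: String,
});
const Listing = mongoose.model('Listing', listingSchema);

// Simple rate limiting: wait between requests.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeIndexPage(page, url) {
  await page.goto(url, { waitUntil: 'networkidle2' });
  const $ = cheerio.load(await page.content());

  // One scraping object per listing row (selectors are assumed).
  return $('.result-row').map((i, el) => ({
    title: $(el).find('.result-title').text().trim(),
    url: $(el).find('.result-title').attr('href'),
    datePosted: new Date($(el).find('time').attr('datetime')),
    hood: $(el).find('.result-hood').text().trim(),
  })).get();
}

async function main() {
  await mongoose.connect('mongodb://localhost:27017/scraper'); // placeholder URI
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Placeholder list of index pages to walk through.
  const pages = ['https://example.org/jobs?page=1', 'https://example.org/jobs?page=2'];
  for (const url of pages) {
    const listings = await scrapeIndexPage(page, url);
    await Listing.create(listings);
    await delay(1000); // at most ~1 request per second
  }

  await browser.close();
  await mongoose.disconnect();
}

main().catch(console.error);
```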
Lesson 41 (00:00:47): Introduction
Lesson 42 (00:01:32): Project Setup
Lesson 43 (00:03:26): Getting HTML from the website
Lesson 44 (00:04:07): Creating a sample of the data to collect
Lesson 45 (00:06:20): Title/URL From Jobs
Lesson 46 (00:02:11): Scraping the Time the Job Was Posted
Lesson 47 (00:01:00): Job Neighborhood
Lesson 48 (00:06:48): Scraping Job Descriptions
Lesson 49 (00:06:22): Finishing the Description and Compensation
Lesson 50 (00:00:47): Outro
Lesson 51 (00:00:33): Help! I'm blocked!
Lesson 52 (00:02:01): What can you do if you're blocked?
Lesson 53 (00:02:12): Using a proxy in Request
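Lesson 53 routes traffic through a proxy. The request/request-promise library (deprecated, but it is what the lesson names) accepts a proxy option, as sketched below with a placeholder proxy address and URL.

```js
// Sketch: sending a request through an HTTP proxy with the (deprecated)
// request-promise library. The proxy address and URL are placeholders.
const rp = require('request-promise');

async function fetchThroughProxy() {
  const html = await rp({
    uri: 'https://example.com',
    proxy: 'http://username:password@123.45.67.89:8080', // placeholder proxy
    headers: { 'User-Agent': 'Mozilla/5.0' }, // look like a normal browser
  });
  return html;
}

fetchThroughProxy().then((html) => console.log(html.length)).catch(console.error);
```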
Lesson 54 (00:01:37): Initializing the project and adding packages
Lesson 55 (00:01:04): Creating a tests folder and setting up the test script
Lesson 56 (00:03:38): Writing our first simple test
Lesson 57 (00:00:45): Making our first simple test pass!
Lesson 58 (00:04:35): Getting HTML from the website for our tests
Lesson 59 (00:02:29): Reading the HTML file for our tests
Lesson 60 (00:08:25): Writing out our tests
Lesson 61 (00:03:39): Getting the title test to pass
Lesson 62 (00:01:11): Making the URL test pass!
Lesson 63 (00:02:25): Making the hood test pass!
Lesson 64 (00:03:17): Making the final test for datePosted pass!
Lesson 65 (00:04:01): End notes + refactoring
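Lessons 54-65 test the parser against a saved copy of the page instead of hitting the network. A hedged sketch, assuming a Mocha test runner and Node's built-in assert module; the parser module, fixture path, and expectations are invented for illustration.

```js
// test/parser.test.js - sketch of testing a scraper against saved HTML.
// Assumes "npm test" runs mocha and that a local parseListings(html)
// function exists; the fixture path and expectations are placeholders.
const fs = require('fs');
const path = require('path');
const assert = require('assert');
const { parseListings } = require('../src/parser'); // hypothetical module

describe('parseListings', () => {
  // Read the saved HTML once so tests never touch the network.
  const html = fs.readFileSync(path.join(__dirname, 'fixtures', 'jobs.html'), 'utf8');
  const listings = parseListings(html);

  it('scrapes the title', () => {
    assert.strictEqual(typeof listings[0].title, 'string');
    assert.ok(listings[0].title.length > 0);
  });

  it('scrapes the url', () => {
    assert.ok(listings[0].url.startsWith('http'));
  });

  it('scrapes the hood', () => {
    assert.strictEqual(typeof listings[0].hood, 'string');
  });

  it('scrapes datePosted as a Date', () => {
    assert.ok(listings[0].datePosted instanceof Date);
  });
});
```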
Lesson 66 (00:07:47): Exporting web scraping results to CSV
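Lesson 66 exports results to CSV; the course's exact library isn't listed here, so this sketch joins fields by hand with Node's fs, quoting values and doubling embedded quotes (a real project might prefer a CSV package such as json2csv). The field names and sample data are placeholders.

```js
// Sketch: writing scraped objects to a CSV file with plain Node.js.
// Field names, sample data and the output path are placeholders.
const fs = require('fs');

function toCsv(rows) {
  const headers = Object.keys(rows[0]);
  const lines = rows.map((row) =>
    headers.map((h) => `"${String(row[h]).replace(/"/g, '""')}"`).join(',')
  );
  return [headers.join(','), ...lines].join('\n');
}

const listings = [
  { title: 'Junior JS Developer', hood: 'Brooklyn', compensation: '$30/hr' },
  { title: 'Web Scraping Contractor', hood: 'Queens', compensation: 'DOE' },
];

fs.writeFileSync('listings.csv', toCsv(listings));
```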
Lesson 67 (00:05:25): Handling Network Problems in our Craigslist scraper
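Lesson 67 deals with flaky networks. One common pattern (not necessarily the course's exact approach) is a retry wrapper that pauses between attempts:

```js
// Sketch: retrying a failed request a few times before giving up.
// The retry count and wait time are illustrative defaults.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetries(fn, retries = 3, waitMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      console.error(`Attempt ${attempt} failed: ${err.message}`);
      if (attempt === retries) throw err;
      await delay(waitMs);
    }
  }
}

// Usage: withRetries(() => rp('https://example.com')).then(console.log);
```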
Lesson 68 (00:04:26): What is robots.txt?
Lesson 69 (00:01:49): Initialising the project
Lesson 70 (00:06:54): Example usage of robots-parser
Lesson 71 (00:10:35): Parsing robots.txt from a real site
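Lessons 68-71 check robots.txt before scraping. A sketch using the robots-parser package named in the lesson titles; the site URL and user-agent string are placeholders.

```js
// Sketch: downloading robots.txt and asking robots-parser whether a URL
// may be crawled. The site and user agent are placeholders.
const rp = require('request-promise');
const robotsParser = require('robots-parser');

async function canScrape(url, userAgent = 'my-scraper') {
  const robotsUrl = new URL('/robots.txt', url).href;
  const robotsTxt = await rp(robotsUrl);
  const robots = robotsParser(robotsUrl, robotsTxt);
  return robots.isAllowed(url, userAgent);
}

canScrape('https://example.com/some/page')
  .then((allowed) => console.log(allowed ? 'Allowed' : 'Disallowed'))
  .catch(console.error);
```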
Lesson 72 (00:12:11): Simple Pagination Scraper in 10 mins!
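Lesson 72's pagination scraper loops over page numbers (or "next" links) until nothing is left. A hedged sketch with a placeholder URL pattern and selector:

```js
// Sketch: walking numbered pages until a page returns no results.
// The ?page= query parameter and '.item' selector are placeholders.
const rp = require('request-promise');
const cheerio = require('cheerio');

async function scrapeAllPages(baseUrl) {
  const items = [];
  for (let pageNum = 1; ; pageNum++) {
    const html = await rp(`${baseUrl}?page=${pageNum}`);
    const $ = cheerio.load(html);
    const pageItems = $('.item').map((i, el) => $(el).text().trim()).get();

    if (pageItems.length === 0) break; // no more results -> stop
    items.push(...pageItems);
    await new Promise((resolve) => setTimeout(resolve, 1000)); // be polite
  }
  return items;
}

scrapeAllPages('https://example.com/listings').then(console.log).catch(console.error);
```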
Lesson 73 (00:01:34): Intro to the authentication scraping project
Lesson 74 (00:03:40): Looking at the Login request
Lesson 75 (00:11:44): Recreating the login in Postman
Lesson 76 (00:16:42): Creating our login request in Node.js
Lesson 77 (00:13:58): Using Puppeteer instead of Request
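Lessons 73-77 first rebuild the login request by hand and then switch to Puppeteer; the Puppeteer version is essentially a "type into the form and submit" script like the one below. URL, selectors, and credential variables are placeholders.

```js
// Sketch: logging in with Puppeteer by filling the login form.
// URL, selectors and credentials are placeholders.
const puppeteer = require('puppeteer');

async function login() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://example.com/login', { waitUntil: 'networkidle2' });
  await page.type('#username', process.env.SCRAPER_USER);
  await page.type('#password', process.env.SCRAPER_PASS);
  await Promise.all([
    page.click('button[type="submit"]'),
    page.waitForNavigation({ waitUntil: 'networkidle2' }),
  ]);

  // After login the same page (and its cookies) can fetch protected pages.
  await page.goto('https://example.com/account');
  console.log(await page.title());

  await browser.close();
}

login().catch(console.error);
```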
Lesson 78 (00:01:37): Intro to project
Lesson 79 (00:03:03): Replicating the login request inside Postman - seeing how cookies are required
Lesson 80 (00:04:53): Building out our request inside Node.js and enabling the cookieJar
Lesson 81 (00:10:17): Getting the CSRF token from saved cookies and using it in our POST login request
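Lessons 78-81 rely on a cookie jar plus a CSRF token read back out of the cookies. A hedged sketch with request-promise; the site URL, cookie name, and form field names are assumptions, not the course's actual values.

```js
// Sketch: enabling a cookie jar, pulling a CSRF token out of the cookies,
// and sending it back with the POST login. Site URL, cookie name and
// form field names are placeholders.
const request = require('request-promise');

async function loginWithCsrf() {
  const jar = request.jar();

  // First request: the site sets its cookies (including the CSRF token).
  await request({ uri: 'https://example.com/login', jar });

  // Find the CSRF cookie in the jar (the cookie name is an assumption).
  const csrfCookie = jar
    .getCookies('https://example.com')
    .find((cookie) => cookie.key === 'csrftoken');
  if (!csrfCookie) throw new Error('CSRF cookie not found');

  // Second request: POST the credentials plus the token, same jar.
  return request({
    method: 'POST',
    uri: 'https://example.com/login',
    jar,
    form: {
      username: process.env.SCRAPER_USER,
      password: process.env.SCRAPER_PASS,
      csrfmiddlewaretoken: csrfCookie.value, // field name is an assumption
    },
    headers: { Referer: 'https://example.com/login' },
    followAllRedirects: true,
  });
}

loginWithCsrf().then(() => console.log('Logged in')).catch(console.error);
```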
Lesson 82 (00:02:06): Intro to the Nordstrom.com project
Lesson 83 (00:07:19): Finding the secret API behind Nordstrom.com
Lesson 84 (00:06:50): Making an API request inside Postman
Lesson 85 (00:11:57): Creating a REST API in Node.js with Express
Lesson 86 (00:03:55): Passing Query Parameters to our own REST API
Lesson 87 (00:03:14): Starting a React project with create-react-app
Lesson 88 (00:05:30): Making an API Request inside the React app
Lesson 89 (00:13:11): Something
Lesson 90 (00:08:22): Adding a form to the React app
Lesson 91 (00:04:35): Adding a search query to the form
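Lessons 82-91 wrap the site's hidden JSON endpoint in our own Express API and call it from React. A sketch of the Express half; the upstream API URL, its parameters, and its response shape are assumptions, since the real endpoint is only shown in the videos.

```js
// Sketch: a small Express API that forwards a ?keyword= query parameter
// to an upstream JSON endpoint and returns the result. The upstream URL
// and its response shape are assumptions.
const express = require('express');
const rp = require('request-promise');

const app = express();

app.get('/products', async (req, res) => {
  const keyword = req.query.keyword || '';
  try {
    const data = await rp({
      uri: 'https://example.com/api/search', // placeholder for the hidden API
      qs: { keyword },                        // query string parameters
      json: true,                             // parse JSON automatically
      headers: { 'User-Agent': 'Mozilla/5.0' },
    });
    res.json(data);
  } catch (err) {
    res.status(502).json({ error: err.message });
  }
});

app.listen(3001, () => console.log('API listening on http://localhost:3001'));
```

The React app would then fetch something like http://localhost:3001/products?keyword=shoes from the form's search query and render the results.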
Lesson 92 (00:01:06): Intro to Project
Lesson 93 (00:01:24): Project Setup
Lesson 94 (00:03:06): So What Are We Scraping?
Lesson 95 (00:05:50): Scraping the Top 100 Movie Titles
Lesson 96 (00:04:49): Let's Get Some Good Ratings!
Lesson 97 (00:02:19): Easy Peasy Rank and Description URL
Lesson 98 (00:01:26): CSS Selector for the Poster URL
Lesson 99 (00:07:19): Scraping the Poster URL
Lesson 100 (00:01:39): Why Request Can't Scrape This Page - Why We're Using NightmareJs Now
Lesson 101 (00:02:51): Importing NightmareJs and Getting Our Poster Image CSS Selector
Lesson 102 (00:05:30): Scraping the Poster Image URL with NightmareJs
Lesson 103 (00:04:02): Saving the Poster Image to Disk!
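Lessons 100-103 switch to NightmareJs because the poster image only appears after the page's JavaScript runs. A hedged sketch of that step plus saving the image to disk; the movie URL and the selector are placeholders.

```js
// Sketch: using Nightmare to read the poster image URL after the page's
// JavaScript has run, then downloading the image to disk. The URL and
// the '.poster img' selector are placeholders.
const Nightmare = require('nightmare');
const rp = require('request-promise');
const fs = require('fs');

async function savePoster(movieUrl) {
  const nightmare = Nightmare({ show: false });

  // Let the page render, then pull the src attribute out of the DOM.
  const posterUrl = await nightmare
    .goto(movieUrl)
    .wait('.poster img')
    .evaluate(() => document.querySelector('.poster img').src)
    .end();

  // Download the binary image data and write it to disk.
  const imageBuffer = await rp({ uri: posterUrl, encoding: null });
  fs.writeFileSync('poster.jpg', imageBuffer);
  console.log('Saved', posterUrl);
}

savePoster('https://example.com/title/tt0000000').catch(console.error);
```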
Lesson 104 (00:00:58): Intro to Project
Lesson 105 (00:01:15): Project Setup
Lesson 106 (00:03:06): What are we scraping exactly?
Lesson 107 (00:03:04): Sample Object + Index Offset
Lesson 108 (00:04:12): Looking at the HTML of the Index page
Lesson 109 (00:02:32): Opening the Page with Puppeteer
Lesson 110 (00:06:07): Getting the URLs of the homes from the index page
Lesson 111 (00:02:47): Getting ready to scrape description URLs
Lesson 112 (00:05:01): Opening Homes in a separate page
Lesson 113 (00:08:21): Scraping the Price Per Night
Lesson 114 (00:02:03): Why we are using Regular Expressions now
Lesson 115 (00:07:28): Scraping the number of guests allowed using regular expressions
Lesson 116 (00:11:03): Scraping the beds, bedrooms, and baths
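Lessons 114-116 pull numbers like "4 guests · 3 beds" out of free text with regular expressions. A hedged sketch; the summary string and patterns are illustrative, not the site's actual markup.

```js
// Sketch: extracting numbers from scraped text with regular expressions.
// The summary string is a made-up example of the kind of text scraped
// from a listing page.
const summary = '4 guests · 2 bedrooms · 3 beds · 1.5 baths';

function extractNumber(text, label) {
  // Matches e.g. "3 beds" or "1.5 baths" and captures the number.
  const match = text.match(new RegExp(`([\\d.]+)\\s*${label}`, 'i'));
  return match ? parseFloat(match[1]) : null;
}

const details = {
  guests: extractNumber(summary, 'guests'),
  bedrooms: extractNumber(summary, 'bedrooms'),
  beds: extractNumber(summary, 'beds'),
  baths: extractNumber(summary, 'baths'),
};

console.log(details); // { guests: 4, bedrooms: 2, beds: 3, baths: 1.5 }
```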
Lesson 117 (00:02:16): Intro to this section
Lesson 118 (00:10:10): Timed scraping vs on-demand scraping APIs
Lesson 119 (00:05:24): Build a super simple Reddit scraper in 5 minutes
Lesson 120 (00:04:16): Connecting to a MongoDB database
Lesson 121 (00:03:10): Connecting to the MongoDB database using Mongoose
Lesson 122 (00:05:17): Creating a MongoDB model and saving
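Lessons 117-122 contrast scraping on a timer with scraping on demand. A sketch of the timed variant that stores each run with Mongoose, assuming Reddit's public listing JSON (a .json suffix on a listing URL); the subreddit, interval, schema, and connection string are placeholders.

```js
// Sketch: a timed scraper that runs every few minutes and stores results
// in MongoDB. The subreddit, interval, schema and URI are placeholders.
const rp = require('request-promise');
const mongoose = require('mongoose');

const postSchema = new mongoose.Schema({
  title: String,
  url: String,
  scrapedAt: { type: Date, default: Date.now },
});
const Post = mongoose.model('Post', postSchema);

async function scrapeOnce() {
  const listing = await rp({
    uri: 'https://www.reddit.com/r/javascript.json',
    json: true,
    headers: { 'User-Agent': 'timed-scraper-example' },
  });

  const posts = listing.data.children.map((child) => ({
    title: child.data.title,
    url: child.data.url,
  }));

  await Post.create(posts);
  console.log(`Saved ${posts.length} posts at ${new Date().toISOString()}`);
}

async function main() {
  await mongoose.connect('mongodb://localhost:27017/reddit'); // placeholder URI
  await scrapeOnce();
  setInterval(() => scrapeOnce().catch(console.error), 5 * 60 * 1000); // every 5 minutes
}

main().catch(console.error);
```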
Lesson 123 (00:02:05): Intro
Lesson 124 (00:05:21): Intro to the code
Lesson 125 (00:05:07): Deploying to Heroku
Lesson 126 (00:03:53): Deploying to Google Cloud Platform / Google App Engine
Lesson 127 (00:06:05): Deploying a Puppeteer web scraper to Heroku using buildpacks
Lesson 128 (00:24:52): Introduction to GraphQL + Creating a GraphQL API in 10 minutes
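Lesson 128 builds a quick GraphQL API. A minimal sketch over already-scraped listings, assuming the apollo-server package (the course may use a different GraphQL server); the Listing type and sample data are placeholders.

```js
// Sketch: a minimal GraphQL API exposing scraped listings.
// Assumes apollo-server; the Listing type and data are placeholders.
const { ApolloServer, gql } = require('apollo-server');

const listings = [
  { title: 'Junior JS Developer', hood: 'Brooklyn', compensation: '$30/hr' },
  { title: 'Web Scraping Contractor', hood: 'Queens', compensation: 'DOE' },
];

const typeDefs = gql`
  type Listing {
    title: String
    hood: String
    compensation: String
  }

  type Query {
    listings(hood: String): [Listing]
  }
`;

const resolvers = {
  Query: {
    listings: (parent, args) =>
      args.hood ? listings.filter((l) => l.hood === args.hood) : listings,
  },
};

new ApolloServer({ typeDefs, resolvers })
  .listen()
  .then(({ url }) => console.log(`GraphQL server ready at ${url}`));
```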
Lesson 129 (00:04:15): Intro to scraping infinite scrolling pages
Lesson 130 (00:01:38): Project setup
Lesson 131 (00:08:13): Extracting items function
Lesson 132 (00:11:04): Scrolling and Scraping items
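Lessons 129-132 handle infinite scrolling by scrolling, waiting, and re-extracting until no new items appear. A hedged sketch; the URL and the '.item' selector are placeholders.

```js
// Sketch: scraping an infinite-scroll page with Puppeteer by scrolling
// until the page height stops growing. URL and selector are placeholders.
const puppeteer = require('puppeteer');

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeInfiniteScroll(url, maxScrolls = 10) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });

  let items = [];
  for (let i = 0; i < maxScrolls; i++) {
    // Extract whatever is currently rendered.
    items = await page.evaluate(() =>
      Array.from(document.querySelectorAll('.item')).map((el) => el.innerText.trim())
    );

    const previousHeight = await page.evaluate(() => document.body.scrollHeight);
    await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
    await delay(1500); // give the page time to load the next batch

    const newHeight = await page.evaluate(() => document.body.scrollHeight);
    if (newHeight === previousHeight) break; // nothing more to load
  }

  await browser.close();
  return items;
}

scrapeInfiniteScroll('https://example.com/feed')
  .then((items) => console.log(items.length))
  .catch(console.error);
```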
Lesson 133 (00:06:56): How to get access to Facebook's site without JavaScript
Lesson 134 (00:06:07): How to use Postman to get Facebook's wall
Lesson 135 (00:00:57): Project Setup for the Facebook Scraper
Lesson 136 (00:06:43): Creating our POST request in Node.js
Lesson 137 (00:04:13): Faking our User-Agent and logging in to Facebook
Lesson 138 (00:05:45): Getting our Facebook wall!
Lesson 139 (00:10:00): Request HTML is different from Chrome HTML
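The Facebook lessons (133-139) hinge on two ideas: target the no-JavaScript version of the site and send a browser-like User-Agent with the login POST, keeping cookies in a jar for later requests. A heavily hedged sketch; the login URL and form field names are placeholders, not Facebook's real ones.

```js
// Sketch: posting a login form with a faked User-Agent and a cookie jar,
// then requesting a page that needs the session. The URLs and form field
// names are placeholders, not Facebook's actual ones.
const request = require('request-promise');

const USER_AGENT =
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36';

async function main() {
  const jar = request.jar();

  // Log in through the no-JavaScript version of the site.
  await request({
    method: 'POST',
    uri: 'https://mbasic.example.com/login', // placeholder
    form: { email: process.env.FB_EMAIL, pass: process.env.FB_PASS }, // field names assumed
    headers: { 'User-Agent': USER_AGENT },
    jar,
    followAllRedirects: true,
  });

  // The same jar now carries the session cookies, so the wall can be fetched.
  const wallHtml = await request({
    uri: 'https://mbasic.example.com/me', // placeholder
    headers: { 'User-Agent': USER_AGENT },
    jar,
  });

  // Note: the HTML served to plain requests is simpler than what Chrome
  // renders, so selectors must be built from this HTML, not from DevTools.
  console.log(wallHtml.length);
}

main().catch(console.error);
```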