Scrapy gives us access to two main spider classes: the generic Spider, which we have used many times before in other videos, and the CrawlSpider, which works in a slightly different way. We can give it a rule set and have it follow links automatically, passing the pages we want matched back to our parse function with a callback. This makes scraping a full website incredibly easy. In this video I will explain how to use the CrawlSpider, what Rule and LinkExtractor do and how to use them, and also demo how it works.


Support Me:

# Patreon: https://www.patreon.com/johnwatsonrooney (NEW)
# Amazon UK: https://amzn.to/2OYuMwo
# Hosting: Digital Ocean: https://m.do.co/c/c7c90f161ff6
# Gear Used: https://jhnwr.com/gear/ (NEW)


-------------------------------------
Disclaimer: These are affiliate links and as an Amazon Associate I earn from qualifying purchases
-------------------------------------