
Recursive Use Of Scrapy To Scrape Webpages From A Website

I have recently started to work with Scrapy. I am trying to gather some info from a large list which is divided into several pages (about 50). I can easily extract what I want from

Solution 1:

Use urllib2 to download a page. Then use either re (regular expressions) or BeautifulSoup (an HTML parser) to find the link to the next page you need. Download that page with urllib2 as well. Rinse and repeat.
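The download-parse-repeat loop described above can be sketched with the standard library alone (urllib2 became urllib.request in Python 3). The site URL, the `<a class="next">` link pattern, and the `<li class="item">` markup are all hypothetical placeholders; the fetch step is stubbed with canned HTML so the paging logic itself can run offline — in real use you would replace the stub with `urlopen`:

```python
import re
from urllib.request import urlopen  # Python 3 name for urllib2

# Hypothetical markup: each page links to the next via <a class="next" href="...">
NEXT_LINK = re.compile(r'<a class="next" href="([^"]+)"')
ITEM = re.compile(r'<li class="item">([^<]+)</li>')

def fetch(url):
    # Real version: return urlopen(url).read().decode("utf-8")
    # Stubbed with canned pages here so the loop runs without a network.
    pages = {
        "http://www.site.com/page=1":
            '<li class="item">a</li><a class="next" href="http://www.site.com/page=2">',
        "http://www.site.com/page=2":
            '<li class="item">b</li>',  # last page: no next link
    }
    return pages[url]

def scrape(start_url):
    items, url = [], start_url
    while url:                          # rinse and repeat until no next link
        html = fetch(url)               # download the page
        items.extend(ITEM.findall(html))  # extract what you want
        m = NEXT_LINK.search(html)      # find the link to the next page
        url = m.group(1) if m else None
    return items

print(scrape("http://www.site.com/page=1"))  # ['a', 'b']
```

The same loop works with BeautifulSoup in place of re; the regexes here keep the sketch dependency-free.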

Scrapy is great, but you don't need it to do what you're trying to do.

Solution 2:

Why don't you just add all the links to the 50 pages? Are the URLs of the pages consecutive, like www.site.com/page=1, www.site.com/page=2, or are they all distinct? Can you show me the code that you have now?
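If the URLs really are consecutive, there is no need to follow links at all: the full list can be generated up front and handed to a Scrapy spider as its start_urls. A minimal sketch, assuming the hypothetical www.site.com/page=N pattern from the answer and 50 pages:

```python
# Generate the 50 consecutive page URLs from the pattern in the answer.
start_urls = ["http://www.site.com/page=%d" % n for n in range(1, 51)]

# In a Scrapy spider these would go straight into the class, e.g.:
#
#     class SiteSpider(scrapy.Spider):
#         name = "site"
#         start_urls = ["http://www.site.com/page=%d" % n for n in range(1, 51)]
#
#         def parse(self, response):
#             # extract items from each page here
#             ...
#
# Scrapy then schedules all 50 requests itself; parse() is called once per page.

print(len(start_urls), start_urls[0], start_urls[-1])
```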
