
Is It Possible To Crawl Multiple start_urls Lists Simultaneously?

I have 3 URL files, all with the same structure, so the same spider can be used for every list. A special requirement is that all three need to be crawled simultaneously. Is it possible to do this?

Solution 1:

Use start_requests() instead of start_urls; this will work for you:

import scrapy

class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        # make_requests_from_url is deprecated in current Scrapy,
        # so yield scrapy.Request directly for each page.
        for page in range(1, 20):
            yield scrapy.Request('https://www.example.com/page-%s' % page)
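Since the question is about three URL files, here is a minimal sketch of the same idea that reads the start URLs from files rather than generating them. The filenames (urls1.txt and so on) are hypothetical, and the sketch assumes each file contains one URL per line. Because every request goes into the same scheduler queue, Scrapy interleaves them and crawls all three lists concurrently, up to the CONCURRENT_REQUESTS setting.

import scrapy

class FileListSpider(scrapy.Spider):
    name = 'filelistspider'

    # Hypothetical filenames; each is assumed to hold one URL per line.
    url_files = ['urls1.txt', 'urls2.txt', 'urls3.txt']

    def start_requests(self):
        for path in self.url_files:
            with open(path) as f:
                for line in f:
                    url = line.strip()
                    if url:
                        # Requests from all three files share one
                        # scheduler queue, so they run concurrently.
                        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        # The same parsing logic applies to every list, since the
        # question says the pages share the same structure.
        self.logger.info('Crawled %s', response.url)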
