Scrapy is a nice Python framework for web scraping, i.e. for extracting information from websites automatically by crawling them. It works best for anonymous data discovery, but nothing stops you from running authenticated sessions as well. In fact, Scrapy transparently manages cookies, which are typically what tracks a user session. Unfortunately, those sessions don't survive between runs. This can be fixed quite easily by adding a custom cookie middleware. Here is an example:

from __future__ import absolute_import

import logging
import os.path
import pickle

from scrapy.downloadermiddlewares.cookies import CookiesMiddleware

import settings

class PersistentCookiesMiddleware(CookiesMiddleware):
    """Cookie middleware that pickles the cookie jars to disk so that
    sessions survive between runs."""

    def __init__(self, debug=False):
        super(PersistentCookiesMiddleware, self).__init__(debug)
        self.load()

    def process_response(self, request, response, spider):
        # Let the parent middleware update the cookie jars, then persist them.
        # TODO: optimize so that we don't do it on every response
        res = super(PersistentCookiesMiddleware, self).process_response(request, response, spider)
        self.save()
        return res

    def getPersistenceFile(self):
        return settings.COOKIES_STORAGE_FILE

    def save(self):
        logging.debug("Saving cookies to disk for reuse")
        with open(self.getPersistenceFile(), "wb") as f:
            # self.jars is the mapping of CookieJar objects maintained by the
            # parent CookiesMiddleware; pickling it captures all open sessions.
            pickle.dump(self.jars, f)

    def load(self):
        filename = self.getPersistenceFile()
        logging.debug("Trying to load cookies from file '{0}'".format(filename))
        if not os.path.exists(filename):
            logging.info("File '{0}' for cookie reload doesn't exist".format(filename))
            return
        if not os.path.isfile(filename):
            raise Exception("File '{0}' is not a regular file".format(filename))

        with open(filename, "rb") as f:
            self.jars = pickle.load(f)

Then configure your spider to use the new middleware in settings.py:

DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in cookie middleware (default priority 700) and
    # register the persistent one in roughly the same slot.
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    'middlewares.cookies.PersistentCookiesMiddleware': 701,
}
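
The middleware reads its storage location from a COOKIES_STORAGE_FILE setting, so define one in settings.py as well. The path below is just an example; any writable location works:

COOKIES_STORAGE_FILE = 'cookies.pkl'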
Tags: computers
Categories: None

4 comments have been posted.

    Feb. 18, 2016, 12:59 a.m. - markos
    Hi, why did you add this line? # TODO: optimize so that we don't do it on every response
    Feb. 18, 2016, 7:43 p.m. - Andre
    I thought it was obvious - right now the persistence file is written to on each call. This might not be very efficient, especially if the cookies don't change often. One way to deal with it would be to keep a cache of what was last written and write only if the value is new. This is left as an exercise for the reader :) (see the sketch below)
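
    For illustration, here is a minimal sketch of that optimization (not part of the original post): remember the last pickled bytes in a _last_saved attribute (a made-up name for this example) and skip the disk write when nothing changed. The two methods below would replace their counterparts in PersistentCookiesMiddleware:

        def __init__(self, debug=False):
            super(PersistentCookiesMiddleware, self).__init__(debug)
            self._last_saved = None  # pickled bytes from the previous save()
            self.load()

        def save(self):
            # Serialize first; only touch the disk if the cookies actually changed.
            data = pickle.dumps(self.jars)
            if data == self._last_saved:
                return
            logging.debug("Saving cookies to disk for reuse")
            with open(self.getPersistenceFile(), "wb") as f:
                f.write(data)
            self._last_saved = data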
    Nov. 24, 2015, 5:46 a.m. - Alex Wang
    Suggest putting the following link at the end of this thread: http://stackoverflow.com/questions/20748475/how-to-add-custom-spider-download-middlewares-to-scrapy
    Nov. 30, 2015, 4:55 a.m. - Andre
    Your comment should be good enough to preserve the link :)