Scrapy is a nice Python framework for web scraping, i.e. extracting information from web sites automatically by crawling them. It works best for anonymous data discovery, but nothing stops you from using authenticated sessions as well. In fact, Scrapy transparently manages cookies, which are usually used to track user sessions. Unfortunately, the sessions don't survive between runs. This, however, can be fixed quite easily by adding a custom cookie middleware. Here is an example:

from __future__ import absolute_import

import os
import os.path
import logging
import pickle

from scrapy.downloadermiddlewares.cookies import CookiesMiddleware

import settings as settings


class PersistentCookiesMiddleware(CookiesMiddleware):

    def __init__(self, debug=False):
        super(PersistentCookiesMiddleware, self).__init__(debug)
        # Restore the cookie jars saved by a previous run, if any.
        self.load()

    def process_response(self, request, response, spider):
        # TODO: optimize so that we don't do it on every response
        res = super(PersistentCookiesMiddleware, self).process_response(request, response, spider)
        self.save()
        return res

    def getPersistenceFile(self):
        # Where the pickled cookie jars live; the setting name is up to your project.
        return settings.COOKIES_PERSISTENCE_FILE

    def save(self):
        logging.debug("Saving cookies to disk for reuse")
        with open(self.getPersistenceFile(), "wb") as f:
            pickle.dump(self.jars, f)
            f.flush()

    def load(self):
        logging.debug("Loading cookies saved by a previous run")
        filename = self.getPersistenceFile()
        if not os.path.exists(filename):
            return
        if not os.path.isfile(filename):
            raise Exception("File '{0}' is not a regular file".format(filename))

        with open(filename, "rb") as f:
            self.jars = pickle.load(f)

Then configure your project to use the new middleware in settings.py, replacing the stock cookies middleware:

DOWNLOADER_MIDDLEWARES = {
    # Disable the built-in cookies middleware and plug ours in at the same priority.
    # Adjust the module path to wherever the class lives in your project.
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    'myproject.middlewares.PersistentCookiesMiddleware': 700,
}
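If you keep the file location in the settings module, as the getPersistenceFile helper above assumes, add the corresponding entry as well. The name COOKIES_PERSISTENCE_FILE is just the one assumed in this example:

# Where the pickled cookie jars are stored between runs.
COOKIES_PERSISTENCE_FILE = "cookies.pickle"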

###### Feb. 18, 2016, 12:59 a.m. - markos
Hi, why did you add this line? # TODO: optimize so that we don't do it on every response
###### Feb. 18, 2016, 7:43 p.m. - Andre
I thought it was obvious - right now the persistence file is written on every call. This might not be very efficient, especially if the cookies don't change often. One way to deal with it would be to keep a cache of what was last written and only write when the value has changed. This is left as an exercise for the reader :)
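For what it's worth, here is a minimal sketch of that idea (not part of the original post): serialize the jars first and skip the disk write when nothing has changed. The _last_saved attribute name is purely illustrative.

    def save(self):
        # Serialize first, then write only if the payload differs from the last write.
        data = pickle.dumps(self.jars)
        if getattr(self, "_last_saved", None) == data:
            return
        logging.debug("Saving cookies to disk for reuse")
        with open(self.getPersistenceFile(), "wb") as f:
            f.write(data)
        self._last_saved = data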