Beagle is software used to track changes to web resources. It reads site URLs from a MongoDB database and runs a scraper (called beagleboy) to check whether the sites have changed. It also checks resources linked to by each site (in case content is being served with an iframe, swf file etc.).
The recommended way to install the software is inside a virtual environment. The following assumes you have virtualenv and git installed:
> git clone https://github.com/tryggvib/beagle.git
> cd beagle
> virtualenv venv
> source venv/bin/activate
> pip install -r requirements.txt
The scraper is Python software built on Scrapy and is used like any other Scrapy scraper.
This assumes you're in the beagle directory (from step 2 of the installation). If you haven't activated the virtual environment (assuming you called it venv), start by activating it:
> source venv/bin/activate
To run beagleboy you first have to put your email server settings into beagleboy/beagleboy/settings.py. After that, running it is always the same:
> cd beagleboy
> scrapy crawl webresources
Beagleboy fetches the sites from a users collection in the MongoDB database (the database name defaults to beagle). A users collection document has the following structure:
{
    _id: <email address of user, e.g. [email protected]>,
    name: <name of user, e.g. Bigtime Beagle>,
    sites: [
        {
            url: <url of a budget page to be scraped>,
            last_modified: <date when change was last seen>
        },
    ]
}
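As a sketch, a document with this structure could be built and stored from Python with pymongo (assuming a MongoDB server is running locally; the email, name and URL values below are examples, not real data, and the helper name make_user is mine):

```python
# Sketch: building a users-collection document with the structure shown
# above. The database ("beagle") and collection ("users") names follow
# the text; the example values are made up.
def make_user(email, name, urls):
    """Build a user document; last_modified is filled in by beagleboy."""
    return {
        "_id": email,
        "name": name,
        "sites": [{"url": url} for url in urls],
    }

doc = make_user("scrooge@mcduck.com", "Bigtime Beagle",
                ["http://scrooge.mcduck.com"])

# To actually store it (requires pymongo and a running MongoDB):
# from pymongo import MongoClient
# MongoClient().beagle.users.insert_one(doc)
```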
So to add a page that should be scraped, one only needs to push a document like:
{
    url: 'http://scrooge.mcduck.com'
}
to a specific user's sites array. Beagleboy will pick this up and notify that particular user when a change to the URL is noticed.
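As a sketch, the push could be done from Python with pymongo's $push operator (the helper names here are mine for illustration, and the email and URL are examples):

```python
# Sketch: pushing a new site onto a user's sites array with $push.
# The helper names make_site/add_site are just for illustration.
def make_site(url):
    """Build the minimal site sub-document that Beagleboy picks up."""
    return {"url": url}

def add_site(users, email, url):
    """Append a site to the given user's sites array."""
    users.update_one({"_id": email}, {"$push": {"sites": make_site(url)}})

# Usage (requires pymongo and a running MongoDB):
# from pymongo import MongoClient
# users = MongoClient().beagle.users
# add_site(users, "scrooge@mcduck.com", "http://scrooge.mcduck.com")
```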
Since Beagleboy is built using Scrapy, it can use scrapyd to schedule scraping jobs through a JSON web API.
Please read the documentation on scrapyd, but it's really easy: you install it, and it exposes a web service where you can schedule scraping via a curl request. This would be the curl request for beagleboy:
> curl http://localhost:6800/schedule.json -d project=beagleboy -d spider=webresources
You can expose the scrapyd web server if you want, but then you should definitely put some authentication in front of it.
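The same request can also be made from Python using only the standard library; this is just a sketch, assuming scrapyd is running on localhost:6800 as above (the helper names are mine):

```python
# Sketch: scheduling the beagleboy spider through scrapyd's schedule.json
# endpoint, equivalent to the curl request above.
from urllib.parse import urlencode
from urllib.request import urlopen

def build_payload(project, spider):
    """Form-encode the POST body scrapyd expects."""
    return urlencode({"project": project, "spider": spider}).encode()

def schedule(project="beagleboy", spider="webresources",
             host="http://localhost:6800"):
    """POST to scrapyd's schedule.json and return the raw JSON reply."""
    with urlopen(host + "/schedule.json",
                 data=build_payload(project, spider)) as resp:
        return resp.read()
```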
Translating the messages in beagleboy involves four steps:
- Extract messages
- Initialise or update translations files
- Translate
- Compile translations
The process assumes you're in the beagle directory as described in step 2 of the installation.
Even though all messages are stored in beagleboy/messages.py, pybabel works on directories, so to extract the messages run the following command:
> pybabel extract -F babel.cfg -o locale/beagleboy.pot .
If you want to create a new language to translate messages into, you need to initialise it with the following command (where the language code is something like is_IS):
> pybabel init -D beagle -i locale/beagleboy.pot -d locale/ -l <language code>
However, if you're updating an existing translation, you don't have to initialise the language; just update it with the following command (again, where the language code is something like en_GB):
> pybabel update -D beagle -i locale/beagleboy.pot -d locale/ -l <language code>
Translate with your favourite po file editor, e.g. poedit. The project could also be uploaded to Transifex with little effort (not supported at the moment). The po file to be translated will be available in locale/<language code>/LC_MESSAGES/beagle.po
To compile the translations (and thus make them available to the software), one just runs the following command:
> pybabel compile -D beagle -d locale/
This compiles all of the translations in one go and everybody is happy.