
chef-valhalla's Introduction

 ██▒   █▓ ▄▄▄       ██▓     ██░ ██  ▄▄▄       ██▓     ██▓    ▄▄▄      
▓██░   █▒▒████▄    ▓██▒    ▓██░ ██▒▒████▄    ▓██▒    ▓██▒   ▒████▄    
 ▓██  █▒░▒██  ▀█▄  ▒██░    ▒██▀▀██░▒██  ▀█▄  ▒██░    ▒██░   ▒██  ▀█▄  
  ▒██ █░░░██▄▄▄▄██ ▒██░    ░▓█ ░██ ░██▄▄▄▄██ ▒██░    ▒██░   ░██▄▄▄▄██ 
   ▒▀█░   ▓█   ▓██▒░██████▒░▓█▒░██▓ ▓█   ▓██▒░██████▒░██████▒▓█   ▓██▒
   ░ ▐░   ▒▒   ▓▒█░░ ▒░▓  ░ ▒ ░░▒░▒ ▒▒   ▓▒█░░ ▒░▓  ░░ ▒░▓  ░▒▒   ▓▒█░
   ░ ░░    ▒   ▒▒ ░░ ░ ▒  ░ ▒ ░▒░ ░  ▒   ▒▒ ░░ ░ ▒  ░░ ░ ▒  ░ ▒   ▒▒ ░
     ░░    ░   ▒     ░ ░    ░  ░░ ░  ░   ▒     ░ ░     ░ ░    ░   ▒   
      ░        ░  ░    ░  ░ ░  ░  ░      ░  ░    ░  ░    ░  ░     ░  ░
     ░                                                                    

Valhalla is an open source routing engine and accompanying libraries for use with OpenStreetMap and other open data sets. The chef-valhalla repository, as its name suggests, is a Chef cookbook. The cookbook demonstrates how to deploy the Valhalla stack to a virtual machine (a sample Vagrantfile is included). Upon completion, the virtual machine will have cut a set of routable graph tiles and started a server to handle route requests against that tile set. We hope this can serve as a primer on how one might deploy Valhalla in one's own routing cluster.

Deployment Types

Vagrant allows you to run a virtual machine as if it were a node in a cluster of machines, perhaps in the cloud. You can exercise several types of deployment simply by making use of different combinations of recipes within the cookbook. Currently we have the following deployment types:

  • Global Tile Cutter
  • Routing Service (route/matrix/locate requests)
  • Elevation Service (height requests)
  • Extract Routing Service (like routing above, but on a static OSM extract)

TODO: describe all of the recipes and the combinations needed to make the deployment types above; this info is lying around in a Google doc somewhere.

chef-valhalla's People

Contributors

acwilton, baldur, dgearhart, dnesbitt61, gknisely, heffergm, kdiluca, kevinkreiser, kopkins, stevendlander


chef-valhalla's Issues

deploy via packages in ppa

It would be nice, instead of installing Valhalla by building from source, to create a PPA and get versioned Valhalla packages from it.

alias for latest s3 data

Right now our s3 bucket is full of files with dates in their names. That's fine on one hand, as long as your tool can list them and pick the newest or oldest or whatever, but it would be nice to have a redirect in place that points at the newest one. Apparently s3 lets you put metadata on an object that redirects to another place in the bucket. To make this happen we'd need to update push_tiles.py to reset that metadata on the latest version of each thing we push.

This leads me to wonder about a better way: maybe we should start sticking this stuff in a folder with the date on it, and then we just need a latest folder that redirects to the dated one. Not sure if the redirect is more like an alias, though, so that different parts of the path are just synonyms for something else.
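A minimal sketch of the redirect idea, under the assumption that object names carry ISO dates so a plain sort orders them oldest to newest; `BUCKET` and the key layout are placeholders, not the real push_tiles.py names:

```shell
# Pick the newest dated key from a list on stdin (assumes sortable date names).
latest_key() {
  sort | tail -n 1
}

# Hypothetical wiring with awscli: point a stable alias object at the newest
# upload via S3's website-redirect metadata.
# newest=$(aws s3 ls "s3://$BUCKET/" | awk '{print $4}' | latest_key)
# aws s3api put-object --bucket "$BUCKET" --key latest/tiles.tar \
#   --website-redirect-location "/$newest"
```

Note that `x-amz-website-redirect-location` only takes effect when the object is fetched through the bucket's website endpoint, which may or may not fit how consumers currently download tiles.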

Faster Updates

We should change the cut tiles recipe to install two crons: one that runs cut_tiles.sh and another that runs minutely_update.sh. We should then change cut_tiles.sh so that it tries to catch minutely_update.sh while it's not running, dropping its lock file in place in order to grab a copy of the data to cut tiles from. This way the diff applications should be much smaller and happen faster, and we can cut tiles and apply diffs at the same time.

Another thing we can do is push to s3 in the background; waiting for it to complete doesn't have much value.
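The lock handoff described above could be sketched with flock(1); the lock path and the commands under it are assumptions, not the cookbook's actual script contents:

```shell
# Shared lock between the two crons (path is a placeholder).
LOCK="${LOCK:-/tmp/minutely_update.lock}"

# What minutely_update.sh would do: hold the lock while applying diffs.
apply_diffs() {
  flock "$LOCK" sh -c 'echo "applying diffs"'   # placeholder for osmosis work
}

# What cut_tiles.sh would do: take the same lock only long enough to snapshot
# the extract, so tile cutting and diff application can overlap afterwards.
snapshot_extract() {
  flock "$LOCK" cp "$1" "$2"
}
```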

Meili - Map Matching

There is currently no mechanism to start up a map matching service. We need to add one here. It might be best to follow the pattern that Skadi uses, since it's a similar single-part service.

Loki Makefile missing a parameter.

In the Makefile you have to find this line and change it from:

loki_benchmark_LDADD = $(DEPS_LIBS) $(VALHALLA_LDFLAGS) -L/usr/lib/x86_64-linux-gnu -lboost_program_options -lboost_filesystem -lboost_system libvalhalla_loki.la

to:

loki_benchmark_LDADD = $(DEPS_LIBS) $(VALHALLA_LDFLAGS) -L/usr/lib/x86_64-linux-gnu -lpthread -lboost_program_options -lboost_filesystem -lboost_system libvalhalla_loki.la

(i.e. add -lpthread).

max_cache_size computation fails for elevation

When using the equation for computing max_cache_size, the data producers fail in the elevation step. Do we need a separate variable to control cache size during separate steps? Experiment with what works for data import.

locales

Install locales to a directory under valhalla and set LOCPATH to point to it. Do the same for Docker.
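A minimal sketch, assuming localedef is available; the directory is a placeholder for wherever the cookbook installs valhalla:

```shell
# Compile a locale into a private directory instead of the system default.
LOCALE_DIR="${LOCALE_DIR:-$PWD/locales}"
mkdir -p "$LOCALE_DIR"
if command -v localedef >/dev/null 2>&1; then
  localedef -i en_US -f UTF-8 "$LOCALE_DIR/en_US.UTF-8" || true
fi
# glibc loads compiled locales from LOCPATH instead of the default location.
export LOCPATH="$LOCALE_DIR"
```

In a Dockerfile the same two steps would become a RUN and an ENV line.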

use planet.tar memory mapped

Make sure to let others know of this change and carefully coordinate the rollout:

  • update cut_tiles.sh to tar the tiles properly and to upload the tar to s3 instead of the tgz
  • use awscli instead of custom python to do the uploading, but old uploads will have to be culled
  • update the get_routing_tiles.rb recipe to not unpack the tiles and to use awscli to download from s3
  • to deploy this we'll need to disable and stop the tile cutting cron, update the cookbooks, and re-enable it
  • add edge dumping to the end of tile cutting and upload the result to s3; it only takes a few minutes and is useful to others
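A sketch of the upload half (function and bucket names are assumptions): plain tar, no gzip, so the downloaded file can be memory mapped without unpacking.

```shell
# Pack a tile directory into an uncompressed tar suitable for mmap'ing.
pack_tiles() {   # pack_tiles <tile_dir> <out_tar>
  tar -cf "$2" -C "$1" .
}

# Hypothetical wiring with awscli:
# pack_tiles /data/valhalla tiles.tar
# aws s3 cp tiles.tar "s3://$BUCKET/tiles_$(date +%Y_%m_%d).tar"
# and on the download side (get_routing_tiles.rb), fetch and use it directly:
# aws s3 cp "s3://$BUCKET/latest/tiles.tar" /data/valhalla/tiles.tar
```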

Building and Running Bash File

In this bash file there is an error when running pbfgraphbuilder: the PBF files are downloaded to the parent directory, not into mjolnir. A small change fixes it: move the "cd mjolnir" to the line before the wget. (The chown should also be $(whoami), not the literal word.)

cd mjolnir
wget http://download.geofabrik.de/europe/switzerland-latest.osm.pbf http://download.geofabrik.de/europe/liechtenstein-latest.osm.pbf
sudo mkdir /data
sudo chown $(whoami) /data
pbfgraphbuilder -c conf/valhalla.json switzerland-latest.osm.pbf liechtenstein-latest.osm.pbf
cd ..

run stats generation

Stats generation has been moved out of tile cutting, so we'll need to run it one-off like connectivity. This is nice because we don't have to do the kludge of finding and moving it with a date.

push tiles

Send an email or trigger a PagerDuty alert when push tiles fails.
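A hypothetical helper for the PagerDuty side: the Events API v2 endpoint is real, but the routing key, summary text, and where this hooks into push tiles are placeholders.

```shell
# Build the PagerDuty Events v2 trigger payload for a push failure.
pd_payload() {   # pd_payload <routing_key> <summary>
  printf '{"routing_key":"%s","event_action":"trigger","payload":{"summary":"%s","source":"push_tiles","severity":"error"}}' "$1" "$2"
}

# In the push script, after a failed upload (sketch):
# curl -s -X POST https://events.pagerduty.com/v2/enqueue \
#   -H 'Content-Type: application/json' \
#   -d "$(pd_payload "$ROUTING_KEY" 'tile push to s3 failed')"
```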

s3 bucket overflow is broken

Looks like the code that checks the contents of the bucket no longer trims objects out of it once it hits the configured limit. Find and fix this.
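A sketch of the trimming logic, under the assumption that dated keys sort chronologically; `BUCKET` and `LIMIT` are placeholders:

```shell
# Given keys on stdin, print everything older than the newest $1 entries,
# i.e. the candidates for deletion.
overflow_keys() {
  sort -r | tail -n "+$(( $1 + 1 ))"
}

# Hypothetical wiring with awscli:
# aws s3 ls "s3://$BUCKET/" | awk '{print $4}' | overflow_keys "$LIMIT" |
#   while read -r key; do aws s3 rm "s3://$BUCKET/$key"; done
```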

OpenStreetMap import will break if links aren't updated before May 7

http://planet.openstreetmap.org is available over https, and will start redirecting to https on 2018-05-07.

Your project seems to be using osmosis, which will not follow redirects by default.

Would it please be possible to update all files from http://planet.openstreetmap.org to https://planet.openstreetmap.org before then?

Ideally, would it be possible to make all requests to openstreetmap.org over https? All services respond over https and will start redirecting soon.
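The one-off sweep this asks for could look like the following; the sed expression is the whole fix, and the file list comes from grep:

```shell
# Rewrite plain-http planet links to https in the given files.
to_https() {
  sed -i 's|http://planet\.openstreetmap\.org|https://planet.openstreetmap.org|g' "$@"
}

# Sweep the whole cookbook (sketch):
# grep -rl 'http://planet.openstreetmap.org' . | while read -r f; do to_https "$f"; done
```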

Vagrant up not working

Hello, I'm trying to spin up a Valhalla instance locally to use for development. I expected that I could do a vagrant up in the repo and it would provision for me. However, I've run into the following issues:

  1. had to change Berksfile source to https://supermarket.chef.io
  2. getting undefined method `[]' for nil:NilClass
    at this line

I haven't been able to get past the second issue, so I'm now trying the bash script install method. But it would be great if vagrant up could do it all automagically! :)

add acceptance tests after tiles are cut

After cut_tiles.sh finishes with the valhalla_build_tiles bit we should be ready to do some testing to see whether the tiles are any good. Let's make a new script that runs the RAD testing scripts and checks their output, both for successful route computation and for particular items, maybe strings to grep for? We'll need a route test file, some logic to replace the date-times with the next Tuesday from the time we run it, and a file of expected results so we have something to grep for.

If this new script detects something wrong it should return a non-zero exit code. That will make cut_tiles skip shipping these tiles to s3, and all the rest of the stuff it does.

As a temporary measure we should also add a key to attributes.rb containing an email address to notify when the data producer acceptance tests fail. Default it to no value, and have the acceptance test script send the failure email before returning non-zero.
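The grep-for-expected-strings part could be sketched like this; the file names are assumptions, not existing scripts:

```shell
# Check the RAD test output for every expected pattern; return non-zero when
# one is missing so cut_tiles skips the push to s3.
run_acceptance() {   # run_acceptance <test_output> <expected_patterns_file>
  while read -r pattern; do
    [ -n "$pattern" ] || continue
    grep -q "$pattern" "$1" || { echo "missing: $pattern" >&2; return 1; }
  done < "$2"
}

# Hypothetical usage at the end of cut_tiles.sh:
# run_acceptance rad_results.txt expected_results.txt || exit 1
```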

update transit fetching cron to kill cut_tiles

When we get new transit data we want to make use of it ASAP. To do so, all we need to do is gracefully kill the cut_tiles.sh script and the valhalla_build_tiles program: a simple pgrep to get the pids, followed by a kill to the script. If the script is dead, a simple kill to the program should suffice. If the script didn't die, kill it with -9, remove the lock file, and continue on to the program. If the program didn't die, use -9 on that too.
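A sketch of the graceful-then-forceful stop; the process names and lock path are the ones mentioned above, but the helper itself is hypothetical:

```shell
# TERM a process by exact name first; escalate to KILL only if it survives.
stop_proc() {   # stop_proc <exact process name>
  pkill -x "$1" 2>/dev/null && sleep 2
  pgrep -x "$1" >/dev/null 2>&1 && pkill -9 -x "$1"
  return 0
}

# Hypothetical wiring in the transit fetch cron (lock path assumed):
# stop_proc cut_tiles.sh && rm -f /data/cut_tiles.lock
# stop_proc valhalla_build_tiles
```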

setup ran with cron causes dragons

Setup downloads some data and then installs crons to continually update the data and make new tiles. The problem is that setup is not idempotent: crons and downloads will compete, because the download uses an md5sum (which changes as diffs are applied) to decide whether or not to re-download the extract. We need to get around this by never re-downloading when running under cron. This could be remedied by dropping a new md5sum after each diff application, or by not using the md5sum to decide whether to download the extract at all.
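The second remedy (skipping the md5sum check entirely when an extract is already on disk) could look like this; the function name is an assumption:

```shell
# Only download the extract when it is not already present; once crons are
# applying diffs, the on-disk copy is newer than any re-download would be.
fetch_extract() {   # fetch_extract <url> <dest>
  [ -s "$2" ] && return 0   # present (and possibly diff-updated): keep it
  curl -fSL -o "$2" "$1"
}
```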

Dockerfile for Valhalla

Hello, I wanted to play around with Valhalla, and ended up porting the bash script to a Dockerfile. It assumes you have each of the Valhalla components in subdirectories below the Dockerfile. You probably want to change the OSM export files for your own testing.

I'm not expecting this to replace the chef repo, but it might be useful for someone, so I thought I'd drop it in here. If it's useful to put this into a separate repo, let me know.

https://gist.github.com/tomtaylor/b84e05a3aaeb337e4e75

status server

We should have a little python status server deployed alongside the data producer. It can be made to monitor what is going on with the machine at any given time, things like:

  • what stage valhalla_build_tiles is on
  • how long it has been in that stage
  • how long ago the last push of tiles to s3 was
  • whether the batch of tests succeeded or failed

Basically all the things. After that we can hook up monitoring to alert us through the proper channels when things go wrong.
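A sketch of the producer side (paths and key names are assumptions): drop timestamped breadcrumb files that a small status server could later read and serve.

```shell
# Record one status fact per file, prefixed with a UTC timestamp.
STATUS_DIR="${STATUS_DIR:-/tmp/valhalla_status}"
status() {   # status <key> <value>
  mkdir -p "$STATUS_DIR"
  printf '%s %s\n' "$(date -u +%FT%TZ)" "$2" > "$STATUS_DIR/$1"
}

# Hypothetical call sites sprinkled through the producer scripts:
# status stage 'valhalla_build_tiles: enhancing'
# status last_push "$(date -u +%FT%TZ)"
# status tests 'passed'
```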

recipe descriptions in readme

We have a bunch of documentation about what all the recipes do and how you can use them to deploy different types of nodes in a given cluster. We need to copy this info into the readme and show some examples of how it can be used in Vagrant etc.
