kinshukdua / LiveActionMap
An attempt to map the areas with active conflict in Ukraine using Twitter data and NLP.
Home Page: https://www.live-action-map.com
License: MIT License
I was trying to get a domain on Freenom, but it wouldn't let me register. For the time being I'll deploy it on my custom domain, but if anyone has a spare domain lying around, or can register a free "liveactionmap" domain on Freenom, that would be great.
How about I make a quick mockup with Vue.js that just reads an API? That way, once the project has evolved enough, it would only take hooking the two together.
Originally posted by @Krishna-Sivakumar in #24 (comment)
Using Celery sounds like a better option here.
Since we're no longer dependent on the cron job, we need to remove the tweets in the Python code itself. However, deleting the file after each run, as done in an earlier PR, is not useful because there are too few data points; we could try to increase the number of results we get from Twitter, but that's a separate issue.
Also, let's make sure the schedule times for scraping and removing tweets don't overlap (i.e. one interval is not a multiple of the other); otherwise the tweets file might be deleted while the bot is running, producing an empty map.
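A minimal sketch of what the non-overlapping schedule could look like. The task names (`tasks.scrape_tweets`, `tasks.remove_old_tweets`) and the intervals are assumptions, not the project's actual names; the dict follows the shape of Celery's `beat_schedule` setting:

```python
from math import gcd  # only used to reason about interval alignment

SCRAPE_EVERY = 600     # seconds between scrape runs (assumed interval)
CLEANUP_EVERY = 1500   # seconds between tweet-file cleanups (assumed interval)

# Neither interval divides the other, so the cleanup task does not
# systematically fire at the same instant a scrape run starts.
assert SCRAPE_EVERY % CLEANUP_EVERY != 0 and CLEANUP_EVERY % SCRAPE_EVERY != 0

# Shape follows Celery's app.conf.beat_schedule; hypothetical task names.
beat_schedule = {
    "scrape-tweets": {"task": "tasks.scrape_tweets", "schedule": SCRAPE_EVERY},
    "remove-old-tweets": {"task": "tasks.remove_old_tweets", "schedule": CLEANUP_EVERY},
}
```

Note the intervals still coincide at their least common multiple (here every `SCRAPE_EVERY * CLEANUP_EVERY // gcd(SCRAPE_EVERY, CLEANUP_EVERY)` seconds), so a lock or an offset would make this fully safe.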
Create a GET API serving /api/zones, providing a response containing all currently available zones.
Expected response
[
  {
    "positions": [
      [50.36429316995319, 30.228621662109],
      [50.51303377189189, 30.28741424805162],
      [50.60122461757218, 30.787151228563896],
      [50.44548234821721, 30.77081995469095]
    ],
    "color": "green",
    "title": "Safe Zone",
    "content": "Some safe zone over here"
  }
]
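A stdlib-only sketch of what serving this endpoint could look like (the real backend would presumably use Flask/FastAPI and read zones from the scraping pipeline; the `ZONES` data below just echoes the expected response above):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in data matching the expected response; real zones would come
# from the scraping pipeline, not a hard-coded list.
ZONES = [
    {
        "positions": [
            [50.36429316995319, 30.228621662109],
            [50.51303377189189, 30.28741424805162],
            [50.60122461757218, 30.787151228563896],
            [50.44548234821721, 30.77081995469095],
        ],
        "color": "green",
        "title": "Safe Zone",
        "content": "Some safe zone over here",
    }
]

class ZonesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/zones":
            body = json.dumps(ZONES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep request logging quiet

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ZonesHandler).serve_forever()
```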
I think it would be good to add the time of the tweet in the popup.
I don't really know a lot about Python, but we should dockerize the app for easier future scalability.
Moving to a dynamic backend should be its own issue.
Originally posted by @Krishna-Sivakumar in #24 (comment)
This is so people can access the site using Tor, which might make some people feel safer.
This would need a separate frontend that doesn't contain any tracking and has no links to anything but our service.
Should we block Russia from using the applet?
This might actually be helpful in order not to leak possible troop information.
Add selective plane tracking to the map.
Useful API:
https://opensky-network.org/
Create a GET API serving /api/markers, providing a response containing all currently available markers.
Expected response
{
  "markers": [
    {
      "position": [49.038230248475905, 31.450182690663947],
      "title": "Tweet title with image",
      "content": "",
      "user": "Twitter User",
      "uri": "https://twitter/postUrl",
      "image": "imageURL",
      "timestamp": "unix timestamp of tweet"
    },
    {
      "position": [48.038230248475905, 31.450182690663947],
      "title": "Tweet title without image",
      "content": "Tweet text",
      "user": "Twitter User",
      "uri": "https://twitter/postUrl",
      "timestamp": "unix timestamp of tweet"
    }
  ],
  "timestamp": "unix timestamp"
}
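A sketch of how a scraped tweet could be turned into one of these marker objects. The input field names (`title`, `text`, `user`, `url`, `image`, `timestamp`) are assumptions about the scraper's output, not its actual schema; note the `image` key is only emitted when the tweet has media, matching the two variants above:

```python
import time

def tweet_to_marker(tweet, position):
    """Build one /api/markers entry from a scraped tweet dict.

    Field names on the input tweet are hypothetical; only the output
    shape follows the expected response above.
    """
    marker = {
        "position": list(position),
        "title": tweet.get("title", ""),
        "content": tweet.get("text", ""),
        "user": tweet.get("user", ""),
        "uri": tweet.get("url", ""),
        "timestamp": tweet.get("timestamp", int(time.time())),
    }
    if tweet.get("image"):  # "image" appears only for tweets with media
        marker["image"] = tweet["image"]
    return marker

def markers_response(markers):
    """Wrap the marker list with the top-level response timestamp."""
    return {"markers": markers, "timestamp": int(time.time())}
```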
As the project grows, we need a way to make sure it is easy to test incoming PRs instead of me manually pulling and testing everything. The best way would be to set up a CI/CD pipeline using GitHub Actions. At the moment we don't have any tests except the one introduced in #21. Ideally I would like to set up tests first and then a CI/CD pipeline. However, the server running the website right now uses cron jobs manually, so I would rather wait for #16 to finish and then create a pipeline with the Docker image.
Meanwhile, feel free to add tests and help me write a CI pipeline.
I suggest adding external collaborators who are able to review PRs.
As getting the latitude and longitude of the found location seems to be the most time-consuming operation, it would probably make sense to cache the responses of the Nominatim API.
Self-hosting a Nominatim service will help us cut down on time wasted doing API calls.
Map building takes the most time, and most of that time is spent doing two API calls to a public Nominatim service per tweet, which adds a lot of latency.
Since we already have a docker build, it doesn't hurt to add one or two more containers to our deployment.
Links to refer:
https://github.com/mediagis/nominatim-docker
https://nominatim.org/release-docs/latest/admin/Installation/
This will help us shift to a dynamic backend in the long run.
As discussed in #16, the current storage of scraped tweets is not optimal, because newly scraped tweets are simply appended to the existing tweets.txt file, creating a lot of duplicates.
Integrating a database is probably not necessary at this point; we could store the scraped tweets with their IDs in a JSON file and only add new ones on each run of the application.
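A sketch of that ID-keyed JSON store, assuming a `tweets.json` file next to the app and an `id` field on each scraped tweet (both assumptions, not the project's current layout):

```python
import json
from pathlib import Path

STORE = Path("tweets.json")  # assumed replacement for the append-only tweets.txt

def load_tweets():
    """Return the stored tweets as {tweet_id: tweet_dict}."""
    if STORE.exists():
        return json.loads(STORE.read_text())
    return {}

def add_new_tweets(scraped):
    """Merge freshly scraped tweets into the store, keyed by tweet ID,
    so a tweet seen in an earlier run is never stored twice."""
    tweets = load_tweets()
    added = 0
    for tweet in scraped:
        tid = str(tweet["id"])  # assumes the scraper exposes the tweet ID
        if tid not in tweets:
            tweets[tid] = tweet
            added += 1
    STORE.write_text(json.dumps(tweets))
    return added
```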
It would be neat to see how many hits the site gets.
I'd recommend either Google Analytics or Umami (the latter is more GDPR-compliant and can be self-hosted on Heroku for free).
It might also be useful for aiding development, implementing uptime-monitoring notifications, and prioritizing future feature requests.
Add a blacklist for Twitter accounts spreading misinformation.
Show the location of places like subway stations where people can hide in case of a conflict.
The current scrape.py file does not work, as the Nominatim API times out.
Add some SEO tags to make the map easier to find online.
So I don't see an error, and it certainly didn't trigger the CI:
https://github.com/kinshukdua/LiveActionMap/runs/5413715608?check_suite_focus=true
These were triggered by hand, sadly.
Originally posted by @DomiiBunn in #52 (comment)
I think it would be a pretty good idea to implement a UI for this instead of just generating a PNG.
If needed, I could spare some time in Vue; I have a free weekend right now.