frwl's Introduction

FRWL: From Russia with love

Announcement

I can now confirm that this article aligns with what my contact has told me:

https://www.zdnet.com/article/putin-signs-runet-law-to-cut-russias-internet-off-from-rest-of-world/

I have been in contact with an unnamed source who has let me know that we can expect the shutdown to happen between now and late fall. There is more that I am not at liberty to say at this time.

link to inception Reddit thread

There is a survey available for those participating: Google Form

There is also a place to submit any IPFS hashes of data you've collected: Google Form

If you all would like a place to chat I've set up an orbit channel (IPFS based chat): Orbit Channel (Just join #frwl by clicking the channel menu in the top left. Seems hot-linking doesn't work.)

Server IPs can now be claimed on peerpad by putting a # in front of them. PeerPad

Goals

  • Figure out when the shutdown happens, as well as when everything comes back up. Currently all we know is "before April 1st 2019", and that's not good enough.
  • Be the first to identify the new "great firewall" infrastructure.
  • Keep it decentralized; they can't hack everyone if they get angry.
  • Find news and articles to corroborate our findings.
  • Keep it running up to a week after Russia comes back online.
  • Run some pretty data analysis on it later.

How it do?

We will be tracerouting the most nuclear servers I could think of: NTP servers. You can find them on Shodan, or use this list I've gathered: servers.txt.

Currently a shell script. Improvements welcome as pull requests.
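The core of the approach can be sketched roughly like this. This is an illustrative sketch only, not the real script (the actual logic lives in ping_russia.sh); the log filename, timestamp format, and traceroute flags here are my own choices:

```shell
#!/bin/sh
# Sketch of the collection loop: traceroute each server in the list
# and append the timestamped output to a per-server log file.
# Stand-in one-entry list; the real servers.txt holds Russian NTP IPs.
printf '127.0.0.1\n' > servers.txt

while read -r server; do
    {
        echo "=== $server $(date -u +%Y-%m-%dT%H:%M:%SZ) ==="
        traceroute -w 1 -q 1 "$server" || echo "traceroute failed for $server"
    } >> "trace.$server.log"
done < servers.txt
```

The `|| echo` keeps the loop going when a single traceroute fails, which matters once flaky or filtered servers show up in the list.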

Data will be hosted on IPFS. The shell script packages the data into .tar.xz archives as 50MB uncompressed chunks (about 2.3MB max compressed). The data is just the output of traceroute. When it's all done, IPFS hashes of your data can be submitted as pull requests appending to the hashes.txt file. Don't forget to add your name to the bottom of this readme if you contribute!

The script names its logs in a particular way: each file has a unique ID within its set, and each set has a unique ID as well. The logs end in either .new or .old, which makes diff tools a little easier to use.

Final logs should be compressed in the same manner, named in the style final.servername.yourtimezone.tar.xz, with max compression in the hopes of saving even more space. You can join or stop at any time, but please leave an IPFS hash as an issue or a pull request; I'll do my best to pin it as soon as I can. You can use this command to do the final compression:

xz -9evv --lzma2=dict=128MiB,lc=4,lp=0,pb=2,mode=normal,nice=273,mf=bt4,depth=1024
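Those are just the xz flags; a full end-to-end invocation might look like the following sketch. The `logs/` directory, the file inside it, and the `ntp1.example.ru` / `UTC` parts of the output name are placeholders standing in for your real log directory, server name, and timezone:

```shell
# Self-contained demo: build a tiny logs/ dir, then tar it and pipe
# through xz with the max-compression settings from the README.
mkdir -p logs
echo "traceroute output here" > logs/example.log

tar -cf - logs/ \
  | xz -9evv --lzma2=dict=128MiB,lc=4,lp=0,pb=2,mode=normal,nice=273,mf=bt4,depth=1024 \
  > final.ntp1.example.ru.UTC.tar.xz
```

Note the big dictionary: compressing with `dict=128MiB` and `mf=bt4` needs on the order of 1.5GB of RAM, so don't run it on a tiny VPS.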

Read the comments and code before proceeding.

Current Statistics

It's about 14 compressed files per day, or 31.5MB per day, with a projected total of about 2GB of data per server over the entire two-month endeavor.
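A quick sanity check of those figures, assuming ~2.25MB per compressed chunk (which is just 31.5MB divided by 14 files):

```shell
# Back-of-envelope check of the projection, in integer KB so it stays
# inside POSIX shell arithmetic.
per_file_kb=2250     # ~2.25 MB max per compressed chunk (31.5 MB / 14)
files_per_day=14
days=60              # roughly two months
echo "$(( per_file_kb * files_per_day / 1024 )) MB per day"          # ~30 MB/day
echo "$(( per_file_kb * files_per_day * days / 1024 )) MB total"     # ~1845 MB
```

About 1.8GB, which rounds up to the 2GB-per-server estimate above.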

Guidelines

Your traceroute logs should have plenty of data, but if a hop shows a bunch of * * * then you're behind some sort of nasty filtering firewall. Pop a hole in it to get clean data: we want hostnames, not just latency. It's probably a good idea to use a VPN for this; pick one really close to you to cut down on the hops. I highly recommend NordVPN.

Watch for updates to the script; they may be important for data processing. You may have to work them into your environment somehow.

If you are editing the code, tabs are 4 spaces. Don't make me write a CONTRIBUTING.md.

Extra stuff

The current shodan query for Russian NTP servers: ntp country:"RU" port:"123"

The deduplication script lets you dump any additional IPs at the bottom of the list and then remove any duplicates.
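I haven't inspected deduplicate.sh, but the same job can be done with a one-liner that keeps the first occurrence of each line and preserves order (unlike a plain `sort -u`, which would reorder the list):

```shell
# Demo input with a duplicate entry, then order-preserving dedup.
printf '1.2.3.4\n5.6.7.8\n1.2.3.4\n' > servers.txt

# awk prints a line only the first time it is seen.
awk '!seen[$0]++' servers.txt > servers.dedup.txt
cat servers.dedup.txt    # 1.2.3.4 and 5.6.7.8, each once
```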

Docker

Dockerfile

The Dockerfile has been updated to run the same script; pass in the same argument for the server count.

Docker Run

A script that creates multiple containers and volumes, launching each container against different IPs, will let us test against many servers easily.

docker run -d --name frwl -v "localvolume":/from_russia_with_love_comp -e SERVER_COUNT="n" logoilab/frwl
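Such a launcher could be as simple as the loop below. This is a dry-run sketch that only prints the commands (drop the leading `echo` to actually launch); the container and volume names, the count of three, and the SERVER_COUNT value are all illustrative:

```shell
# Print one docker run command per container, each with its own name
# and volume. Remove the "echo" to launch for real.
for i in 1 2 3; do
    echo docker run -d \
        --name "frwl-$i" \
        -v "frwl-vol-$i":/from_russia_with_love_comp \
        -e SERVER_COUNT="10" \
        logoilab/frwl
done
```

Separate volumes keep each container's compressed chunks apart, so they can be hashed and pinned to IPFS independently.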

Contributors

We <3 you!

  • /u/BigT905 and /u/orangejuice3 for the Shodan results! Massive contribution thank you!

  • /u/meostro: Final compression command.

  • Colseph: Awesome script mods

  • Danuke: for Dockerfile and image creation.

  • gidoBOSSftw5731: FreeBSD support.


frwl's Issues

File organization

I have over 340k tarballs and I've only been running for a day. I'm willing to set up file trees (I was thinking IP > year > month > day > hour) but I need the thumbs-up first.
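The proposed IP > year > month > day > hour tree could be created per run with `mkdir -p`; this sketch uses a placeholder IP and just echoes the path it made:

```shell
# Build the per-server hourly directory for the current UTC hour;
# "198.51.100.7" is a placeholder server IP.
ip=198.51.100.7
dest="$ip/$(date -u +%Y/%m/%d/%H)"
mkdir -p "$dest"
echo "created $dest"
# Finished tarballs for this server would then be moved into "$dest".
```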

servers.txt server "Claiming"

Saw this mentioned on Reddit by /u/turn-down-for-what:
what if we had the script iterate through the servers?
The structure would look something like:

.
├── from_russia_with_love_comp
│   ├── ip1
│   │   └── 0.2342342.ip1.tar.xz
│   ├── ip2
│   │   └── 0.2342342.ip2.tar.xz
│   └── ip3
│       └── 0.2342342.ip3.tar.xz
├── frwl.2019-02-12.log
├── hashes.txt
├── LICENSE
├── ping_russia.sh
├── README.md
├── servers.txt
└── working_dir
    ├── ip1
    │   ├── 0.1231243.ip1.new
    │   └── 0.1235345.ip1.old
    ├── ip2
    │   ├── 0.1231243.ip2.new
    │   └── 0.1235345.ip2.old
    └── ip3
        ├── 0.1231243.ip3.new
        └── 0.1235345.ip3.old

And for servers.txt: it'll only use lines that don't have a '#' in them, so you can add comments (i.e. the pools and an explanation; the # can be anywhere in the line).
I get that it wouldn't be as many traces per minute per server, but it might give a better overall image?
And if we had enough people running it, I feel like it'd have pretty good coverage.
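The comment filter described above could be done with a single `grep -v`, matching the "skip any line containing a '#' anywhere" behavior (sketch with a demo list, not the script's actual code):

```shell
# Demo list: a full-line comment and a trailing comment both get skipped.
printf '# pool servers below\n1.2.3.4\n5.6.7.8 # flaky, skip\n9.9.9.9\n' > servers.txt

# Keep only lines with no '#' anywhere in them.
grep -v '#' servers.txt    # prints 1.2.3.4 and 9.9.9.9
```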

pros:

  • possibly better coverage
  • keeps the data format (you could compress each server folder separately for upload to IPFS)
    • users that already have data would just need to move it into a folder named after the server
  • possible to update the server list without stopping the script

cons:

  • you'd be using 4000+ times more space before files are tarred
  • if one server goes down or has a bad IP, you'll be waiting for a timeout every iteration unless you remove it from the list
  • not as many servers per minute, so we might miss the exact moment Russia flips the switch...

I've got it implemented and it seems to be working fine. I'll get it up on my fork so you can see if you like it; if so, I can submit a pull request.

Might be worth having only some people run it this way? I don't know.

Errors on line 64 and 95

Currently ping_russia.sh produces the following non-critical errors (at least on Arch Linux):

line 64: warning: command substitution: ignored null byte in input
line 95: [: too many arguments

I think line 64 is coming from a null byte in servers.txt.
I have no clue about line 95.
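Without seeing those exact lines this is only a guess, but both errors have very common causes, demonstrated below: `[: too many arguments` usually means an unquoted variable expanding to multiple words inside test brackets, and the null-byte warning suggests servers.txt contains NUL bytes that can be stripped with `tr`:

```shell
# Likely cause of "[: too many arguments": an unquoted multi-word
# variable inside test brackets.
var="a b"
# [ $var = "a b" ]          # breaks: expands to [ a b = "a b" ]
[ "$var" = "a b" ] && echo quoted-test-ok

# Likely fix for the null-byte warning: strip NUL bytes from the list
# before the script reads it. The \0 here injects a NUL for the demo.
printf 'good\0list\n' > servers.txt
tr -d '\0' < servers.txt > servers.clean.txt
cat servers.clean.txt       # prints "goodlist"
```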

do the graphing

We said in our goals that we wanted graphs, but what do we want from said graphs?

deduplicate.sh doesn't work

@ConnorMcF The script didn't work the times I tried to use it. It created servers.2.txt, having removed one blank line at the beginning of the file, but the server list remained the same in both servers.txt and servers.2.txt.

Local Trunk is Down

Guys bad news. My local area is suffering an outage right now. Don't know how long we'll be out. I'm hesitant to run this on cell data. I'll update once I'm back online.
