
turbulence's Introduction

turbulence

We'll get there, but it's gonna be a bumpy ride.



Multiplexed, optionally reliable, async, transport agnostic, reactor agnostic networking library for games.

This library does not actually perform any networking itself or interact with platform networking APIs in any way, it is instead a way to take some kind of unreliable and unordered transport layer that you provide and turn it into a set of independent networking channels, each of which can optionally be made reliable and ordered.

The best way right now to understand what this library is useful for is probably to look at the MessageChannels test. This is the highest level, simplest API provided: it allows you to define N message types serializable with serde, define each individual channel's networking settings, and then gives you a set of handles for pushing packets into and taking packets out of this MessageChannels interface. The user is expected to take outgoing packets and send them out over UDP (or similar), and also to read incoming packets from UDP (or similar) and pass them in. The only reliability requirement is that if a packet is received from a remote, it must be intact and uncorrupted; other than this, the underlying transport does not need to provide any reliability or ordering guarantees. No corruption check is performed because many transport layers already provide one for free, so it would often not be useful for turbulence to do that itself. Since there is no requirement for reliability, simply dropping incoming packets that do not pass a consistency check is appropriate.
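For a concrete picture of that outer loop, here is a minimal sketch of the packet pump, assuming tokio and UDP. The outgoing/incoming mpsc channels below are stand-ins for whatever packet handles the MessageChannels interface actually gives you; this is not turbulence's real API.

// Sketch only: the mpsc channels stand in for the packet handles that the
// MessageChannels interface provides; they are not turbulence's actual types.
use std::net::SocketAddr;

use tokio::net::UdpSocket;
use tokio::sync::mpsc;

async fn pump_packets(
    socket: UdpSocket,
    peer: SocketAddr,
    mut outgoing: mpsc::Receiver<Vec<u8>>, // packets produced by the channel layer
    incoming: mpsc::Sender<Vec<u8>>,       // packets to feed back into the channel layer
) -> std::io::Result<()> {
    let mut buf = [0u8; 1500];
    loop {
        tokio::select! {
            maybe_packet = outgoing.recv() => {
                match maybe_packet {
                    // Ship outgoing packets over UDP as-is.
                    Some(packet) => { socket.send_to(&packet, peer).await?; }
                    // The channel layer dropped its handle: stop pumping.
                    None => return Ok(()),
                }
            }
            result = socket.recv_from(&mut buf) => {
                let (len, from) = result?;
                // Hand each received datagram back to the channel layer. Because the
                // transport is allowed to be lossy, ignoring a datagram (e.g. one from
                // an unknown peer) is fine.
                if from == peer {
                    let _ = incoming.send(buf[..len].to_vec()).await;
                }
            }
        }
    }
}

In a real program this task would run alongside whatever tasks drive the channels themselves, and the actual packet handles would come from turbulence rather than from mpsc.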

This library is structured in a way that provides a lot of flexibility but does not do very much to help you actually get a network connection set up between a game server and client. Setting up a UDP game server is a complex task, and this library is designed to help with one piece of this puzzle.


What this library actually does

turbulence currently contains two main protocols and builds some conveniences on top of them:

  1. It has an unreliable, unordered messaging protocol that takes in messages that must be less than the size of a packet and coalesces them so that multiple messages are sent per packet. This is by far the simpler of the two protocols, and is appropriate for per-tick updates for things like position data, where resends of old data are not useful.

  2. It has a reliable, ordered transport with flow control that is similar to TCP, but much simpler and without automatic congestion control. Instead of congestion control, the user specifies the target packet send rate as part of the protocol settings.

turbulence then provides on top of these:

  1. Reliable and unreliable channels of bincode serialized types.

  2. A reliable channel of bincode serialized types that are automatically coalesced and compressed.

And then finally this library also provides an API for multiplexing multiple instances of these channels across a single stream of packets and some convenient ways of constructing the channels and accessing them by message type. This is what the MessageChannels interface provides.
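As a rough illustration of the shape of this, here is a sketch of two serde message types and per-channel settings. The ChannelMode and ChannelConfig types are made up for this example and are not turbulence's actual settings structs; the MessageChannels test shows the real builder API.

// Illustrative sketch only: ChannelMode and ChannelConfig are defined here for
// the example and are NOT turbulence's real settings types.
use serde::{Deserialize, Serialize};

// Per-tick state that is cheap to resend, so it would go over an unreliable channel.
#[derive(Serialize, Deserialize, Debug)]
struct PositionUpdate {
    entity: u32,
    x: f32,
    y: f32,
}

// Data that must arrive exactly once and in order, so it would go over a reliable channel.
#[derive(Serialize, Deserialize, Debug)]
struct ChatLine {
    from: String,
    text: String,
}

// Hypothetical per-channel settings mirroring the concepts above: a channel
// index plus a mode, where reliable channels take a target packet send rate
// instead of doing congestion control.
enum ChannelMode {
    Unreliable,
    Reliable { target_packets_per_second: u32 },
}

#[allow(dead_code)]
struct ChannelConfig {
    channel: u8,
    mode: ChannelMode,
}

fn main() {
    // One unreliable channel for position data, one rate-limited reliable channel for chat.
    let _configs = [
        ChannelConfig { channel: 0, mode: ChannelMode::Unreliable },
        ChannelConfig { channel: 1, mode: ChannelMode::Reliable { target_packets_per_second: 30 } },
    ];

    // Messages on both kinds of channels are bincode serialized.
    let tick = bincode::serialize(&PositionUpdate { entity: 1, x: 0.0, y: 0.0 }).unwrap();
    println!("per-tick position update: {} bytes", tick.len());

    let bytes = bincode::serialize(&ChatLine { from: "player1".into(), text: "hello".into() }).unwrap();
    let decoded: ChatLine = bincode::deserialize(&bytes).unwrap();
    println!("{} bytes on the wire, decoded: {:?}", bytes.len(), decoded);
}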

Questions you might ask

Why would you ever need something like this?

You would need this library only if most or all of the following are true:

  1. You have a real time, networked game where TCP or TCP-like protocols are inappropriate, and something unreliable like UDP must be used for latency reasons.

  2. You have a game that needs to send both fast unreliable data like position and also stream reliable game related data such as terrain data or chat or complex entity data that is bandwidth intensive.

  3. You have several independent streams of reliable data and they need to not block each other or choke off fast unreliable data.

  4. It is impractical or undesirable (or impossible) to use many different OS level networking sockets, or to use existing networking libraries that hook deeply into the OS or even just assume the existence of UDP sockets.

Why do you need this library? Doesn't XYZ protocol already do this (where XYZ is plain TCP, plain UDP, SCTP, QUIC, etc.)?

In a way, this library is equivalent to having multiple UDP connections and bandwidth limited TCP connections at one time. If you can already do exactly that and that's acceptable for you, then you might consider just doing that instead of using this library!

This library is also a bit similar to something like QUIC in that it gives you multiple independent channels of data which do not block each other. If QUIC eventually supports truly unreliable, unordered messages (AFAIK this is currently only a proposed extension), AND it has an implementation that you can use, then certainly using QUIC would be a viable option.

So this library contains a re-implementation of something like TCP; isn't trying to implement something like that fiendishly complex and generally a bad idea?

Probably, but since it is designed for low-ish static bandwidth limits and doesn't concern itself with congestion control, a lot of the complexity is cut out. This is still the most complex part of the library, but it is well tested and definitely at least works in the environments I have run it in so far. It's not very complicated; it could probably be described as "the simplest TCP-like thing that you could reasonably write and use".

You should not be using the reliable streams in this library in the same way that you use TCP. A good example of what probably shouldn't go over this library is streaming asset data: you should have a separate channel for data that should be streamed as fast as possible and will always be bandwidth limited rather than gameplay limited.

The reliable streams here are for things that are normally gameplay limited but might be spiky, and where you want to limit the bandwidth so those spikes don't slow down more important data or slow down other players.
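To make that concrete (illustrative numbers, not defaults from the library): if a reliable channel is configured with a target rate of 30 packets per second and packets are roughly 500 bytes, that channel can never consume more than about 30 × 500 = 15,000 bytes per second (~15 KB/s). A burst of entity or terrain data larger than that is simply queued and drained at the configured rate instead of crowding out the unreliable per-tick traffic.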

Why is this library so generic? It's TOO generic, everything is based on traits like PacketPool and Runtime and it's hard to use. Why can't you just use tokio / async-std?

The PacketPool trait exists not only to allow for custom packet types but also for things like the multiplexer, so it serves double duty. Runtime exists because I use this library in a web browser connecting to a remote server using webrtc-unreliable, and I have to implement it manually on top of web APIs and that is currently not trivial to do.

Current status / Future plans

I've used this library in a real project over the real internet, and it definitely works. I've also tested it in-game using link conditioners to simulate various levels of packet loss and duplication and as far as I can tell it works as advertised.

The library is usable currently, but the API should in no way be considered stable; it may still see a lot of churn.

In the near future it might be useful to have other channel types that provide in-between guarantees, such as reliability without ordering or vice versa.

Eventually, I'd like the reliable channels to have some sort of congestion avoidance, but this would probably need to be cooperative between reliable channels in some way.

The library desperately needs better examples, especially a fully worked example using e.g. tokio and UDP, but setting up such an example is a large task by itself.

License

turbulence is licensed under any of:

at your option.

turbulence's People

Contributors

billyb2, hexywitch, kyren


turbulence's Issues

New release in the works?

Hi there, I have been putting together a networking library for a game that I'm working on, and I just discovered your crate. It looks like exactly what I need. Amazing!

I am trying it out by putting together some channels based on the examples in the tests, and I noticed that the latest crates.io release is behind HEAD in this repo, and some of the API has changed. Do you have any plans to make a new release to incorporate the latest API changes?

Thanks for sharing this work with us 🙌

Packets not marked as ready when reconnecting

Hey there,

Wondering if you have any insight; I've been trying to walk through it myself but so far can't figure out how things should work. Basically, if I have a server running and connect to it from a client, everything works well. Then if I close down the client and reconnect shortly after, I can see packets being exchanged, but for some reason they are never marked as ready in windows.rs RecvWindow.

When this happens they are always padded in the front with zeros. Would you expect reconnecting to have weird issues like this? Unreliable messages do make it through, but reliable messages do not.

I added some debug logging for myself while trying to figure it out; not sure if it will be helpful for you, but here are some logs of values in the RecvWindow recv method while working vs. not working after reconnecting.

Working on initial connection:

received packet: [3, 0, 12, 0, 0, 0, 1, 0, 0]
data len: 3 packet len 3
start_pos: 12
receiving in window with start pos 12: [1, 0, 0]
self.recv_pos: 12, ready: 0
recv_start_pos: 12, recv_end_pos: 1036, end_pos: 15
copy_start_pos: 12 end_pos 15
data_start: 0, buf_start: 0 buf_end: 3
current buffer size: 0
current buffer: []
new buffer size: 3
new buffer: [1, 0, 0]
self.recv_pos: 12, start_pos: 12

Not Working when reconnecting:

received packet: [3, 0, 12, 0, 0, 0, 1, 0, 2]
data len: 3 packet len 3
start_pos: 12
receiving in window with start pos 12: [1, 0, 2]
self.recv_pos: 0, ready: 0
recv_start_pos: 0, recv_end_pos: 1024, end_pos: 15
copy_start_pos: 12 end_pos 15
data_start: 0, buf_start: 12 buf_end: 15
current buffer size: 0
current buffer: []
new buffer size: 15
new buffer: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 2]
self.recv_pos: 0, start_pos: 12

Looks like this is because of the ordering guarantees and some missed packets while disconnected.

Suitability for very low-bandwidth links?

Hi,

This isn't an issue, but I couldn't find contact information for you anywhere.

First, thanks for writing this. It looks interesting!

I am interested in very low-bandwidth, long-distance radio links. These may be via amateur radio protocols, via LoRa (my own lorapipe, https://github.com/jgoerzen/lorapipe, works with that), or via satellite. Bandwidth of these links ranges, depending on the technology in play, from about 0.3 kbps to 100 kbps. Many of them are actually packetized already, and may provide guarantees roughly similar to UDP (error-checked but not reliable, though most guarantee ordering if not delivery). So you can perhaps see why I'm interested in this protocol. Those that are packetized would tend to use packets ranging from about 32 bytes up to maybe several hundred bytes, with an Ethernet-like MTU of 1500 being a rather high-end outlier.

My question is: what is the overhead of your reliable protocol when working with a pure binary stream? Also, do you think it would be suitable for situations in which it may actually take a substantial fraction of a second for a packet to be transmitted (or maybe even several seconds for the very slowest)? When dealing with a 300 bps link, every added byte definitely counts!
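(For scale: at 300 bps a single 32-byte frame already takes 32 × 8 / 300 ≈ 0.85 seconds of airtime, and a 1500-byte packet would take 40 seconds, so even a few bytes of per-packet header overhead are significant.)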

Thanks!

Questions about API usage

So I've been trying to leverage turbulence for a game I'm writing (a networked fighting game), and I've been trying to build a lobby system to enable a better networked play experience. So far, I've been running into inconveniences while using the API, but I'm not sure if that's me not approaching the problem the correct way or not understanding the API. Here is my actual code that I've been working on, for context.

My two main pain points are as follows, hopefully you have ideas on how I can address them:

  1. MessageChannels doesn't differentiate between connections (i.e. different people connecting to the same client). I've been approaching this by spawning a new MessageChannels for each connection, which seems to work just fine, but makes it a little painful when trying to read from the socket, because I have to check that the datagram I'm receiving is from the right address. It also leads to me having to set up a new task to create connections whenever a new address sends data.
  2. The API doesn't mesh well with the async workflow I've been trying to use. Here is the code for my message loop:
async fn messages_loop_t(channels: ChannelList, mut messages: MessageChannels) {
    let messages = &mut messages;
    loop {
        // forwards incoming requests out
        if let Ok(value) = channels.join_request.incoming.try_recv() {
            messages.async_send(value).await.unwrap();
            messages.flush::<JoinRequest>();
        }
        if let Ok(value) = channels.join_response.incoming.try_recv() {
            messages.async_send(value).await.unwrap();
            messages.flush::<JoinResponse>();
        }

        // checks received data, and forwards it
        if let Some(value) = messages.recv::<JoinRequest>() {
            channels.join_request.outgoing.send(value).await.unwrap()
        }
        if let Some(value) = messages.recv::<JoinResponse>() {
            channels.join_response.outgoing.send(value).await.unwrap()
        }

        yield_now().await
    }
}

As you can see, I call the sync non-blocking methods of the messages object to check if there's any data to receive, and call yield_now().await at the end of the loop. It seems like a better approach would be selecting on the message type OR spawning a new task for each message, but neither of these is viable given that recv/send require an &mut MessageChannels. Additionally, each recv future is tied to the lifetime of the borrow, which means I can't return the future to be held somewhere else. This code forwards each received message to a smol::channel, which lets me clone consumers, allowing me to engage in behavior like the following:

    async fn request_join(&mut self, lobby: Self::LobbyId) -> Result<Self::LobbyId, JoinError> {
        let handle = self.handle.clone();
        let (incoming, outgoing) = {
            let mut inner = self.handle.write().await;
            let inner = inner.deref_mut();
            let connection = inner.connections.get_or_create_connection(
                lobby,
                inner.socket.clone(),
                inner.runtime.clone(),
            );
            (
                connection.channels.join_response.incoming.clone(),
                connection.send_request(JoinRequest { addr: lobby }),
            )
        };

        outgoing.await.unwrap();

        let response = incoming.recv().await.unwrap();

        match response {
            JoinResponse::Denied => Err(JoinError::Denied),
            JoinResponse::Accepted { self_addr } => {
                let mut lock = handle.write().await;
                lock.mode = Mode::Client(lobby);
                lock.self_addr = self_addr;
                Ok(lobby)
            }
        }
    }

Are there any solutions you would suggest to my issues/should I attempt to PR a design to address them? Thank you for making this really cool library either way!
