Rouille, a Rust web micro-framework

Rouille is a micro-web-framework library. It creates a listening socket, parses incoming HTTP requests from clients, and then hands control over to you to process each request.

Rouille was designed to be intuitive to use if you know Rust. Contrary to express-like frameworks, it doesn't employ middleware. Instead, everything is handled in a linear way.

Concepts closely related to websites (like cookies, CGI, form input, etc.) are directly supported by rouille. More general concepts (like database handling or templating) are not directly handled, as they are considered orthogonal to the micro web framework. However rouille's design makes it easy to use in conjunction with any third-party library without the need for any glue code.

Getting started

If you have general knowledge about how HTTP works, the documentation and the well-documented examples are good resources to get you started.
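As a first taste, a minimal server might look like this. This is a sketch built on the documented `start_server` entry point; the address and routes are just examples, and note that `start_server` blocks forever:

```rust
extern crate rouille;

use rouille::Response;

fn main() {
    // start_server never returns; each request runs in its own thread.
    rouille::start_server("localhost:8000", move |request| {
        if request.url() == "/" {
            Response::text("hello world")
        } else {
            Response::empty_404()
        }
    });
}
```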

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be dual licensed as above, without any additional terms or conditions.

FAQ

What about performance?

Async I/O, green threads, coroutines, etc. in Rust are still very immature.

The rouille library just ignores this optimization and focuses on providing an easy-to-use synchronous API instead, where each request is handled in its own dedicated thread.

Even if rouille itself was asynchronous, you would need asynchronous database clients and asynchronous file loading in order to take advantage of it. There are currently no such libraries in the Rust ecosystem.

Once async I/O has been figured out, rouille will be (hopefully transparently) updated to take it into account.

But is it fast?

On the author's old Linux machine, some basic benchmarking with wrk -t 4 -c 4 shows the following results:

  • The hello-world example of rouille yields ~22k requests/sec.
  • A hello world in nodejs (with http.createServer) yields ~14k requests/sec.
  • The hello-world example of tokio-minihttp (which is supposedly the fastest HTTP server that currently exists) yields ~77k requests/sec.
  • The hello example of hyper (which uses async I/O with mio as well) yields ~53k requests/sec.
  • A hello world in Go yields ~51k requests/sec.
  • The default installation of nginx yields ~39k requests/sec.

While not the fastest, rouille has reasonable performance. Amongst all these examples, rouille is the only one to use synchronous I/O.

Are there plugins for features such as database connection, templating, etc.?

It should be trivial to integrate a database or templates into your web server written with rouille. Moreover, plugins need maintenance and tend to create dependency hell. In the author's opinion it is generally better not to use plugins.

But I'm used to express-like frameworks!

Instead of doing this (pseudo-code):

server.add_middleware(function() {
    // middleware 1
});

server.add_middleware(function() {
    // middleware 2
});

server.add_middleware(function() {
    // middleware 3
});

In rouille you just handle each request entirely manually:

// initialize everything here

rouille::start_server(..., move |request| {
    // middleware 1

    // middleware 2

    // middleware 3
});

rouille's People

Contributors

aidanhs, bradfier, cardoe, coder543, dbr, diggsey, fauxfaux, goto-bus-stop, hayd, hookedbehemoth, jaemk, kevinmehall, kristoff3r, lambda-fairy, meesfrensel, minibikini, mitranim, nobodyxu, oherrala, richardhozak, rushsteve1, scottjmaddox, serprex, sindreij, svenvdvoort, taiki-e, tomaka, vishalsodani, vmiklos, yrashk

rouille's Issues

How to use a collection of websockets across threads?

Hi, I really like your library with its simple usage and would like to use it
for experimenting and learning about Rust and websockets by implementing a very
simple chat server.

For that I have a vector of websockets being wrapped in Arc and Mutexes in order
to be able to use them across threads. As shown in your websocket example I call
websocket::start in order to use websockets. Then I spawn a new thread
in which I receive the new websocket from the Receiver<Websocket>
and push it to a vector wrapped in an Arc such that I can store it in the vector
and handle messages for it in the thread. It is also wrapped in a Mutex to be
usable from other websocket threads.

When a new message is received from the websocket (from a HTML page) I want to
send it to all opened websockets using the vector containing the websockets.

However, this won't work: let message = match ws_arc_.lock().unwrap().next()
locks a socket, making it unavailable when another thread loops over the
websockets, so I cannot send data to it.

So, what would be the correct way to handle a collection of websockets across
different threads to be able to send data to all websockets?

fn main() {
    let clients: Arc<Mutex<Vec<Arc<Mutex<websocket::Websocket>>>>> =
        Arc::new(Mutex::new(Vec::new()));

    rouille::start_server("0.0.0.0:8080", move |request| {
        router!(request,
            // Some more routes ...

            // create websocket
            (GET) (/ws) => {
                let (response, websocket) = try_or_400!(websocket::start::<String>(&request, None));
                let clients_ = clients.clone();
                let ip = request.remote_addr().clone();
                thread::spawn(move || {
                    let ws = websocket.recv().unwrap();
                    let ws_arc = Arc::new(Mutex::new(ws));
                    let ws_arc_ = ws_arc.clone();
                    clients_.lock().unwrap().push(ws_arc);
                    ws_arc_.lock().unwrap().send_text(format!("Connected {:?}", &ip).as_str());

                    loop {
                        let message = match ws_arc_.lock().unwrap().next() {
                            Some(m) => m,
                            None => break,
                        };

                        match message {
                            websocket::Message::Text(txt) => {
                                for other in clients_.lock().unwrap().iter() {
                                    other.lock().unwrap().send_text(&txt).unwrap();
                                }
                            },
                            websocket::Message::Binary(_) => {
                                println!("received binary from a websocket");
                            },
                        }
                    }
                });

                response
            },

            _ => Response::html("404 error.")
        )
    });
}
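For what it's worth, one pattern that sidesteps the double-lock is to give each connection the sending half of an mpsc channel and let the thread that owns the socket forward messages. Broadcasting then only locks the list of senders, never a socket that may be blocked in `next()`. A std-only sketch of the idea (the actual `send_text` call on the websocket is left out):

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::{Arc, Mutex};

// Each client is represented by the sending half of a channel. The thread
// that owns the real websocket reads from the matching Receiver and calls
// send_text, so no websocket is ever locked while blocked on a read.
type Clients = Arc<Mutex<Vec<Sender<String>>>>;

// Register a new connection and hand back the receiving half for the
// thread that owns the websocket.
fn register(clients: &Clients) -> Receiver<String> {
    let (tx, rx) = channel();
    clients.lock().unwrap().push(tx);
    rx
}

fn broadcast(clients: &Clients, msg: &str) {
    // Only the list of senders is locked; dead connections are dropped.
    clients.lock().unwrap().retain(|tx| tx.send(msg.to_string()).is_ok());
}
```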

About RouteError

I feel that this aspect is not designed correctly.
When do you return a Response with a 404 status code and when do you return a RouteError::NoRouteFound? Both represent the same thing.

I think that RouteError should maybe be removed and everything should simply return a Response.

Add a cache system

Add this:

fn handle(request: &Request, cache: &rouille::cache::Cache) -> Response {
    cache.with(|request| {
        Response::text("hello world")
    })
}

The Cache struct contains cache entries.
The with function (whose name may change) analyzes the request and tries to find a corresponding entry in the Cache struct. If one is found, the with function immediately returns the entry. If none is found, the closure is called and the response is stored in the cache.

The function automatically behaves like a "real" HTTP cache by trying to find a Cache-Control header in the response to determine whether it should be cached.
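A drastically simplified, std-only sketch of that lookup-or-generate behaviour, ignoring Cache-Control, expiry, and the real Request/Response types (strings stand in for both):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Entries are keyed by URL; the closure is only invoked on a cache miss.
struct Cache {
    entries: Mutex<HashMap<String, String>>,
}

impl Cache {
    fn new() -> Cache {
        Cache { entries: Mutex::new(HashMap::new()) }
    }

    // Return the cached response for `url`, or generate and store one.
    fn with<F: FnOnce() -> String>(&self, url: &str, generate: F) -> String {
        let mut entries = self.entries.lock().unwrap();
        if let Some(cached) = entries.get(url) {
            return cached.clone();
        }
        let response = generate();
        entries.insert(url.to_string(), response.clone());
        response
    }
}
```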

Solve how to handle the request's body

Right now the request's body is entirely stored in a Vec, and cloned every time we access it.
This is a bad idea as it makes the server very easy to DDoS by sending large bodies.

Instead, the body should be an object that implements Read and that progressively reads the data from the socket.

But this has the consequence that it will not be possible to retrieve the body twice. In an ideal world it would be possible to retrieve the body as many times as you like, but I'm not sure if in practice it's a real problem or a non-issue.

Possible to remove Content-Length header from Responses?

Is there a way to return a Response without a Content-Length header being set?

According to https://tools.ietf.org/html/rfc7230#section-3.3.2, this header shouldn't be set when returning certain status codes:

"A server MUST NOT send a Content-Length header field in any response
   with a status code of 1xx (Informational) or 204 (No Content)."

I've tried using e.g.:

Response::empty_404().with_status_code(204).without_header("Content-Length")

but still get back:

HTTP/1.1 204 No Content
Server: tiny-http (Rust)
Date: Wed, 11 Oct 2017 16:36:29 GMT
Content-Length: 0

Use a custom derive for POST input structs

This is a follow-up to #49

Once it's stable (rust-lang/rust#35900), we can use the custom derive system for POST structs instead of using rustc-serialize.

This has the big advantage that we can adjust the corresponding trait to our needs, and make it work for files.

trait DecodePostField {
    fn from_field(content: &str) -> Result<Self, PostError>;
    fn from_file<R: BufRead>(file: R, mime: &str) -> Result<Self, PostError>;
}

trait DecodePostInput {
    fn decode<I: Iterator<Item = PostInputElement>>(elements: I) -> Result<Self, PostError>;
}

Example in practice:

#[derive(RouillePostDecode)]      // implements DecodePostInput on the struct
struct Input {
    field1: i32,         // each field must implement DecodePostField
    field2: String,
    file: TemporaryFileStorage,
}

let input: Input = try_or_400!(rouille::post::decode(&request));

If you read #49 I explain at the bottom that it isn't possible to do this. But if we use a trait other than the one in rustc-serialize, then it becomes a viable option.

In this example, the from_file method of TemporaryFileStorage would write the file to a temporary directory by streaming from the user's request, and then return a TemporaryFileStorage struct that contains the path to the file. If the TemporaryFileStorage struct gets dropped before being stored in a database (for example because of a panic), its destructor would automatically delete the file.

Rouille could provide some helpers, like:

  • TemporaryFileStorage
  • BufferedFile (stores the content in memory)

There is one unresolved question, however: how do we provide a configuration for TemporaryFileStorage or BufferedFile? While these two cases don't need a configuration, it would be nice if we could write a S3File struct that automatically uploads to S3 or a ProxyFile that transfers the file to another service. But doing so requires some sort of configuration to be passed to from_file.

Handle very large bodies

Modify the various input-parsing functions to return an error if the body is larger than a certain limit.
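For a body exposed as io::Read, the standard trick is Read::take with one extra byte, so "exactly at the limit" can be told apart from "over it". A std-only sketch:

```rust
use std::io::{self, Read};

// Read at most `limit` bytes from `body`; error out if there are more.
fn read_body_with_limit<R: Read>(body: R, limit: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Take one extra byte: if it shows up, the body exceeds the limit.
    body.take(limit + 1).read_to_end(&mut buf)?;
    if buf.len() as u64 > limit {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "body too large"));
    }
    Ok(buf)
}
```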

Question on compile time

Using cargo run, this code takes at least 2.2s to recompile:

extern crate rouille;

fn main() {
    rouille::start_server("localhost:6982", |_| rouille::Response::empty_404());
}

My CPU is a fairly recent 2.9 GHz i7, and I'm using the nightly toolchain. Usually, trivial Rust scripts recompile near-instantly due to dependency caching. Even using procedural macros (e.g. with Maud) doesn't seem to affect recompilation time. Using rouille, my compile time goes way up.

Any suggestions/advice? Is this a rouille-specific problem?

Non-blocking WebSocket polling

Related to #137, would it be possible to have a non-blocking variant of WebSocket::next? The idea would be the same as Receiver<T>::try_recv, it returns the next message if there is one ready, but returns without blocking if there are none. Without this functionality, it seems like it isn't possible to do something like have a thread wait for either a WebSocket or another channel.

I took a brief look at the implementation of WebSocket::next and it looks like it's the underlying Box<ReadWrite>::read implementation that blocks waiting for data from the socket. Given that rouille is built on tiny_http, would it even be possible to implement a non-blocking read from the socket? Also, is it possible that I'm completely off-base in thinking that it's not possible to wait on both a WebSocket and another channel?

Numbers in url

Hello,

It looks like the application won't compile if the routes have hardcoded numbers in them.
For example, this works:

fn main() {
    println!("Listening!");

    rouille::start_server("0.0.0.0:8080", move |request| {

        router!(request, 
            (GET) (/) => {
                Response::text("Hello world!")
            },

            (GET) (/two) => {
                Response::text("Hello two!")
            },

            _ => Response::empty_404()
        )

    });
}

But this doesn't:

fn main() {
    rouille::start_server("0.0.0.0:8080", move |request| {

        router!(request, 
            (GET) (/) => {
                Response::text("Hello world!")
            },

            (GET) (/2) => {
                Response::text("Hello 2!")
            },

            _ => Response::empty_404()
        )

    });
}

It leads to the following error:

error: no rules expected the token `request_url`
  --> src/main.rs:11:9
   |
11 | /         router!(request,
12 | |             (GET) (/) => {
13 | |                 Response::text("Hello world!")
14 | |             },
...  |
20 | |             _ => Response::empty_404()
21 | |         )
   | |_________^
   |
   = note: this error originates in a macro outside of the current crate

error: Could not compile `rustit`.

Caused by:
  process didn't exit successfully: `rustc --crate-name rustit src/main.rs --crate-type bin --emit=
dep-info,link -C debuginfo=2 -C metadata=9a526171b8297007 -C extra-filename=-9a526171b8297007 --out
-dir /home/antoine/src/rust/rustit/target/debug/deps -L dependency=/home/antoine/src/rust/rustit/ta
rget/debug/deps --extern rouille=/home/antoine/src/rust/rustit/target/debug/deps/librouille-0f6c379
9a677a040.rlib -L native=/home/antoine/src/rust/Rustit/target/debug/build/brotli-sys-faf770ae9206b3
55/out -L native=/home/antoine/src/rust/Rustit/target/debug/build/miniz-sys-c9ca5bf082b97141/out` (
exit code: 101)

Is this normal behaviour? Is there any way to get around this?

Support SSL

Both for the listening server and the reverse proxy thing.

Transfer encoding and content length in proxy and CGI

The idea behind CGI and the proxy system is that all the headers returned by the CGI program or by the server are directly passed through without any reinterpretation, hence why the code can be so simple.

In practice, however, rouille will ignore some headers in the response, and there are three problematic situations:

  • If a Transfer-Encoding is used.
  • If a range request is used.
  • If the Trailer header is used.

There's also the problem that the Content-Length header will be ignored. But this is a bug in the current proxy/cgi code and not a real problem.

For the range things, the range system should probably be changed in tiny-http and directly be handled by rouille or by the higher-level library. Range requests are an optional feature of HTTP anyway, so it's not a problem if range requests of the client are mistaken for real requests. This way we could directly pass through any range returned by the program or server.

There are two solutions to the transfer encoding problem:

  • Buffer everything in memory by decoding the transfer encoding.
  • Tweak tiny-http to blindly accept transfer encodings. This is probably preferred.

For Trailer the problem is more complicated because it is assumed that the Response struct contains all the headers.

Performance compared to Iron

There seems to be a noticeable performance difference when comparing Rouille with Iron. The following are numbers I've pulled from my old Macbook Pro:

Rouille:

extern crate rouille;

use rouille::{Response};

fn main() {
    rouille::start_server("localhost:3000", move |request| {
        Response::text("Hello, World")
    });
}

Using wrk -t 2 -c 400 http://localhost:3000 I get:

Running 10s test @ http://localhost:3000
  2 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    32.14ms   12.41ms 103.73ms   60.62%
    Req/Sec     5.71k     1.19k    7.57k    62.50%
  113689 requests in 10.01s, 16.70MB read
  Socket errors: connect 0, read 278, write 0, timeout 0
Requests/sec:  11357.80
Transfer/sec:      1.67MB

Now for Iron:

extern crate iron;

use iron::prelude::*;
use iron::status;

fn main() {
    fn hello_world(_: &mut Request) -> IronResult<Response> {
        Ok(Response::with((status::Ok, "Hello World!")))
    }

    let _server = Iron::new(hello_world).http("localhost:3001").unwrap();
    println!("On 3001");
}

And wrk output:

Running 10s test @ http://localhost:3001
  2 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   778.95us  412.51us  16.91ms   81.08%
    Req/Sec    16.84k     4.80k   28.63k    72.00%
  335056 requests in 10.02s, 36.43MB read
  Socket errors: connect 0, read 358, write 0, timeout 0
Requests/sec:  33454.01
Transfer/sec:      3.64MB

Memory-wise, Rouille starts around 612K and during testing climbs to 31MB, eventually settling around 18MB.

Iron starts at 1.3MB and during testing climbs to 3.5MB and pretty much stays there.

The latency difference between the frameworks is also pretty drastic. I'm not well versed in benchmarks, but the numbers differ more than I would have imagined.

Any idea what could be causing this?

Short-term improvement of POST input

The way POST input is currently handled has several drawbacks when it comes to multipart form data. You can't use the structs system, and therefore you are forced to manually handle the fields using the multipart facilities. If you happen to use a regular form-encoded form and want to switch to multipart, you have to rewrite all your code.

I think the following should be changed:

  • get_post_input and decode_raw_post_input automatically support multipart form data in addition to url-encoded data.
  • If the user transfers a file and get_post_input/decode_raw_post_input are used, the file is ignored. These functions are only meant to be used for fileless input.
  • Add a get_post_input_with_files function, a bit similar to get_multipart_input. This function returns an iterator (like get_multipart_input) except that any non-file data is buffered and provided at once as the last element of the iterator. This non-file data is provided as a struct FieldsInput that has a decode method.

Example: before

let mut field1 = None;
let mut field2 = None;
let mut file_handle = None;

let mut multipart = try_or_400!(rouille::input::multipart::get_multipart_input(request));
while let Some(entry) = multipart.next() {
    match entry.data {
        rouille::input::multipart::MultipartData::Text(txt) if entry.name == "field1" => {
            field1 = Some(try_or_400!(txt.parse()));
        },
        rouille::input::multipart::MultipartData::Text(txt) if entry.name == "field2" => {
            field2 = Some(try_or_400!(txt.parse()));
        },
        rouille::input::multipart::MultipartData::File(f) => {
            file_handle = Some(upload_file(f));
        },
     }
}

let field1 = match field1 { Some(f) => f, None => return Response::empty_400() }; 
let field2 = match field2 { Some(f) => f, None => return Response::empty_400() };
let file_handle = match file_handle { Some(f) => f, None => return Response::empty_400() };

add_file_in_database(field1, field2, file_handle);

Example: after

#[derive(RustcDecodable)]
struct Input {
    field1: i32,
    field2: String,
}

let mut file_handle = None;

let mut multipart = try_or_400!(rouille::input::post::get_post_input_with_files(request));
while let Some(entry) = multipart.next() {
    match entry.data {
        rouille::input::multipart::MultipartData::File(f) => {
            file_handle = Some(upload_file(f));
        },
        rouille::input::multipart::MultipartData::Fields(fields) => {
            let input: Input = try_or_400!(fields.decode());
            let file_handle = match file_handle { Some(f) => f, None => return Response::empty_400() };
            add_file_in_database(input.field1, input.field2, file_handle);
        },
     }
}

While these two examples don't look so different, the important part is that in the second example it's much easier and cleaner to add new fields to Input compared to the first example.

Files still need individual handling.

Technical constraints

Before you ask, there are some technical constraints why we can't do this:

struct Input {
    field1: i32,
    field2: String,
    file: SomeFileHandle,
}

Since files and fields can come in any order in the body sent by the browser, we would sometimes be obligated to cache the content of the file in a buffer (or on the disk). That's what PHP does for example. But it's something that we don't want in a high-performance language such as Rust, as it's too costly and/or too magical.

For example, some people may want to stream the content of the file directly from the client's socket to S3 or to another service. This is only possible if we use an iterator-like system like the one presented above.

Parse array of values from form data

I can't find a solution for parsing an array of values such as:

Form parameters

foo = ["a", "b", "c"]
bar = true

In rouille

let data = try_or_400!(post_input!(request, {
    foo : Vec<String>,
    bar: bool
}));

bar and any "non-vector" types bind correctly, but foo will always be empty.
I also tried renaming foo to foo[] on the client side.
How can I bind such values?
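If post_input! can't bind repeated keys, one fallback is to parse the raw urlencoded body yourself and collect every occurrence of a key. A std-only sketch (percent-decoding omitted for brevity):

```rust
use std::collections::HashMap;

// Collect every occurrence of a key, so "foo=a&foo=b&bar=true" yields
// foo -> ["a", "b"] and bar -> ["true"].
fn parse_multi(body: &str) -> HashMap<String, Vec<String>> {
    let mut map: HashMap<String, Vec<String>> = HashMap::new();
    for pair in body.split('&').filter(|p| !p.is_empty()) {
        let mut it = pair.splitn(2, '=');
        let key = it.next().unwrap_or("").to_string();
        let value = it.next().unwrap_or("").to_string();
        map.entry(key).or_insert_with(Vec::new).push(value);
    }
    map
}
```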

Unit Tests

It would be nice to be able to start a webserver in a unit test with a unique port, connect to it with a web client to make sure a restful endpoint or socket is implemented properly, and disconnect.
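For the unique-port part, the operating system can pick one for you: bind to port 0, read back the assigned address, then pass that address to the server under test. A std-only sketch (note the small race window between dropping the probe listener and the server re-binding):

```rust
use std::net::{SocketAddr, TcpListener};

// Ask the OS for any free port by binding to port 0, then read it back.
// In a test, the returned address would be passed to the server under test.
fn free_local_addr() -> SocketAddr {
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind failed");
    listener.local_addr().expect("no local addr")
    // `listener` is dropped here, freeing the port for the server to claim.
}
```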

Long-term plan for HTTP2

In order to support HTTP 2 push, there must be a way to indicate a list of resources that the network layer should push alongside with the current response.

I think a field should be added in the Response struct for this. You could either directly pass other Responses (with their URL), or ask rouille to invoke the handler to generate the other Responses.

Use of gh-pages for book hosting

I really like the usage of gh-pages for hosting the book.

Would you write up (blog post, etc) a how-to for this so that other projects may follow along?

It'd be great to see more projects do this, but unfortunately most of the gh-pages generation I have seen (my own included) just pushes the orphaned results of cargo doc and calls it a day.

Unreachable code warnings with the router! macro

On nightly, the tests (and real code) that use the router! macro currently produce warnings about unreachable code in some cases.

  • This line expands to something like ret = Some(panic!("Oops!")). The compiler complains that the call to Some can never be reached.
  • This route and this route expand to something like ret = Some(... if ... { return something } else { return something_else }). Again the Some can't be reached.

Unfortunately as far as I know there's no way to fix these warnings in the macro itself.
If we wrap the user's code inside a closure, it successfully removes the warnings but then return would refer to that closure, which would be very counterintuitive.
The proper solution would be rust-lang/rust#15701 which would allow one to add #[allow(unreachable_code)] on the Some itself.

post_input! wrong number of type arguments: expected 1, found 2

It's very possible I'm doing something wrong here.

I believe I'm following the docs/example code, but still I get something strange in the macro.

error[E0244]: wrong number of type arguments: expected 1, found 2
   --> src/main.rs:142:33
    |
142 |                       let input = post_input!(&request, {
    |  _________________________________^
143 | |                         descriptor: String
144 | |                     });
    | |______________________^ expected 1 type argument
    |
    = note: this error originates in a macro outside of the current crate

Should this work?

Use serde instead of rustc_serialize?

Or maybe have it available behind a feature flag.

In theory, it doesn't REALLY matter, but in practice, I use reqwest for unit tests, which serializes things to JSON using serde. So far the serde and rustc_serialize representations have always matched, but there's no guarantee that will always be the case.

I will try to work on a PR for this if you want.

error when cargo build

Updating registry `https://github.com/rust-lang/crates.io-index`

error: failed to select a version for phf (required by postgres):
all possible versions conflict with previously selected versions of phf
version 0.7.16 in use by phf v0.7.16
possible versions to select: 0.7.19

$ rustc -Vv
rustc 1.14.0 (e8a012324 2016-12-16)
binary: rustc
commit-hash: e8a0123241f0d397d39cd18fcc4e5e7edde22730
commit-date: 2016-12-16
host: x86_64-apple-darwin
release: 1.14.0
LLVM version: 3.9

request.header(&str) is case sensitive

Hello, I really like your library: in its simplicity it looks easier to use than the alternatives, and not using a middleware system looks like a smart choice.

I have found one thing that I think might be a bug. request.header(&str) is case sensitive, while HTTP headers are supposed to be case insensitive. tiny-http solves this by having a HeaderField type whose comparisons are case insensitive, but rouille casts headers to strings, which makes the comparison case sensitive.
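Header names are ASCII-only, so a case-insensitive lookup needs nothing more than eq_ignore_ascii_case. A std-only sketch of the comparison (the `(String, String)` header representation here is illustrative, not rouille's actual type):

```rust
// Find a header value regardless of the casing the client used.
fn header<'a>(headers: &'a [(String, String)], name: &str) -> Option<&'a str> {
    headers
        .iter()
        .find(|(k, _)| k.eq_ignore_ascii_case(name))
        .map(|(_, v)| v.as_str())
}
```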

Long-term plan for futures

The request's headers will be entirely parsed before the handler is called, but the request's body will be a Stream.
The handler will be able to return a Future<Item = Response> instead of simply a Response.

In practice, this means that as soon as the headers of a request are parsed, the handler is called. The handler then quickly builds a future and quickly returns. Then it's the library's code that will, through an events loop, advance the actual processing of the request.

The user's code will probably look much messier when using futures, but that's a problem specific to Rust that may eventually be solved by adding async/await to the language.

Option to run using thread pool

Since we're talking breaking changes (#165), what are your opinions on adding an option to process requests in a thread pool? The default would still be unlimited thread::spawn calls. The Server struct could also be made more "builder-y" in the process.
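For reference, a fixed pool can be built on std alone: workers drain a shared channel of boxed closures. A sketch of the idea (the names are illustrative, not a proposed API):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

// A fixed pool of workers draining a shared queue, as an alternative to
// spawning one thread per request.
struct ThreadPool {
    sender: mpsc::Sender<Job>,
}

impl ThreadPool {
    fn new(size: usize) -> ThreadPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        for _ in 0..size {
            let receiver = Arc::clone(&receiver);
            thread::spawn(move || loop {
                // The lock is held only while pulling one job off the queue.
                let job = match receiver.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break, // pool dropped: shut the worker down
                };
                job();
            });
        }
        ThreadPool { sender }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.sender.send(Box::new(f)).unwrap();
    }
}
```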

templates using rouille::Response::html();

This might not really be an issue, but here it goes: first, thanks for writing rouille and providing some working examples (a rare circumstance for a Rust framework). I did manage to use forms, connect to the db, etc., but after trying for hours I can't figure out how to get a template working.
I have tried https://crates.io/crates/bart, rustache, handlebars, pretty much every package, but I can't figure out where I should use the template. Should I use rouille::Response::html()? That didn't really work.
Perhaps the example can specify how to use templates, or I will be happy to write one as soon as I figure this out! Thanks

Add an accept! macro

That allows dispatching depending on the Accept header of the request.

Example usage:

accept!(request,
    "text/json" => Response::json(...),
    "text/html" => Response::html(...),
    _ => Response::not_acceptable()
)

By knowing all the possible content types in advance, we can determine which one best matches the request.
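A first cut of that matching logic might walk the Accept list in order and return the first supported type. This sketch ignores q-values and wildcards, and the function name is illustrative:

```rust
// Return the first type from the Accept header that the handler supports.
// q-values and */* wildcards are ignored in this sketch.
fn negotiate<'a>(accept: &str, supported: &[&'a str]) -> Option<&'a str> {
    for part in accept.split(',') {
        // Strip parameters like ";q=0.8" and surrounding whitespace.
        let mime = part.split(';').next().unwrap_or("").trim();
        if let Some(found) = supported.iter().copied().find(|&s| s == mime) {
            return Some(found);
        }
    }
    None
}
```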

How to stop the server?

Hello,

What is the appropriate way to stop the server?

According to documentation for Server::run():

Runs the server forever, or until the listening socket is somehow force-closed by the operating system.

And start_server never returns (!).

Sessions rework

The way sessions currently work is weird and should be changed entirely.

As a reminder, to handle a session you need to:

  • Examine the Request to check for a specific cookie.
  • If present, load session data from some sort of storage (like a database).
  • If necessary, store updated data to the storage.
  • Put the session cookie in the Response.

Since there is a variety of ways to store session data, rouille is not going to handle this. However steps 1 and 4 can (and should) be handled by rouille.

I suggest users should do something like this:

let database = ...;

rouille::start_server("0.0.0.0:8000", move |request| {
    rouille::session::session(&request, "SID", 3600, |session| {
        struct Data { name: String }

        let mut session_data: Option<Data> = if session.client_has_sid() {
            Some(database.load_data(session.id()))
        } else { None };
        let had_data = session_data.is_some();

        let response = handle(request, &mut session_data);

        if let Some(data) = session_data {
            if !had_data { database.insert_entry(session.id(), data); }
            else { database.update_entry(session.id(), data); }
        } else {
            if had_data { database.remove_entry(session.id()); }
        }

        response
    })
});

It may look a bit cumbersome, but I think it's ok. It's coherent with rouille's design of being simple and explicit.

The session function would take care of analyzing the Request and furnishing a Session object. This Session object would be responsible for providing a session ID, potentially generating one if there is none. Then the session function transforms the Response to set the cookie if necessary.

Stop handler when client disconnects

When a client disconnects, the server should stop its request handler and abort all pending work. Reasons:

  • Less susceptible to DDoS attacks
  • Allows the API clients to take advantage of request cancelation without fear of wasting server resources (I do this in my web apps)

New release?

Would there be any problem with releasing a version with serde support now that it has been merged? Is there anything else holding it up?
