
Boost.Redis

Boost.Redis is a high-level Redis client library built on top of Boost.Asio that implements the Redis protocol RESP3. The requirements for using Boost.Redis are:

  • Boost. The library is included in Boost distributions starting with 1.84.
  • C++17 or higher.
  • Redis 6 or higher (must support RESP3).
  • GCC (10, 11, 12), Clang (11, 13, 14) or Visual Studio (16 2019, 17 2022).
  • Basic-level knowledge of Redis and Boost.Asio.

The latest release can be downloaded from https://github.com/boostorg/redis/releases. The library headers can be found in the include subdirectory, and a compilation of the source

#include <boost/redis/src.hpp>

is required. The simplest way to do this is to include this header in exactly one source file of your application. To build the examples and tests, CMake is supported, for example

# Linux
$ BOOST_ROOT=/opt/boost_1_84_0 cmake --preset g++-11

# Windows 
$ cmake -G "Visual Studio 17 2022" -A x64 -B bin64 -DCMAKE_TOOLCHAIN_FILE=C:/vcpkg/scripts/buildsystems/vcpkg.cmake

Connection

Let us start with a simple application that uses a short-lived connection to send a ping command to Redis

auto co_main(config const& cfg) -> net::awaitable<void>
{
   auto conn = std::make_shared<connection>(co_await net::this_coro::executor);
   conn->async_run(cfg, {}, net::consign(net::detached, conn));

   // A request containing only a ping command.
   request req;
   req.push("PING", "Hello world");

   // Response where the PONG response will be stored.
   response<std::string> resp;

   // Executes the request.
   co_await conn->async_exec(req, resp, net::deferred);
   conn->cancel();

   std::cout << "PING: " << std::get<0>(resp).value() << std::endl;
}

The roles played by the async_run and async_exec functions are

  • async_exec: Execute the commands contained in the request and store the individual responses in the resp object. Can be called from multiple places in your code concurrently.
  • async_run: Resolve, connect, ssl-handshake, resp3-handshake, health-checks, reconnection and coordinate low-level read and write operations (among other things).

Server pushes

Redis servers can also send a variety of pushes to the client, for example the messages generated by the SUBSCRIBE command.

The connection class supports server pushes by means of the boost::redis::connection::async_receive function, which can be called on the same connection that is being used to execute commands. The coroutine below shows how to use it

auto
receiver(std::shared_ptr<connection> conn) -> net::awaitable<void>
{
   request req;
   req.push("SUBSCRIBE", "channel");

   generic_response resp;
   conn->set_receive_response(resp);

   // Loop while reconnection is enabled
   while (conn->will_reconnect()) {

      // Reconnect to channels.
      co_await conn->async_exec(req, ignore, net::deferred);

      // Loop reading Redis pushes.
      for (;;) {
         error_code ec;
         co_await conn->async_receive(resp, net::redirect_error(net::use_awaitable, ec));
         if (ec)
            break; // Connection lost, break so we can reconnect to channels.

         // Use the response resp in some way and then clear it.
         ...

         consume_one(resp);
      }
   }
}

Requests

Redis requests are composed of one or more commands (in the Redis documentation they are called pipelines). For example

// Some example containers.
std::list<std::string> list {...};
std::map<std::string, mystruct> map { ...};

// The request can contain multiple commands.
request req;

// Command with variable length of arguments.
req.push("SET", "key", "some value", "EX", "2");

// Pushes a list.
req.push_range("SUBSCRIBE", list);

// Same as above but as an iterator range.
req.push_range("SUBSCRIBE", std::cbegin(list), std::cend(list));

// Pushes a map.
req.push_range("HSET", "key", map);

Sending a request to Redis is performed with boost::redis::connection::async_exec as already stated.

Config flags

The boost::redis::request::config object inside the request dictates how the boost::redis::connection should handle the request in some important situations. The reader is advised to read it carefully.

Responses

Boost.Redis uses the following strategy to support Redis responses

  • Static: If the number of commands in the request is known at compile time, use boost::redis::response.
  • Dynamic: Otherwise, use boost::redis::generic_response.

For example, the request below has three commands

request req;
req.push("PING");
req.push("INCR", "key");
req.push("QUIT");

and its response also has three elements, which can be read into the following response object

response<std::string, int, std::string>

The response behaves as a tuple and must have as many elements as the request has commands (see the exceptions below). It is also necessary that each tuple element is capable of storing the response to the command it refers to, otherwise an error occurs. To ignore responses to individual commands in the request use the tag boost::redis::ignore_t, for example

// Ignore the second and last responses.
response<std::string, boost::redis::ignore_t, std::string, boost::redis::ignore_t>

The following table shows the RESP3 types returned by some Redis commands

Command RESP3 type Documentation
lpush Number https://redis.io/commands/lpush
lrange Array https://redis.io/commands/lrange
set Simple-string, null or blob-string https://redis.io/commands/set
get Blob-string https://redis.io/commands/get
smembers Set https://redis.io/commands/smembers
hgetall Map https://redis.io/commands/hgetall

To map these RESP3 types into a C++ data structure use the table below

RESP3 type Possible C++ type Type
Simple-string std::string Simple
Simple-error std::string Simple
Blob-string std::string, std::vector Simple
Blob-error std::string, std::vector Simple
Number long long, int, std::size_t, std::string Simple
Double double, std::string Simple
Null std::optional<T> Simple
Array std::vector, std::list, std::array, std::deque Aggregate
Map std::vector, std::map, std::unordered_map Aggregate
Set std::vector, std::set, std::unordered_set Aggregate
Push std::vector, std::map, std::unordered_map Aggregate

For example, the response to the request

request req;
req.push("HELLO", 3);
req.push_range("RPUSH", "key1", vec);
req.push_range("HSET", "key2", map);
req.push("LRANGE", "key3", 0, -1);
req.push("HGETALL", "key4");
req.push("QUIT");

can be read in the tuple below

response<
   redis::ignore_t,  // hello
   int,              // rpush
   int,              // hset
   std::vector<T>,   // lrange
   std::map<U, V>,   // hgetall
   std::string       // quit
> resp;

Both are passed to async_exec as shown earlier

co_await conn->async_exec(req, resp, net::deferred);

If the intention is to ignore responses altogether use ignore

// Ignores the response
co_await conn->async_exec(req, ignore, net::deferred);

Responses that contain nested aggregates or heterogeneous data types will be given special treatment later in The general case. As of this writing, not all RESP3 types are used by the Redis server, which means in practice users will be concerned with a reduced subset of the RESP3 specification.

Pushes

Commands that have no response, such as

  • "SUBSCRIBE"
  • "PSUBSCRIBE"
  • "UNSUBSCRIBE"

must NOT be included in the response tuple. For example, the request below

request req;
req.push("PING");
req.push("SUBSCRIBE", "channel");
req.push("QUIT");

must be read into the tuple response<std::string, std::string>, which has static size two.

Null

It is not uncommon for applications to access keys that do not exist or that have already expired on the Redis server. To deal with these cases, Boost.Redis provides support for std::optional. To use it, wrap your type in std::optional like this

response<
   std::optional<A>,
   std::optional<B>,
   ...
   > resp;

co_await conn->async_exec(req, resp, net::deferred);

Everything else stays pretty much the same.

Transactions

To read responses to transactions we must first observe that Redis queues the transaction commands and sends their individual responses as elements of an array; the array itself is the response to the EXEC command. For example, to read the response to this request

req.push("MULTI");
req.push("GET", "key1");
req.push("LRANGE", "key2", 0, -1);
req.push("HGETALL", "key3");
req.push("EXEC");

use the following response type

using boost::redis::ignore;

using exec_resp_type = 
   response<
      std::optional<std::string>, // get
      std::optional<std::vector<std::string>>, // lrange
      std::optional<std::map<std::string, std::string>> // hgetall
   >;

response<
   boost::redis::ignore_t,  // multi
   boost::redis::ignore_t,  // get
   boost::redis::ignore_t,  // lrange
   boost::redis::ignore_t,  // hgetall
   exec_resp_type           // exec
> resp;

co_await conn->async_exec(req, resp, net::deferred);

For a complete example see cpp20_containers.cpp.

The general case

There are cases where responses to Redis commands won't fit in the model presented above, some examples are

  • Commands (like set) whose responses don't have a fixed RESP3 type. Expecting an int and receiving a blob-string will result in an error.
  • RESP3 aggregates that contain nested aggregates can't be read in STL containers.
  • Transactions with a dynamic number of commands can't be read in a response.

To deal with these cases Boost.Redis provides the boost::redis::resp3::node type, which is the most general form of an element in a response, be it a simple RESP3 type or the element of an aggregate. It is defined like this

template <class String>
struct basic_node {
   // The RESP3 type of the data in this node.
   type data_type;

   // The number of elements of an aggregate (or 1 for simple data).
   std::size_t aggregate_size;

   // The depth of this node in the response tree.
   std::size_t depth;

   // The actual data. For aggregate types this is always empty.
   String value;
};

Any response to a Redis command can be received in a boost::redis::generic_response, a vector of nodes that can be seen as a pre-order view of the response tree. Using it is no different from using other response types

// Receives any RESP3 simple or aggregate data type.
boost::redis::generic_response resp;
co_await conn->async_exec(req, resp, net::deferred);

For example, suppose we want to retrieve a hash data structure from Redis with HGETALL, some of the options are

  • boost::redis::generic_response: Works always.
  • std::vector<std::string>: Efficient and flat, all elements as string.
  • std::map<std::string, std::string>: Efficient if you need the data as a std::map.
  • std::map<U, V>: Efficient if you are storing serialized data. Avoids temporaries and requires boost_redis_from_bulk for U and V.

In addition to the above, users can also use the unordered versions of the containers. The same reasoning applies to sets (e.g. SMEMBERS) and other data structures in general.

Serialization

Boost.Redis supports serialization of user defined types by means of the following customization points

// Serialize.
void boost_redis_to_bulk(std::string& to, mystruct const& obj);

// Deserialize.
void boost_redis_from_bulk(mystruct& obj, char const* p, std::size_t size, boost::system::error_code& ec);

These functions are found via ADL, so they must be declared in the same namespace as the type they operate on. In the Examples section the reader can find examples showing how to serialize using JSON and protobuf.

Examples

The examples below show how to use the features discussed so far

  • cpp20_intro.cpp: Does not use awaitable operators.
  • cpp20_intro_tls.cpp: Communicates over TLS.
  • cpp20_containers.cpp: Shows how to send and receive STL containers and how to use transactions.
  • cpp20_json.cpp: Shows how to serialize types using Boost.Json.
  • cpp20_protobuf.cpp: Shows how to serialize types using protobuf.
  • cpp20_resolve_with_sentinel.cpp: Shows how to resolve a master address using sentinels.
  • cpp20_subscriber.cpp: Shows how to implement pubsub with re-subscription on reconnection.
  • cpp20_echo_server.cpp: A simple TCP echo server.
  • cpp20_chat_room.cpp: A command line chat built on Redis pubsub.
  • cpp17_intro.cpp: Uses callbacks and requires C++17.
  • cpp17_intro_sync.cpp: Runs async_run in a separate thread and performs synchronous calls to async_exec.

The main function used in some async examples has been factored out in the main.cpp file.

Echo server benchmark

This document benchmarks the performance of TCP echo servers I implemented in different languages using different Redis clients. The main motivations for choosing an echo server are

  • Simple to implement and does not require expert-level knowledge of most languages.
  • I/O bound: echo servers have very low CPU consumption in general and are therefore excellent for measuring how a program handles concurrent requests.
  • It simulates a typical backend well in regard to concurrency.

I also imposed some constraints on the implementations

  • It should be simple enough and not require writing too much code.
  • Favor standard idioms and avoid optimizations that require expert-level knowledge.
  • Avoid complex machinery such as connection and thread pools.

To reproduce these results run one of the echo-server programs in one terminal and the echo-server-client in another.

Without Redis

First I tested a pure TCP echo server, i.e. one that sends the messages directly to the client without interacting with Redis. The result can be seen below

The tests were performed with 1000 concurrent TCP connections on localhost, where latency is 0.07ms on average on my machine. On higher-latency networks the difference among libraries is expected to decrease.

  • I expected Libuv to have performance similar to Asio and Tokio.
  • I expected nodejs to come a little behind given that it is JavaScript code; otherwise I expected it to perform similarly to libuv, since that is the framework behind it.
  • Go did surprise me: faster than nodejs and libuv!

The code used in the benchmarks can be found at

With Redis

This is similar to the echo server described above but messages are echoed by Redis and not by the echo-server itself, which acts as a proxy between the client and the Redis server. The results can be seen below

The tests were performed on a network where latency is 35ms on average, otherwise it uses the same number of TCP connections as the previous example.

As the reader can see, the Libuv and Rust results are not depicted in the graph; the reasons are

  • redis-rs: This client comes so far behind that it can't even be represented together with the other benchmarks without making them look insignificant. I don't know for sure why it is so slow; I suppose it has something to do with its lack of automatic pipelining support. In fact, the more TCP connections I launch, the worse its performance gets.

  • Libuv: I left it out because it would require me to write too much C code. More specifically, I would have to use hiredis and implement support for pipelines manually.

The code used in the benchmarks can be found at

Conclusion

Redis clients have to support automatic pipelining to have competitive performance. For updates to this document follow https://github.com/boostorg/redis.

Comparison

The main reason I started writing Boost.Redis was to have a client compatible with the Asio asynchronous model. As I made progress I could also address what I considered weaknesses in other libraries. Due to time constraints I won't be able to give a detailed comparison with each client listed in the official list; instead I will focus on the most popular C++ client on GitHub by number of stars, namely

Boost.Redis vs Redis-plus-plus

Before we start it is important to mention some of the things redis-plus-plus does not support

  • The latest version of the communication protocol RESP3. Without that it is impossible to support some important Redis features like client side caching, among other things.
  • Coroutines.
  • Reading responses directly in user data structures to avoid creating temporaries.
  • Error handling with support for error-code.
  • Cancellation.

The remaining points will be addressed individually. Let us first have a look at what sending a command, a pipeline and a transaction looks like

auto redis = Redis("tcp://127.0.0.1:6379");

// Send commands
redis.set("key", "val");
auto val = redis.get("key"); // val is of type OptionalString.
if (val)
    std::cout << *val << std::endl;

// Sending pipelines
auto pipe = redis.pipeline();
auto pipe_replies = pipe.set("key", "value")
                        .get("key")
                        .rename("key", "new-key")
                        .rpush("list", {"a", "b", "c"})
                        .lrange("list", 0, -1)
                        .exec();

// Parse reply with reply type and index.
auto set_cmd_result = pipe_replies.get<bool>(0);
// ...

// Sending a transaction
auto tx = redis.transaction();
auto tx_replies = tx.incr("num0")
                    .incr("num1")
                    .mget({"num0", "num1"})
                    .exec();

auto incr_result0 = tx_replies.get<long long>(0);
// ...

Some of the problems with this API are

  • Heterogeneous treatment of commands, pipelines and transactions. This makes auto-pipelining impossible.
  • Any API that sends individual commands has a very restricted scope of usability and should be avoided for performance reasons.
  • The API imposes exceptions on users; no error-code overload is provided.
  • No way to reuse the buffer for new calls to e.g. redis.get in order to avoid further dynamic memory allocations.
  • Error handling of resolve and connect operations is not clear.

According to the documentation, pipelines in redis-plus-plus have the following characteristics

NOTE: By default, creating a Pipeline object is NOT cheap, since it creates a new connection.

This is clearly a downside of the API, as pipelines should be the default way of communicating, not an exception; paying such a high price for each pipeline imposes a severe performance cost. Transactions suffer from the very same problem.

NOTE: Creating a Transaction object is NOT cheap, since it creates a new connection.

In Boost.Redis there is no difference between sending one command, a pipeline or a transaction because requests are decoupled from the IO objects.

redis-plus-plus also supports an async interface; however, async support for Transaction and Subscriber is still on the way.

The async interface depends on a third-party event library, and so far only libuv is supported.

Async code in redis-plus-plus looks like the following

auto async_redis = AsyncRedis(opts, pool_opts);

Future<string> ping_res = async_redis.ping();

cout << ping_res.get() << endl;

As the reader can see, the async interface is based on futures, which are known to perform poorly. The biggest problem with this async design, however, is that it makes it impossible to write asynchronous programs correctly, since it starts an async operation on every command sent instead of enqueueing the message and triggering a write when it can be sent. It is also not clear how pipelines are realised with this design (if at all).

Reference

The High-Level page documents all public types.

Acknowledgement

Acknowledgements to the people who helped shape Boost.Redis

  • Richard Hodges (madmongo1): For very helpful support with Asio, the design of asynchronous programs, etc.
  • Vinícius dos Santos Oliveira (vinipsmaker): For useful discussion about how Boost.Redis consumes buffers in the read operation.
  • Petr Dannhofer (Eddie-cz): For helping me understand how the AUTH and HELLO command can influence each other.
  • Mohammad Nejati (ashtum): For pointing out scenarios where calls to async_exec should fail when the connection is lost.
  • Klemens Morgenstern (klemens-morgenstern): For useful discussion about timeouts, cancellation, synchronous interfaces and general help with Asio.
  • Vinnie Falco (vinniefalco): For general suggestions about how to improve the code and the documentation.
  • Bram Veldhoen (bveldhoen): For contributing a Redis-streams example.

Also many thanks to all individuals who participated in the Boost review

The Reviews can be found at: https://lists.boost.org/Archives/boost/2023/01/date.php. The thread with the ACCEPT from the review manager can be found here: https://lists.boost.org/Archives/boost/2023/01/253944.php.

Changelog

Boost 1.85

  • (Issue 170) Under load and on low-latency networks it is possible to start receiving responses before the write operation has completed, while the request is still marked as staged and not written. This interfered with the heuristics that classify responses as solicited or unsolicited.

  • (Issue 168). Provides a way of passing a custom SSL context to the connection. The design here differs from that of Boost.Beast and Boost.MySql: in Boost.Redis the connection owns the context instead of only storing a reference to a user-provided one. This is acceptable because an app needs only one connection for its entire lifetime, which makes the overhead of one SSL context per connection negligible.

  • (Issue 181). See a detailed description of this bug in this comment.

  • (Issue 182). Sets "default" as the default value of config::username. This makes it simpler to use the requirepass configuration in Redis.

  • (Issue 189). Fixes a narrowing conversion by using std::size_t instead of std::uint64_t for the sizes of bulks and aggregates. The code now relies on std::from_chars returning an error if an out-of-range value is received on platforms where std::size_t is 32 bits.

Boost 1.84 (First release in Boost)

  • Deprecates the async_receive overload that takes a response. Users should now first call set_receive_response to avoid constantly and unnecessarily setting the same response.

  • Uses std::function to type-erase the response adapter. This change should not influence users in any way but allowed important simplifications in the connection's internals, which resulted in a massive performance improvement.

  • The connection has a new member get_usage() that returns connection usage information, such as the number of bytes written, received, etc.

  • There are massive performance improvements in the consumption of server pushes, which are now communicated over an asio::channel and can therefore be buffered, avoiding blocking the socket read-loop. Batch reads are also supported by means of channel.try_send, and buffered messages can be consumed synchronously with connection::receive. The function boost::redis::cancel_one has been added to simplify processing multiple server pushes contained in the same generic_response. IMPORTANT: these changes may result in more than one push in the response when connection::async_receive resumes. The user must therefore be careful when calling resp.clear(): either ensure that all messages have been processed or just use consume_one.

v1.4.2 (incorporates changes to conform to the Boost review and more)

  • Adds boost::redis::config::database_index to make it possible to choose a database before starting to run commands, e.g. after an automatic reconnection.

  • Massive performance improvement. One of my tests went from 140k req/s to 390k req/s. This was possible after a parser simplification that reduced the number of reschedules and buffer rotations.

  • Adds Redis stream example.

  • Renames the project to Boost.Redis and moves the code into namespace boost::redis.

  • As pointed out in the reviews the to_bulk and from_bulk names were too generic for ADL customization points. They gained the prefix boost_redis_.

  • Moves boost::redis::resp3::request to boost::redis::request.

  • Adds new typedef boost::redis::response that should be used instead of std::tuple.

  • Adds new typedef boost::redis::generic_response that should be used instead of std::vector<resp3::node<std::string>>.

  • Renames redis::ignore to redis::ignore_t.

  • Changes async_exec to receive a redis::response instead of an adapter, namely, instead of passing adapt(resp) users should pass resp directly.

  • Introduces boost::redis::adapter::result to store responses to commands including possible resp3 errors without losing the error diagnostic part. To access values now use std::get<N>(resp).value() instead of std::get<N>(resp).

  • Implements full-duplex communication. Before these changes the connection would wait for a response to arrive before sending the next request. Now requests are continuously coalesced and written to the socket. request::coalesce became unnecessary and was removed. I could measure significant performance gains with these changes.

  • Improves serialization examples using Boost.Describe to serialize to JSON and protobuf. See cpp20_json.cpp and cpp20_protobuf.cpp for more details.

  • Upgrades to Boost 1.81.0.

  • Fixes build with libc++.

  • Adds high-level functionality to the connection classes. For example, boost::redis::connection::async_run will automatically resolve, connect, reconnect and perform health checks.

v1.4.0-1

  • Renames retry_on_connection_lost to cancel_if_unresponded. (v1.4.1)
  • Removes dependency on Boost.Hana, boost::string_view, Boost.Variant2 and Boost.Spirit.
  • Fixes the build and sets up CI on Windows.

v1.3.0-1

  • Upgrades to Boost 1.80.0

  • Removes automatic sending of the HELLO command. This can't be implemented properly without bloating the connection class. It is now the user's responsibility to send HELLO. Requests that contain it have priority over other requests and will be moved to the front of the queue; see aedis::request::config.

  • Automatic name resolving and connecting have been removed from aedis::connection::async_run. Users have to do this step manually now. The reason for this change is that having them built in doesn't offer the flexibility needed by Boost users.

  • Removes health checks and idle timeout. This functionality must now be implemented by users, see the examples. This is part of making Aedis useful to a larger audience and suitable for the Boost review process.

  • aedis::connection is now a typedef for a connection over net::ip::tcp::socket, and aedis::ssl::connection for one over net::ssl::stream<net::ip::tcp::socket>. Users that need another stream type must now specialize aedis::basic_connection.

  • Adds a low level example of async code.

v1.2.0

  • aedis::adapt supports now tuples created with std::tie. aedis::ignore is now an alias to the type of std::ignore.

  • Provides allocator support for the internal queue used in the aedis::connection class.

  • Changes the behaviour of async_run to complete with success if asio::error::eof is received. This makes it easier to write composed operations with awaitable operators.

  • Adds allocator support in the aedis::request (a contribution from Klemens Morgenstern).

  • Renames aedis::request::push_range2 to push_range. The suffix 2 was used for disambiguation. Klemens fixed it with SFINAE.

  • Renames fail_on_connection_lost to aedis::request::config::cancel_on_connection_lost. Now, it will only cause connections to be canceled when async_run completes.

  • Introduces aedis::request::config::cancel_if_not_connected which will cause a request to be canceled if async_exec is called before a connection has been established.

  • Introduces a new request flag aedis::request::config::retry that, if set to true, prevents the request from being canceled when it was sent to Redis but remained unresponded after async_run completed. It provides a way to avoid executing commands twice.

  • Removes the aedis::connection::async_run overload that takes request and adapter as parameters.

  • Changes the way aedis::adapt() behaves with std::vector<aedis::resp3::node<T>>. Receiving RESP3 simple errors, blob errors or null won't cause an error but will be treated as a normal response. It is the user's responsibility to check the content of the vector.

  • Fixes a bug in connection::cancel(operation::exec). Now this call will only cancel non-written requests.

  • Implements per-operation implicit cancellation support for aedis::connection::async_exec. For example, co_await (conn.async_exec(...) || timer.async_wait(...)) will cancel the request as long as it has not been written.

  • Changes the aedis::connection::async_run completion signature to f(error_code). This is how it was in the past; the second parameter was not helpful.

  • Renames operation::receive_push to aedis::operation::receive.

v1.1.0-1

  • Removes coalesce_requests from the aedis::connection::config, it became a request property now, see aedis::request::config::coalesce.

  • Removes max_read_size from the aedis::connection::config. The maximum read size can be specified now as a parameter of the aedis::adapt() function.

  • Removes the aedis::sync class; see intro_sync.cpp for how to perform synchronous and thread-safe calls. This is possible in Boost 1.80 only, as it requires boost::asio::deferred.

  • Moves from boost::optional to std::optional. This is part of moving to C++17.

  • Changes the behaviour of the second aedis::connection::async_run overload so that it always returns an error when the connection is lost.

  • Adds TLS support, see intro_tls.cpp.

  • Adds an example that shows how to resolve addresses over sentinels, see subscriber_sentinel.cpp.

  • Adds aedis::connection::timeouts::resp3_handshake_timeout. This is the timeout used to send the HELLO command.

  • Adds aedis::endpoint where in addition to host and port, users can optionally provide username, password and the expected server role (see aedis::error::unexpected_server_role).

  • aedis::connection::async_run checks whether the server role received in the hello command is equal to the expected server role specified in aedis::endpoint. To skip this check, leave the role variable empty.

  • Removes reconnect functionality from aedis::connection. It is possible in simple reconnection strategies but bloats the class in more complex scenarios, for example, with sentinel, authentication and TLS. This is trivial to implement in a separate coroutine. As a result the enum event and async_receive_event have been removed from the class too.

  • Fixes a bug in connection::async_receive_push that prevented passing any response adapter other than adapt(std::vector<node>).

  • Changes the behaviour of aedis::adapt() that caused RESP3 errors to be ignored. One consequence of this is that connection::async_run would not exit with failure on servers that required authentication.

  • Changes the behaviour of connection::async_run that would cause it to complete with success when an error in the connection::async_exec occurred.

  • Ports the buildsystem from autotools to CMake.

v1.0.0

  • Adds experimental cmake support for windows users.

  • Adds new class aedis::sync that wraps an aedis::connection in a thread-safe and synchronous API. All free functions from the sync.hpp are now member functions of aedis::sync.

  • Splits aedis::connection::async_receive_event in two functions, one to receive events and another for server-side pushes, see aedis::connection::async_receive_push.

  • Removes collision between aedis::adapter::adapt and aedis::adapt.

  • Adds connection::operation enum to replace cancel_* member functions with a single cancel function that gets the operations that should be cancelled as argument.

  • Bugfix: fixes a bug on reconnect from a state where the connection object had unsent commands, which could cause async_exec to never complete under certain conditions.

  • Bugfix: Documentation of adapt() functions were missing from Doxygen.

v0.3.0

  • Adds experimental::exec and receive_event functions to offer a thread safe and synchronous way of executing requests across threads. See intro_sync.cpp and subscriber_sync.cpp for examples.

  • connection::async_read_push was renamed to async_receive_event.

  • connection::async_receive_event is now used to communicate internal events to the user, such as resolve, connect, push, etc. For examples see cpp20_subscriber.cpp and connection::event.

  • The aedis directory has been moved to include to look more similar to Boost libraries. Users should now replace -I/aedis-path with -I/aedis-path/include in the compiler flags.

  • The AUTH and HELLO commands are now sent automatically. This change was necessary to implement reconnection. The username and password used in AUTH should be provided by the user in connection::config.

  • Adds support for reconnection. See connection::enable_reconnect.

  • Fixes a bug in the connection::async_run(host, port) overload that was causing crashes on reconnection.

  • Fixes the executor usage in the connection class. Before these changes it was imposing any_io_executor on users.

  • connection::async_receive_event is no longer cancelled when connection::async_run exits. This change makes user code simpler.

  • The connection::async_run overload taking host and port has been removed. Use the other connection::async_run overload.

  • The host and port parameters from connection::async_run have been moved to connection::config to better support authentication and failover.

  • Many simplifications in the chat_room example.

  • Fixes the build with Clang compilers and makes some improvements in the documentation.

v0.2.0-1

  • Fixes a bug that happens on very high load. (v0.2.1)
  • Major rewrite of the high-level API. There is no need to use the low-level API anymore.
  • No more callbacks: Sending requests follows the ASIO asynchronous model.
  • Support for reconnection: Pending requests are not canceled when a connection is lost and are re-sent when a new one is established.
  • The library no longer sends HELLO-3 on the user's behalf. This is important to support AUTH properly.

v0.1.0-2

  • Adds reconnect coroutine in the echo_server example. (v0.1.2)
  • Corrects client::async_wait_for_data to use make_parallel_group to launch the operation. (v0.1.2)
  • Improvements in the documentation. (v0.1.2)
  • Avoids dynamic memory allocation in the client class after reconnection. (v0.1.2)
  • Improves the documentation and adds some features to the high-level client. (v0.1.1)
  • Improvements in the design and documentation.

v0.0.1

  • First release to collect design feedback.

redis's Issues

is_aggregate issue

Hello, it's problematic to use custom names like is_aggregate, which have the same name as their std counterparts. A simple "using namespace std" in a project will make aedis uncompilable. Please rename the affected function.

Error: aedis/adapter/detail/adapters.hpp(148,11): error : no viable constructor or deduction guide for deduction of template arguments of 'is_aggregate'

Improve support for cancellation

At the moment the connection classes provide support for cancellation via the cancel member function. Although not common or even advisable, some users may want to cancel operations with timeouts like this

co_await (conn.async_run(...) || timer.async_wait(...));

This works as long as the timeout is not too small, e.g. 10ms. To handle that properly Aedis needs to improve cancellation support. Below are some parts of a conversation I've had with Richard:

timer waits and IO reads will be cancellation-aware. But an operation that you have written yourself may not be.

these operations may want to check the cancellation context hasn't already been cancelled before initiating, and also store a cancellation handler in connected cancellation slots in order to cut short any long-standing operation. asio coroutines do this pre-initiation check for you https://github.com/boostorg/asio/blob/0af7858e7b5603a2415a30b69e16c7ef1d47a5e9/include/boost/asio/detail/deadline_timer_service.hpp#L251

also, If you cancel after posting an operation, when the posted code is executed, it may want to check whether the cancellation_state has been cancelled while the posted step was waiting in the executor.

// some code...
asio::post(exec, some_next_step(std::move(my_handler)));
// cancellation happens here...
// some_next_step will not know that the handler was cancelled unless it checks.
// ...
// ... some_next_step now executes...
void some_next_step(auto handler)
{
  auto cs = get_cancellation_state(handler);
  if (cs.cancelled() != asio::cancellation_type::none)
  {
    complete(handler, operation_aborted);
    return;
  }
  // ... not cancelled
}

fail to reconnect

When I run the subscriber example, stopping the Redis server for a while and then restarting it causes the program to crash.

Port uses std::string type

The port number uses the std::string type in config, which is counterintuitive. It is recommended to use the unsigned short type.

Assertion fails if cancellation is requested before the connection is established

It seems that when we send a cancellation signal during the connection phase this assertion fails.
I'm using https://github.com/mzimbres/aedis/tree/798f193f14b01b44caccfebebcdedbea9432b1d6
: ~/third_party/aedis/include/aedis/detail/connection_ops.hpp:536: void aedis::detail::reader_op<aedis::connection<>>::operator()(Self &, boost::system::error_code, std::size_t) [Conn = aedis::connection<>, Self = boost::asio::detail::composed_op<aedis::detail::reader_op<aedis::connection<>>, boost::asio::detail::composed_work<void (boost::asio::any_io_executor)>, boost::asio::experimental::detail::parallel_group_op_handler<0, boost::asio::experimental::wait_for_one_error, boost::asio::detail::composed_op<aedis::detail::start_op<aedis::connection<>>, boost::asio::detail::composed_work<void (boost::asio::any_io_executor)>, boost::asio::detail::composed_op<aedis::detail::run_op<aedis::connection<>>, boost::asio::detail::composed_work<void (boost::asio::any_io_executor)>, boost::asio::detail::as_tuple_handler<boost::asio::detail::awaitable_handler<boost::asio::any_io_executor, std::tuple<boost::system::error_code>>>, void (boost::system::error_code)>, void (boost::system::error_code)>, (lambda at ~/third_party/aedis/include/aedis/detail/connection_ops.hpp:358:13), (lambda at ~/third_party/aedis/include/aedis/detail/connection_ops.hpp:359:13), (lambda at ~/third_party/aedis/include/aedis/detail/connection_ops.hpp:360:13), (lambda at ~/third_party/aedis/include/aedis/detail/connection_ops.hpp:361:13)>, void (boost::system::error_code)>]: Assertion !conn->read_buffer_.empty()' failed.

Error: deque iterators incompatible

Aedis raised an error due to incompatible iterators in the function void cancel_push_requests(typename reqs_type::iterator end), called from writer_op as conn->cancel_push_requests(end).

Failed on:
auto point = std::stable_partition(std::begin(reqs_), end, [](auto const& ptr) {
    return ptr->req->commands() != 0;
});

Adapt visitor fails with multi/exec sequence

Version: latest master

With the request sequence
req.push("MULTI");
req.push("SET", "set-key1", "1");
req.push("SET", "set-key2", "2");
req.push("EXEC");

and resp as

std::tuple<
    aedis::ignore, // multi
    aedis::ignore, // set1
    aedis::ignore, // set2
    std::tuple<std::string, std::string> // exec
> resp;

I get an assert in the adapt visitor in adapt.hpp: BOOST_ASSERT(i < adapters_.size());
i is 4, the adapters size is 4. When I add some other command after EXEC, it works.

Logical dead lock when we don't want to reuse the connection

This is not a bug but a property of the current design: there is a logical deadlock in scenarios where we don't want to reconnect and just want to cancel and join ongoing tasks that are using the connection.

  • A connection is disconnected.
  • We cancel the current pending requests with conn.cancel(operation::exec).
  • Any new call to async_exec at this point leads to a deadlock because it will wait for a reconnect.

What I can do is keep the connection status somewhere and check it before calls to async_exec.

This happens because the tasks that I cancel on Redis disconnect can be cancelled for other reasons too, and in their last step they send a message to Redis:

auto req = aedis::resp3::request{};
req.push("XADD", "connection", "*", "event", "disconnect", "uuid", uuid);
co_await connection_.async_exec(req, aedis::adapt(), asio::use_awaitable);

I expect to get an exception here when Redis itself is disconnected and the task gets cancelled, without any need for checking the connection status.

Another scenario where this can happen is when I create new tasks while Redis is disconnected and ongoing tasks are being waited for. They start to send requests with async_exec, and I expect an exception here without the need for a status check.

This all happens because of the nature of the connection: it is designed so that it can be reconnected, but that leads to a bit of labor when we want to use it as a reliable connection, i.e. one that, if disconnected, should clean up and return with an error.

Is this ready?

I'm looking for a solid C++ Redis client to use for an embedded project and I stumbled upon this. It looks pretty active, but young. Is this ready for production use?

Better support for working with XREADGROUP and XREAD replies

XREADGROUP and XREAD replies have the same structure. If we use std::vector<aedis::resp3::node<std::string>> for their responses, it is very difficult to work with the replies.
It would be great if you could add some library support to make working with Redis streams easier.

> XREAD COUNT 2 STREAMS stream_1 stream_2 0
1) 1) "stream_1"
   2) 1) 1) 1519073278252-0
         2) 1) "key_1"
            2) "value_1"
            3) "key_2"
            4) "value_2"
      2) 1) 1519073279157-0
         2) 1) "key_1"
            2) "value_1"
            3) "key_2"
            4) "value_2"
      3) 1) 1519073279157-0
         2) 1) "key_1"
            2) "value_1"
            3) "key_2"
            4) "value_2"
2) 1) "stream_2"
   2) 1) 1) 1519073278252-0
         2) 1) "key_1"
            2) "value_1"
      2) 1) 1519073279157-0
         2) 1) "key_1"
            2) "value_1"

aedis/src.hpp can not be built alone

It seems it needs boost/assert.hpp included.

third_party/aedis/include/aedis/impl/error.ipp:40:12: error: use of undeclared identifier 'BOOST_ASSERT'
         default: BOOST_ASSERT(false); return "Aedis error.";

The benchmark is a little weird.

For the Go version you use bufio, but not for the C++ version.
So I modified your code a little to give them a fair fight.

C++ Server

//
// echo_server.cpp
// ~~~~~~~~~~~~~~~
//
// Copyright (c) 2003-2022 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//

#include <cstdio>
#include <boost/asio.hpp>

namespace net = boost::asio;
namespace this_coro = net::this_coro;
using net::ip::tcp;
using net::detached;
using executor_type = net::io_context::executor_type;
using socket_type = net::basic_stream_socket<net::ip::tcp, executor_type>;
using tcp_socket = net::use_awaitable_t<executor_type>::as_default_on_t<socket_type>;
using acceptor_type = net::basic_socket_acceptor<net::ip::tcp, executor_type>;
using tcp_acceptor = net::use_awaitable_t<executor_type>::as_default_on_t<acceptor_type>;
using awaitable_type = net::awaitable<void, executor_type>;
constexpr net::use_awaitable_t<executor_type> use_awaitable;

awaitable_type echo(tcp_socket socket)
{
    try {
        char data[1024];
        for (;;) {
            std::size_t n = co_await socket.async_read_some(net::buffer(data), use_awaitable);
            co_await async_write(socket, net::buffer(data, n), use_awaitable);
        }
    } catch (std::exception const& e) {
        //std::printf("echo Exception: %s\n", e.what());
    }
}

awaitable_type listener()
{
    auto ex = co_await this_coro::executor;
    tcp_acceptor acceptor(ex, {tcp::v4(), 12345});
    for (;;) {
        tcp_socket socket = co_await acceptor.async_accept(use_awaitable);
        co_spawn(ex, echo(std::move(socket)), detached);
    }
}

int main()
{
    try {
        net::io_context io_context{BOOST_ASIO_CONCURRENCY_HINT_UNSAFE_IO};
        co_spawn(io_context, listener(), detached);
        io_context.run();
    } catch (std::exception const& e) {
        std::printf("Exception: %s\n", e.what());
    }
}

Go server

package main

import (
	"fmt"
	"net"
	"os"
	"runtime"
)

func echo(conn net.Conn) {
	buf := make([]byte, 1024)
	for {
		n, err := conn.Read(buf)
		if err != nil {
			break
		}
		_, err = conn.Write(buf[:n])
		if err != nil {
			break
		}
	}
}

func main() {
	runtime.GOMAXPROCS(1)

	l, err := net.Listen("tcp", "0.0.0.0:12345")
	if err != nil {
		fmt.Println("ERROR", err)
		os.Exit(1)
	}

	for {
		conn, err := l.Accept()
		if err != nil {
			fmt.Println("ERROR", err)
			continue
		}
		go echo(conn)
	}
}

With a Go version client

package main

import (
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"
)

const ConnNumber = 100
const MessageLoop = 10000

func main() {
	start := time.Now()
	wg := &sync.WaitGroup{}
	wg.Add(ConnNumber)
	for i := 0; i < ConnNumber; i++ {
		go OneConn(wg)
	}
	wg.Wait()
	fmt.Println("[TIME USED] ", time.Since(start).Milliseconds(), "[SUM] ", sum)
}

var sum int64 = 0

func OneConn(wg *sync.WaitGroup) {
	defer wg.Done()
	conn, err := net.Dial("tcp", "127.0.0.1:12345")
	if err != nil {
		fmt.Println("Error: ", err)
		return
	}

	for i := 0; i < MessageLoop; i++ {
		n, err := conn.Write([]byte("PING\r\n"))
		if err != nil {
			fmt.Println("Error: ", err)
			return
		}
		buf := make([]byte, 1024)
		n, err = conn.Read(buf)
		if err != nil {
			fmt.Println("Error: ", err)
			return
		}
		atomic.AddInt64(&sum, int64(n))
	}
}

On my computer, I run three loops for each version and here is the result:

Go

[TIME USED]  7493 [SUM]  6000000
[TIME USED]  7070 [SUM]  6000000
[TIME USED]  7241 [SUM]  6000000

C++

[TIME USED]  7925 [SUM]  6000000
[TIME USED]  7752 [SUM]  6000000
[TIME USED]  7857 [SUM]  6000000

As the results show, Go is a little faster than C++ for this simple IO task. Actually, for IO tasks Go is not that poor at performance.

No matching constructor for req_info found

Add a constructor to req_info to prevent compile errors under Clang.
Solution: add req_info(const executor_type& ex) : timer(ex) {} and adjust constructor calls accordingly.

Docs must explain the async_run rationale

The documentation should explain the rationale for this:

co_await ( conn->async_run() || conn->async_exec( req, adapt(resp) ) );

A "FAQ" section could be a good approach.

No docs?

The docs link you provide is not working (GitHub Pages is down or something).

HGETALL fails if not sent together with HELLO

I'm trying to read a hash using HGETALL into a std::map<std::string, std::string>. It fails unless the request starts with a req.push("HELLO", 3).

Doesn't work (fails with Expects resp3 map. [aedis:8]):

aedis::resp3::request req;
req.push("HGETALL", key);
std::tuple<std::map<std::string, std::string>> resp;
co_await conn_->async_exec(req, aedis::adapt(resp), use_awaitable);

However, it works when sending HELLO first (like the examples).

aedis::resp3::request req;
req.push("HELLO", 3);
req.push("HGETALL", key);
std::tuple<aedis::ignore, std::map<std::string, std::string>> resp;
co_await conn_->async_exec(req, aedis::adapt(resp), use_awaitable);

The documentation is not clear about when it's necessary to send a HELLO first.

unguarded headers

There are still some unguarded headers, like aedis/impl/error.ipp and a few others.

Revisit Boost reviews and implement what is meaningful

Subscriber error reported on the Boost review

  1. I've modified cpp20_subscriber to add another task that sends commands
    with async_exec while reading subscription replies. If any of the commands
    issued with async_exec contain any errors (like the LPOP above), the program
    crashes with a "conn->cmds_ != 0" assertion.

Sending a request (using async_exec) in async_receive loop hangs/stalls

Hi @mzimbres,

I have made a small example where I receive messages published to a channel (/my-channel in this example). Upon receiving a message, I issue a HGETALL request to fetch some data from a hash.

If another message is received immediately after, the async_exec with the HGETALL request hangs forever.

Receiver routine:

auto receiver(shared_ptr<aedis::connection> conn) -> asio::awaitable<void>
{
    aedis::resp3::request sub_req;
    sub_req.push("SUBSCRIBE", "/my-channel");
    co_await conn->async_exec(sub_req, aedis::adapt(), asio::use_awaitable);

    for (;;) {
        std::vector<aedis::resp3::node<std::string>> resp;
        for (;;) {
            co_await conn->async_receive(aedis::adapt(resp), asio::use_awaitable);
            resp.clear();

            map<string, string> values;
            auto resp2 = std::tie(values);
            aedis::resp3::request req;
            req.push("HGETALL", "/my-hash");
            co_await conn->async_exec(req, aedis::adapt(resp2), asio::use_awaitable);            
        }
    }
}

I publish two times to the channel right after each other.
echo -e "PUBLISH /my-channel 1\nPUBLISH /my-channel 2" | redis-cli

Full example code: https://gist.github.com/jsaf0/30c1a32b5208716e3c531b1d389be9c4

Test streams should use fail_count

One of the main test modes of test::stream is that it can fail on the Nth operation, for increasing values of N. This ensures that every possible branch in the code being tested is exercised:

https://github.com/boostorg/beast/blob/97ece405b8127e1d4767a8f63b82478d5637b9ec/include/boost/beast/_experimental/test/fail_count.hpp#L31

The tests should be constructing the test::stream with a fail_count, and then running the same test over and over again in a loop with incrementing fail_count until the test passes.

Consumers / producers example

Hello,
I think it would be nice, following our conversation, to have some 'real life' examples with multiple consumers and producers setting and getting data, and also to show how to use various completion tokens in async execs, how to properly ensure request lifetimes, their reuse and so on. If possible, in the C++17 standard. Petr

Add retry flag to the request class

The retry flag should express the user's desire to retry sending requests that were sent but remained unanswered after a connection loss. For example, users may want to retry GET commands but not SET commands.

fail to compile on linux OS

I used aedis::resp3::request in two different header files, and the link failed. The following is the error message:

/usr/bin/ld: ./server.o: in function aedis::adapter::detail::from_bulk(bool&, boost::basic_string_view<char, std::char_traits<char> >, boost::system::error_code&)': /usr/local/include/aedis/adapter/detail/adapters.hpp:68: multiple definition of aedis::adapter::detail::from_bulk(bool&, boost::basic_string_view<char, std::char_traits >, boost::system::error_code&)'; ./redis_client.o:/usr/local/include/aedis/adapter/detail/adapters.hpp:68: first defined here
/usr/bin/ld: ./server.o: in function aedis::adapter::detail::set_on_resp3_error(aedis::resp3::type, boost::system::error_code&)': /usr/local/include/aedis/adapter/detail/adapters.hpp:93: multiple definition of aedis::adapter::detail::set_on_resp3_error(aedis::resp3::type, boost::system::error_code&)'; ./redis_client.o:/usr/local/include/aedis/adapter/detail/adapters.hpp:92: first defined here
/usr/bin/ld: ./server.o: in function aedis::adapt()': /usr/local/include/aedis/adapt.hpp:141: multiple definition of aedis::adapt()'; ./redis_client.o:/usr/local/include/aedis/adapt.hpp:141: first defined here
/usr/bin/ld: ./server.o: in function aedis::adapter::detail::parse_double(char const*, unsigned long, boost::system::error_code&)': /usr/local/include/aedis/adapter/detail/adapters.hpp:42: multiple definition of aedis::adapter::detail::parse_double(char const*, unsigned long, boost::system::error_code&)'; ./redis_client.o:/usr/local/include/aedis/adapter/detail/adapters.hpp:42: first defined here
/usr/bin/ld: ./server.o: in function aedis::adapter::detail::from_bulk(double&, boost::basic_string_view<char, std::char_traits<char> >, boost::system::error_code&)': /usr/local/include/aedis/adapter/detail/adapters.hpp:75: multiple definition of aedis::adapter::detail::from_bulk(double&, boost::basic_string_view<char, std::char_traits >, boost::system::error_code&)'; ./redis_client.o:/usr/local/include/aedis/adapter/detail/adapters.hpp:75: first defined here
collect2: error: ld returned 1 exit status

Improve support to Redis error messages

To receive Redis error messages Aedis users must use resp3::node or std::vector<resp3::node> as response types. This is too restrictive, as most users want to receive responses in their final data structures.

One idea to support this is to pass the error as the second parameter of the adapt function, so that instead of e.g

co_await conn->async_exec(req, adapt(resp));

users would be able to

co_await conn->async_exec(req, adapt(resp, error));

and the adapter would store the error in the error variable.

Add reserve member to request class

Klemens: the aedis::request object goes into the const_buffer as is?
Occase: Yes, it is the payload written to the socket.
Klemens: ok, then it's probably correct as long as request has a .reserve() member
Occase: I will add one, it is missing now.

More suggestions for documentation improvements

Some of the reasons for the rejections are not particularly strong, but there is one recurring theme which even affected me, and that is this async_run and async_exec business with the operator|| on the coroutines.

it bugged me, it bugged David, it pretty much bugs everyone who sees it

I understand why it is there but it has to be presented better

Aside from the general problem of the documentation needing improvement, it needs to be stated clearly, and from the beginning, in the README and the HTML docs, what the library's requirements on run and exec are.

I will try to give an example of some exposition

"This library implements RESP3, a string-based protocol which can multiplex any number of client requests, responses, and server pushes onto a single active socket connection to the Redis server."

"Due to server pushes and multiplexing, there is not a 1:1 correspondence between client requests and server results."

"The interface for the library provides the function async_runto allow the caller to run the necessary, ongoing asynchronous operation which reads and writes to the Redis server as needed to deliver requests, receive responses, and receive server pushes."

"Depending on the user's requirements, there are different styles of calling async_run. If there is only one active client communicating with the server, the easiest way to call async_run is to only run it simultaneously with each exec commands, thusly:"
co_await (con.async_run() || con.async_exec) && ...; // whatevs

(and then we have to explain why this syntax looks like this, and the benefits of doing so, which include the best performance in terms of application-level TCP/IP flow control and backpressure)

"If there are many in-process clients performing simultaneous requests, an alternative is to launch a long-running coroutine which calls async_run:"
awaitable<void> do_run(connection& c)
{
    co_await c.async_run();
}

(and then you have to give an example using callbacks)

"While calling async_run is a sufficient condition for maintaining active two-way communication with the Redis server, most production deployments will want to do more. For example, they may want to reconnect if the connection goes down, either to the same server or a failover server. They may want to perform health checks. They may want to run for a certain amount of time (say, 1 second) and then briefly perform an algorithm which requires that no other threads are accessing shared data structures like the connection."

"The library requires the caller to manually invoke async_run to allow these customizations."

(and then show examples of health_checker and reconnect)

This should be explained in both the README and the html docs, and it has to come first because it is a principal feature of the library that anyone who integrates it will need to be aware of.

Not all control paths return value

aeedis\impl\error.ipp(47): error C4715: 'aedis::detail::error_category_impl::message': not all control paths return a value
clang 14.0.6

Make the connection full-duplex

Currently, write operations happen only after the response to the previous command arrives. There is, however, nothing in the protocol that prohibits writing continuously as requests come in.

This is not likely to produce a considerable (or even measurable) performance improvement, given the support for automatic pipelining, but there is also no reason not to implement it.

Let the implementation call adapt(resp) automatically

The fact that users must call adapt(resp) was a major source of frustration in the Boost review; for example, reviewers expected

co_await conn->async_exec(req, resp);

instead of

co_await conn->async_exec(req, adapt(resp));

I admit this makes the library look a bit unpolished. However, we can change this easily by offering adapt as a customization point (and renaming it to boost_redis_adapt).

We do not HANG

async_receive Javadoc should not use the word "hang". Consider this instead:

When pushes arrive and there is no async_receive operation in progress, pushed data, requests, and responses will be paused until async_receive is called again

Deadlock if cancellation is requested while there is a PING request in flight

I'm not sure, but the problem seems to be here.
If I am not mistaken, you used the close_on_run_completion flag exactly to prevent this situation, but I don't know why it leads to a deadlock anyway.

You can reproduce it with a loop like:

for (;;)
{
    auto timer = asio::steady_timer{ executor, std::chrono::seconds{ 1 } };
    co_await (connection.async_run(endpoint, asio::use_awaitable) || timer.async_wait(asio::use_awaitable));
}

If you change the timer to something shorter than the ping time, like 800ms, it will not lead to a deadlock.

We need response typedefs

We have a request type but no response type. We should add

using response = std::tuple<...>;

and

using generic_response = std::vector<resp3::node<std::string>>;

The request type can also be moved from redis::resp3::request to redis::request.

XREAD command and std::vector< aedis::resp3::node< std::string > >

As you know, the XREAD command can return nil when there are no messages. Using std::vector<aedis::resp3::node<std::string>> for the response, a nil reply leads to an error in the read operation and cancels the connection.

Also, is there a possibility of a more ergonomic type as a container for the result of the XREAD and XREADGROUP commands?

Considering that they have a fixed structure:

> XREAD COUNT 2 STREAMS mystream writers 0-0 0-0
1) 1) "mystream"
   2) 1) 1) 1526984818136-0
         2) 1) "duration"
            2) "1532"
            3) "event-id"
            4) "5"
            5) "user-id"
            6) "7782813"
      2) 1) 1526999352406-0
         2) 1) "duration"
            2) "812"
            3) "event-id"
            4) "9"
            5) "user-id"
            6) "388234"
2) 1) "writers"
   2) 1) 1) 1526985676425-0
         2) 1) "name"
            2) "Virginia"
            3) "surname"
            4) "Woolf"
      2) 1) 1526985685298-0
         2) 1) "name"
            2) "Jane"
            3) "surname"
            4) "Austen"

Try replacing use_awaitable with deferred for Boost 1.80 in the TCP server

Klemens has suggested replacing use_awaitable with deferred in Boost 1.80 to improve performance in the TCP echo server and the RESP3 echo server. If there are any improvements, the graphs should also be updated. Some background information:

Any coroutine always allocates a frame, but that doesn't mean an async_op does. I.e. if you call async_foo(use_awaitable), that'll return an awaitable that has a frame, but in Boost 1.80 you can directly await async ops, e.g. co_await async_foo(asio::deferred); then it doesn't need to allocate anything.
