
folsom's Introduction

Folsom

Folsom is an attempt at a small and stable memcache client. It is fully asynchronous, based on Netty, and uses Java 8's CompletionStage throughout the API.


Build dependencies

  • Java 8 or higher
  • Maven
  • Docker - to run integration tests.

Runtime dependencies

  • Netty 4
  • Google Guava
  • Yammer metrics (optional)
  • OpenCensus (optional)

Usage

Folsom is meant to be used as a library embedded in other software.

To import it with Maven, use this:

<!-- In dependencyManagement section -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-bom</artifactId>
  <version>1.14.0</version>
  <type>pom</type>
  <scope>import</scope>
</dependency>

<!-- In dependencies section -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom</artifactId>
</dependency>

<!-- optional if you want to expose folsom metrics with spotify-semantic-metrics -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-semantic-metrics</artifactId>
</dependency>

<!-- optional if you want to expose folsom metrics with yammer -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-yammer-metrics</artifactId>
</dependency>

<!-- optional if you want to expose folsom tracing with OpenCensus -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-opencensus</artifactId>
</dependency>

<!-- optional if you want to use AWS ElastiCache auto-discovery -->
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom-elasticache</artifactId>
</dependency>

If you want to use one of the metrics or tracing libraries, make sure you use the same version as the main artifact.

We are using semantic versioning.

The main entry point to the folsom API is the MemcacheClientBuilder class. It has chainable setter methods to configure various aspects of the client. The methods connectBinary() and connectAscii() construct MemcacheClient instances using the binary protocol and the ascii protocol respectively. For details on their differences, see Protocol below.

All calls to the folsom API that interact with a memcache server are asynchronous, and the result is typically accessible from CompletionStage instances. An exception to this rule is the pair of methods that connect clients to their remote endpoints, MemcacheClientBuilder.connectBinary() and MemcacheClientBuilder.connectAscii(), which return a MemcacheClient immediately while asynchronously attempting to connect to the configured remote endpoint(s).

Since code using the folsom API should be written to handle intermittent failures with MemcacheClosedException anyway, folsom does not concern itself with waiting for the initial connect to complete. For single-server connections, ConnectFuture provides functionality to wait for the initial connection to succeed, as shown in the example below.

final MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress(hostname)
    .connectAscii();
// make it wait until the client has connected to the server
ConnectFuture.connectFuture(client).toCompletableFuture().get();

client.set("key", "value", 10000).toCompletableFuture().get();
client.get("key").toCompletableFuture().get();

client.shutdown();

Clients are single-use: once shutdown() has been invoked, the client can no longer be used.
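For example, a minimal sketch of the intended lifecycle, reusing the builder calls from the example above (hostname is a placeholder):

// After shutdown() the old instance must not be reused.
client.shutdown();

// Any further use requires building a new, independent client.
final MemcacheClient<String> freshClient = MemcacheClientBuilder.newStringClient()
    .withAddress(hostname)
    .connectAscii();
ConnectFuture.connectFuture(freshClient).toCompletableFuture().get();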

To work with generic Serializable types

You can simply use the MemcacheClientBuilder.<T>newSerializableObjectClient() method to create a client that works for a specific Java type that implements Serializable.

public record Student(String name, int age) implements Serializable { }

public static void main(String[] args) throws Exception {
  MemcacheClient<Student> client =
      MemcacheClientBuilder.<Student>newSerializableObjectClient()
      .withAddress("localhost")
      .connectAscii();
  // make it wait until the client has connected to the server
  ConnectFuture.connectFuture(client).toCompletableFuture().get();

  client.set("s1", new Student("Elon", 28), 10000).toCompletableFuture().get();
  Student value = client.get("s1").toCompletableFuture().get();
}

Java 7 usage

If you are still on Java 7, you can depend on the older version:

<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>folsom</artifactId>
  <version>0.8.1</version>
</dependency>

Design goals

  • Robustness - If you request something, the future you get back should always complete at some point.
  • Error detection - If something goes wrong (the memcache server is behaving incorrectly or some internal bug occurs), we try to detect it and drop the connection to prevent further problems.
  • Simplicity - The code base is intended to be small and well abstracted. We prefer simple solutions that solve the major use cases and avoid implementing optimizations that would give small returns.
  • Fail-fast - If something happens (the memcached service is slow or gets disconnected) we try to fail as fast as possible. How to handle the error is up to you, and you probably want to know about the error as soon as possible.
  • Modularity - The complex client code is isolated in a single class, and all the extra functionality is in composable modules (ketama, reconnecting, retry, round-robin).
  • Efficiency - We want to support a high traffic throughput without using too much CPU or memory resources.
  • Asynchronous - We fully support the idea of writing asynchronous code instead of blocking threads, and this is achieved through Java 8 futures.
  • Low amount of synchronization - Code that uses a lot of synchronization primitives is more likely to have race condition bugs and deadlocks. We try to isolate that as much as possible to minimize the risk, and most of the code base doesn't have to care.

Best practices

Do not use withConnectionTimeoutMillis() or the deprecated withRequestTimeoutMillis() to set per-request timeouts. This timeout is intended to detect broken TCP connections so that the connection can be closed and recreated. Once that happens, all open requests are completed with a failure and Folsom tries to recreate the connection. If this timeout is set too low, it will cause connection flapping, which results in more failed requests and/or increased request latencies.

A better way of setting timeouts on individual requests (in Java 9+) is something like this:

CompletableFuture<T> future = client.get(...)
  .toCompletableFuture()
  .orTimeout(...)
  .whenCompleteAsync((v, e) -> {}, executor);

Note that in case of timeouts, the futures from orTimeout would all be completed on a singleton thread, which may cause contention. To avoid problems with that, we add whenCompleteAsync to ensure that the work is moved to an executor that has sufficient threads.
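A minimal sketch of such an executor, using the standard JDK java.util.concurrent classes; the pool size here is an arbitrary example and should be sized for your workload:

// Dedicated executor for completing timed-out futures (java.util.concurrent.Executors).
ExecutorService executor = Executors.newFixedThreadPool(8);

CompletableFuture<String> future = client.get("key")
    .toCompletableFuture()
    .orTimeout(50, TimeUnit.MILLISECONDS)               // per-request timeout (Java 9+)
    .whenCompleteAsync((value, error) -> {}, executor); // move completion work off the shared timeout thread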

Protocol

Folsom implements both the binary protocol and ascii protocol. They share a common interface but also extend it with their own specializations.

Which protocol to use depends on your use case. With a regular memcached backend, the ascii protocol is much more efficient. The binary protocol is a bit chattier but also makes error detection easier.

interface MemcacheClient<T> {}
interface AsciiMemcacheClient<T> extends MemcacheClient<T> {}
interface BinaryMemcacheClient<T> extends MemcacheClient<T> {}
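A sketch of how the two protocol-specific clients can be obtained; this assumes that connectAscii() and connectBinary() are typed to return the protocol-specific interfaces rather than the plain MemcacheClient:

// "memcached.example.com" is a placeholder host name.
AsciiMemcacheClient<String> asciiClient = MemcacheClientBuilder.newStringClient()
    .withAddress("memcached.example.com")
    .connectAscii();

BinaryMemcacheClient<String> binaryClient = MemcacheClientBuilder.newStringClient()
    .withAddress("memcached.example.com")
    .connectBinary();

// Both can be passed around through the shared MemcacheClient<String> interface.
MemcacheClient<String> shared = asciiClient;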

Changelog

See changelog.

Features

Ketama

Folsom supports Ketama for sharding across a set of memcache servers. Note that the hashing algorithm (currently) doesn't attempt to provide compatibility with other memcache clients, so when switching client implementations you will get a period of low cache hit ratio.
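A sketch of a sharded setup, assuming that calling withAddress() once per server registers multiple nodes on the Ketama ring (host names are placeholders):

MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress("memcached-a.example.com")  // keys are distributed across these nodes
    .withAddress("memcached-b.example.com")
    .withAddress("memcached-c.example.com")
    .connectAscii();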

Micrometer metrics

You can optionally choose to track performance using Micrometer metrics. You will need to include the folsom-micrometer-metrics dependency and initialize it using MemcacheClientBuilder (optionally adding additional tags):

builder.withMetrics(new MicrometerMetrics(metricsRegistry));
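For example, a minimal sketch using Micrometer's SimpleMeterRegistry; in a real application you would pass your existing registry (e.g. a Prometheus-backed one) instead:

// io.micrometer.core.instrument.simple.SimpleMeterRegistry from the Micrometer core library.
MeterRegistry metricsRegistry = new SimpleMeterRegistry();

MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress("localhost")
    .withMetrics(new MicrometerMetrics(metricsRegistry))
    .connectAscii();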

Yammer metrics

You can optionally choose to track performance using Yammer metrics. You will need to include the folsom-yammer-metrics dependency and initialize using MemcacheClientBuilder:

builder.withMetrics(new YammerMetrics(metricsRegistry));

OpenTelemetry metrics

You can optionally choose to track performance using OpenTelemetry metrics. You will need to include the folsom-opentelemetry-metrics dependency and initialize using MemcacheClientBuilder:

builder.withMetrics(new OpenTelemetryMetrics(metricsRegistry));

OpenCensus tracing

You can optionally use OpenCensus to trace Folsom operations. You will need to include the folsom-opencensus dependency and initialize tracing using MemcacheClientBuilder:

builder.withTracer(OpenCensus.tracer());

Cluster auto-discovery

Nodes in a memcache cluster can be auto-discovered. Folsom supports discovery through DNS SRV records using com.spotify.folsom.SrvResolver, or AWS ElastiCache using com.spotify.folsom.elasticache.ElastiCacheResolver.

SrvResolver:

builder.withResolver(SrvResolver.newBuilder("foo._tcp.example.org").build());

ElastiCacheResolver:

builder.withResolver(ElastiCacheResolver.newBuilder("cluster-configuration-endpoint-hostname").build());
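Putting a resolver together with the rest of the builder chain might look like this (a sketch, reusing the placeholder SRV record name from above):

MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withResolver(SrvResolver.newBuilder("foo._tcp.example.org").build())
    .connectAscii();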

Building

mvn package

Code of conduct

This project adheres to the Open Code of Conduct. By participating, you are expected to honor this code.

Authors

Folsom was initially built at Spotify by Kristofer Karlsson, Niklas Gustavsson and Daniel Norberg. Many thanks also go out to Noa Resare.


folsom's Issues

Binary client doesn't obey TTL in some cases

I have a wrapper for the cache layer that enforces a TTL on sets. Under some conditions we haven't pinned down (Folsom 1.2.1, Oracle JDK 1.8 in all cases, memcached 1.4.15 on Linux and memcached 1.5.13 on Mac), TTL doesn't work properly with the binary protocol, but the same test works exactly as expected if I change the connection line from connectBinary to connectAscii. With connectBinary it works well in some cases (an Ubuntu laptop with memcached 1.5.6) but not in others (a Mac with memcached 1.5.13). This is the equivalent pseudo-code:

	@Test
	public void testExpiredKey() throws Exception {
		int ttl = 5;
		CacheClient<CachedValue> client = instanceClient(CachedValue.class, ttl);
		String key = randomKey();
		CachedValue value = randomValue();

		client.put(key, value).join();
		await().atMost(ttl * 2, TimeUnit.SECONDS).untilAsserted(() -> assertNull(client.get(key).join()));

	}
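For reference, a roughly equivalent check written directly against the Folsom API from the README above; the reporter's CacheClient wrapper is not shown, and the key, value and TTL here are placeholders (assuming the TTL argument is in seconds):

MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress("localhost")
    .connectBinary();  // reported to behave as expected when switched to connectAscii()
ConnectFuture.connectFuture(client).toCompletableFuture().get();

client.set("some-key", "some-value", 5).toCompletableFuture().get();  // 5 second TTL
Thread.sleep(10_000);                                                 // wait well past the TTL
// Expected: null, since the key has expired.
// Reported with the binary protocol: the old value is sometimes still returned.
String value = client.get("some-key").toCompletableFuture().get();
client.shutdown();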

SrvKetamaClient doesn't shutdown the old client after DNS update

There's a bug in com.spotify.folsom.ketama.SrvKetamaClient#setPendingClient() which prevents the old client from being shutdown correctly.
The code keeps track of the oldPending, but it always gets a value that got nullified at the new client connect future callback (line 207).

So when a new list of target host:port pairs is detected, the client connects to the new targets correctly but doesn't disconnect the old clients. Also, when a target memcached is really gone for good, the client will keep trying to reconnect to it.

I think there's also an issue of a possible premature shutdown there if the code would have worked correctly. The lines 218-221 should be moved into the connect future callback IMO.

I can work on a patch for this tomorrow.

Code coverage reporting / coveralls broken since moving to Java 8

Looking at https://coveralls.io/github/spotify/folsom, the coverage graph shows all 0s since #84 was merged.

This is because cobertura fails to instrument classes at runtime with exceptions like this in the Travis build:

[cobertura] WARN  [main] net.sourceforge.cobertura.instrument.CoberturaInstrumenter - Unable to instrument file /home/travis/build/spotify/folsom/target/generated-classes/cobertura/com/spotify/folsom/BackoffFunction.class
java.lang.IllegalArgumentException
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)

which leads to the coveralls plugin seeing 0 lines of code in 0 files:

[INFO] Starting Coveralls job for travis-ci (322863796)
[INFO] Git commit f660fb8 in master
[INFO] Writing Coveralls data to /home/travis/build/spotify/folsom/target/coveralls.json...
[INFO] Processing coverage report from /home/travis/build/spotify/folsom/target/site/cobertura/coverage.xml
[INFO] Successfully wrote Coveralls data in 62ms
[INFO] Gathered code coverage metrics for 0 source files with 0 lines of code:
[INFO] - 0 relevant lines
[INFO] - 0 covered lines
[INFO] - 0 missed lines
[INFO] Submitting Coveralls data to API
[INFO] Successfully submitted Coveralls data in 861ms for Job #424.1
[INFO] https://coveralls.io/jobs/32392210

The root cause seems to be that cobertura does not support Java 8:

The underlying problem in cobertura has been fixed but there has not been a release in almost 3 years. The cobertura project seems to be dead.

Extending MemcacheClientBuilder to more easily allow other AbstractMultiMemcacheClient impl

Hi,
I am currently looking at using folsom. One thing I want to do is to implement my own sharding (to match an existing implementation). To do this, I was planning to implement a class extending AbstractMultiMemcacheClient, similar to KetamaMemcacheClient.

Initially I was planning to extend MemcacheClientBuilder, noticing that the MemcacheClientBuilder.connectRaw method is protected. However, on closer inspection I noticed that many fields one might want to use in that method (such as this.addresses) are not accessible. Would you have any objection to a patch that changes the current set of fields from private to protected?

Another possible alternative I was thinking about is addition of a method to the builder that allows one to pass in the code to create an AbstractMultiMemcacheClient given a list of addresses. e.g.

MemcacheClientBuilder<V> withRawClientCreator(Function<List<HostAndPort>, RawMemcacheClient> clientCreationFunction)

Then in connectRaw, use the function if it is not null, otherwise use the existing logic. Either something like this, or being able to extend and override connectRaw would work for me, but I'm wondering if there would be any interest in a solution like withRawClientCreator? Let me know what you think.

Thanks!

Feature request: per node telemetry

To better detect when one memcache node is misbehaving.
Metrics like outstanding requests and latencies (per memcache node) would be a good start.

Project status?

Hi there,

What is the status of this project? It looks great but it seems there's very low activity on it... Do you plan to keep developing it, or is it in "bugfix" mode now? Any roadmap?

Client lifetime, usage in Vert.x app

Hi,

I've few questions about the usage:

  1. Quoting from README:

    Clients are single use, after shutdown has been invoked the client can no longer be used.

    Can't we create a long-lived client, like Memcached suggests? I mean, is it OK if I create the client once and use it throughout the app lifetime? What does 'Clients are single use' mean?

  2. I want to use this client in a Vert.x based app, and as this project also uses Netty, I thought this is a perfect fit. Are you aware of any issues when using it with Vert.x?

  3. Can I assume the project will be maintained further?

  4. Is it possible to merge the pending PRs?

Thanks.

Close connections after DNS update

Hi,

Looking at both SrvKetamaClient and SrvKetamaClientTest, it looks like updateDNS only does additive changes. I mean, if the DNS returns a different list of hosts, the connections to the previous hosts will be maintained until the connections get severed by the hosts.

Is this right? If not, will it be possible to close the connections to the previous hosts when this occurs?

Reusable buffers as parameter for getters

Hi,

It would be great to be able to pass custom byte[] buffers as parameters for the GET and SET methods, using the byte[], offset and size as parameters.

Basic use case for SET:

  1. I already have a buffer that converts Java objects to Kryo-serialized objects using a byte[] buffer (instead of creating a new byte[] every time and destroying the GC).
  2. Then I use this buffer (with offsets) to compress it using Snappy (again with offsets, instead of generating a new byte[] every time and destroying the GC again).
  3. Then, it would be amazing to use the buffer from 1) here and flush it, using offsets, back to the memcached socket.

Basic use case for GET:

  1. I want to use my custom byte[] buffer to get the data for a memcached key.
  2. From that result (offset + len), I can then pass this same buffer to a Snappy.decompress method, using another cached buffer.
  3. From the Snappy-decompressed result, I can then use Kryo.unserialize with the buffer from 1).

Again, the goal is to not create any objects through the usage of memcached. This constant creation of byte[] kills the JVM GC, at least with G1 or ConcMarkSweep. We had to migrate to Shenandoah for faster "stop the world" pauses, but it would still be wonderful to not create a new byte[] for every single object inside memcached (we have millions).

I couldn't see how to do this with XMemcached or Spymemcached; any suggestion on how to achieve this with Folsom would be greatly appreciated. I can send a PR if someone sends me some pointers on where to touch.

thanks!
rafa

Add Micrometer metrics

I noticed the project has Yammer Metrics as an option. Would you be interested in a Micrometer metrics contribution, following the Yammer metrics pattern?

It is a metrics facade that allows many other backends (Prometheus in my case).

Folsom - Spring Boot DevTools class loader

Hi,
When using the Spring Boot DevTools library there are some problems with class loaders.

The problem is that there are two class loaders for the same class. When the class is loaded from memcache and mapped/cast to an object of that class, there is an exception like:
java.lang.ClassCastException: class xxx.api.cache.CachedResponse cannot be cast to class xxx.api.cache.CachedResponse (xxx.api.cache.CachedResponse is in unnamed module of loader 'app'; xxx.api.cache.CachedResponse is in unnamed module of loader org.springframework.boot.devtools.restart.classloader.RestartClassLoader @2f6f8549)

I have tried to exclude the CachedResponse class from restart, or somehow from the RestartClassLoader, but I did not manage to.
I have used the
"restart.exclude.classes=file:/app/build/classes/java/main/" property, but it excludes all classes on the classpath. I have tried to exclude the specific class "CachedResponse", but with no results.
Also, I have tried to include the folsom jar in the RestartClassLoader, but that did not work.

The question is:
Is it possible to implement functionality in folsom to use the RestartClassLoader?
Or is there a possibility to exclude the CachedResponse class from the RestartClassLoader?

Configurable number of connections to memcached server

Currently in folsom the number of connections is fixed. There should be a way to provide an initial number of connections and a maximum number of connections. The advantage of this feature is that folsom would initially make some connections and then make more connections at runtime as and when required.
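A sketch of today's behaviour, assuming the builder exposes a withConnections(int) setter for the fixed per-host connection count; the proposal is to complement this with initial/maximum settings that scale at runtime:

MemcacheClient<String> client = MemcacheClientBuilder.newStringClient()
    .withAddress("localhost")
    .withConnections(2)  // fixed number of connections, decided once up front
    .connectAscii();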

Last 2 versions were not released

1.18.0 and 1.19.0 were prepared for release but never actually released; 1.17.0 is the latest one on Maven Central. We'd like to use some features from the latest releases; what is the plan for pushing new versions to the repository?

IntegrationTest#testPartitionMultiget is flaky

Seems like jmemcached might be flaky. Most test runs complete successfully but when it fails, the root cause seems to be an IndexOutOfBoundsException in jmemcached.

14:37:24.635 [New I/O server worker #1-1] ERROR c.t.j.p.t.MemcachedResponseEncoder - error
java.lang.IndexOutOfBoundsException: null
    at org.jboss.netty.buffer.SlicedChannelBuffer.checkIndex(SlicedChannelBuffer.java:222) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.buffer.SlicedChannelBuffer.getByte(SlicedChannelBuffer.java:93) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.text.MemcachedCommandDecoder.eol(MemcachedCommandDecoder.java:53) ~[jmemcached-core-1.0.0.jar:na]
    at com.thimbleware.jmemcached.protocol.text.MemcachedCommandDecoder.decode(MemcachedCommandDecoder.java:68) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:282) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:214) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.3.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
14:37:24.638 [New I/O server worker #1-1] ERROR c.t.j.p.t.MemcachedResponseEncoder - error
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_40]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_40]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_40]
    at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_40]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_40]
    at org.jboss.netty.channel.socket.nio.SocketSendBufferPool$PooledSendBuffer.transferTo(SocketSendBufferPool.java:239) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:470) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:388) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:611) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:578) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.text.MemcachedResponseEncoder.messageReceived(MemcachedResponseEncoder.java:103) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.handleSet(MemcachedCommandHandler.java:290) ~[jmemcached-core-1.0.0.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.messageReceived(MemcachedCommandHandler.java:186) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:317) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:299) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:214) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.3.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
14:37:24.639 [New I/O server worker #1-1] ERROR c.t.j.p.t.MemcachedResponseEncoder - error
java.nio.channels.ClosedChannelException: null
    at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:637) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:370) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:611) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:578) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.text.MemcachedResponseEncoder.messageReceived(MemcachedResponseEncoder.java:103) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.handleSet(MemcachedCommandHandler.java:290) ~[jmemcached-core-1.0.0.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.messageReceived(MemcachedCommandHandler.java:186) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:317) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:299) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:333) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:226) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:360) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:587) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:513) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:388) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:611) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:578) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.text.MemcachedResponseEncoder.messageReceived(MemcachedResponseEncoder.java:103) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.handleSet(MemcachedCommandHandler.java:290) ~[jmemcached-core-1.0.0.jar:na]
    at com.thimbleware.jmemcached.protocol.MemcachedCommandHandler.messageReceived(MemcachedCommandHandler.java:186) ~[jmemcached-core-1.0.0.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:302) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:317) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:299) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:214) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) ~[netty-3.2.3.Final.jar:na]
    at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.3.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]

Maximum value size

For set operations, should we do something special to respect maximum value size? Or should we just send it anyway and rely on the failure we get back?

Add tracing

It would be awesome if folsom was wired up with OpenCensus tracing to allow for spans to be automatically created when memcached calls are being made.

Feature request: customizable node locators for ketama

It would be nice to be able to tweak the actual node location logic in the ketama setup (hash key to integer, hash machines + node number to virtual nodes in a ring, etc.) to make it easier to be compatible with other ketama implementations (such as spymemcached). This would make a lot of transitions easier.

Fix validation of max key size to work with InnoDB Memcached

Hi Folsom developers,

could you please consider changing validation of max key length in folsom from 250 to 249?

Everything works fine with Memcached where their validation looks like this:
https://github.com/memcached/memcached/blob/master/memcached.c

#define KEY_MAX_LENGTH 250

if(nkey > KEY_MAX_LENGTH) {
    out_string(c, "CLIENT_ERROR bad command line format");
    return;
}

But there is an issue with MySQL InnoDB Memcached, where they are off by one.
https://github.com/mysql/mysql-server/blob/5.7/plugin/innodb_memcached/innodb_memcache/src/innodb_engine.c#L1462

#define KEY_MAX_LENGTH	250

assert(*name_len < KEY_MAX_LENGTH);

They actually allow a max of 249.
Our MySQL with this Memcached "frontend" actually went down due to the high load and the rate at which it was throwing errors about too-long keys.

Release notes or CHANGELOG

First of all, thanks for maintaining this library.

I was wondering: there is no recent update to the changelog file, and there are no released versions in GitHub releases. Is there any place where I can see what's been changed, other than browsing the git history?

Binary client throws IllegalStateException on retries?

Example stack trace:

com.spotify.folsom.MemcacheClosedException: opaque may not be set more than one
 at com.spotify.folsom.client.DefaultRawMemcacheClient.send(DefaultRawMemcacheClient.java:194)
 at com.spotify.folsom.reconnect.ReconnectingClient.send(ReconnectingClient.java:94)
 at com.spotify.folsom.ketama.KetamaMemcacheClient.sendSplitRequest(KetamaMemcacheClient.java:98)
 at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:902)
 at com.google.common.util.concurrent.Futures$1$1.run(Futures.java:635)
 at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.google.common.util.concurrent.Futures$CombinedFuture.access$400(Futures.java:1608)
 at com.google.common.util.concurrent.Futures$CombinedFuture$2.run(Futures.java:1686)
 at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
 at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
 at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
 at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
 at com.spotify.folsom.client.DefaultRawMemcacheClient.send(DefaultRawMemcacheClient.java:194)
 at com.spotify.folsom.reconnect.ReconnectingClient.send(ReconnectingClient.java:94)
 at com.spotify.folsom.ketama.KetamaMemcacheClient.sendSplitRequest(KetamaMemcacheClient.java:98)
 at com.spotify.folsom.ketama.KetamaMemcacheClient.send(KetamaMemcacheClient.java:69)
 at com.spotify.folsom.retry.RetryingClient$1.create(RetryingClient.java:51)
 at com.google.common.util.concurrent.Futures$FallbackFuture$1.onFailure(Futures.java:471)
 at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:902)
 at com.google.common.util.concurrent.Futures$1$1.run(Futures.java:635)
 at com.spotify.folsom.client.Utils$1.execute(Utils.java:44)
 at com.google.common.util.concurrent.Futures$1.run(Futures.java:632)
 at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.google.common.util.concurrent.Futures$CombinedFuture.setExceptionAndMaybeLog(Futures.java:1704)
 at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
 at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.spotify.folsom.client.CallbackSettableFuture.run(CallbackSettableFuture.java:35)
 at com.spotify.folsom.reconnect.ReconnectingClient.send(ReconnectingClient.java:94)
 at com.google.common.util.concurrent.Futures$FallbackFuture$1.onFailure(Futures.java:471)
 at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at com.google.common.util.concurrent.Futures$1$1.run(Futures.java:635)
 at com.google.common.util.concurrent.Futures$1.run(Futures.java:632)
 at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
 at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
 at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
 at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
 at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)

Client seems to be leaking connections after timeouts calling a non responsive memcached server

We had a temporarily faulty production memcached server which caused a few hundred request timeouts. When this happened the client did reconnect, but the old connections seemed to remain open and never got closed.

I have managed to recreate this behaviour using a stand-alone test.

I start 2 nodes like so:

memcached -m 64 -p 11213 -u memcache -l 127.0.0.1
memcached -m 64 -p 11214 -u memcache -l 127.0.0.1

I register them in consul so the DnsSrvResolver will pick them up. I omitted this code, cause it doesn't matter to this test.
I then execute the following main class:

public class ReconnetTest {

  private static final Logger log = LoggerFactory.getLogger(ReconnetTest.class);
  private static final String PAYLOAD = "012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789";

  public static void main(String[] args) throws ExecutionException, InterruptedException {
    final MemcacheClient<Serializable> client = MemcacheClientBuilder.newSerializableObjectClient()
      .withSRVRecord("memcached")
      .withSrvResolver(new HealthyMemcachedResolver())
      .withSRVRefreshPeriod(1000)
      .withRequestTimeoutMillis(100)
      .withSRVShutdownDelay(1000)
      .withBackoff(reconnectAttempt -> 2000)
      .connectBinary();
    ConnectFuture.connectFuture(client).get();

    final String key = "kkk";

    for (int i = 0; i < 100000000; i++) {
      try {
        client.set(key, i + "-" + PAYLOAD, 100000).get();
        client.get(key).get();
      } catch (final Exception e) {
        log.error(i + " " + e.getMessage());
      }
      Thread.sleep(1);
    }
  }

  private static class HealthyMemcachedResolver implements DnsSrvResolver {

    public static final ArrayList<LookupResult> NODES = Lists.newArrayList(LookupResult.create("localhost", 11213, 1, 1, 1000), LookupResult.create("localhost", 11214, 1, 1, 1000));

    @Override
    public List<LookupResult> resolve(String fqdn) {
      return NODES;
    }
  }
}

Then I move the server to the background for a few sec (ctrl+z), and return it to foreground.
I verify that the connections are still open:

~ $ netstat | grep 11214 | grep ESTABLISHED | wc -l
66

I think I have a fix for this - adding an IdleStateHandler to the pipeline and forcing close() when the channel is idle. I will submit a PR if you verify this, and once I have a better test that can be executed in CI.
When I add my code, the idle connections get closed after the idle timeout.

Hopefully I'm not wasting anyone's time with a bogus configuration again ;)

We're using the 0.7.2 client.

If the memcache server stops responding to requests, the outstanding requests in the client are never cleared

The other day we had a memcache server (single host) suddenly power off with a bunch of clients connected using Folsom.

After bringing that host back online hours later, I noticed that all of the existing clients that were connected before the power-off were unable to successfully send new requests to the server that was now online - all of the clients that were connected to the server previously were getting MemcacheOverloadedException. Restarting the client process would resolve the issue.

It seems like when the server simply goes away or stops responding (as opposed to sending connection resets), then a bunch of outstanding requests get stuck in the "outstanding" queue, long after they should have timed out.

Here is a test case which I think recreates the issue.

I am not totally sure but I think the issue is that DefaultRawMemcacheClient's ConnectionHandler sets up a TimeoutChecker and disables the channel/connection when the head of the queue times out, and tries to empty out the outstanding queue in channelInactive(..), but the pendingCounter is not decremented in the same place.

file descriptor leak

Version: 1.1.1, 1.2.0
We are noticing that AsciiMemcacheClient/ReconnectingClient is leaking file descriptors when a memcached server is restarted (or stopped).
Can be reproduced with the following client config:

MemcacheClientBuilder
          .newStringClient()
          .withAddress("localhost")
          .connectAscii();

When memcached is restarted, the ReconnectingClient parent class will usually make about 5 or 6 attempts to reconnect to the server. In our case, we're seeing the fd count for the process grow by 24 per reconnect attempt (or fail?). If you stop memcached completely, this will continue to grow.

Yammer metrics to Dropwizard?

This lib has a dependency on Yammer metrics, which, as far as I know, are now Dropwizard metrics. Any chance this might get migrated?

Question - is it possible that the returned Future from a write is completed successfully before the actual write happens?

I have a cache abstraction layer in my project. This abstraction layer was implemented with xmemcached and now I'm porting it to use folsom. As part of it, I have tests that check the functionality:

    @Test
    public void setAsyncExpiration() throws Exception {
        Person p = aPerson();
        assertTrue(cache().setAsync(regionKey1, p, 2, TimeUnit.SECONDS).get());
        Thread.sleep(2100);
        assertNull(cache().get(regionKey1, Person.class));
    }

cache() returns the instance of my abstracted client, which in case of folsom implements the setAsync operation as:

    @Override
    protected CompletableFuture<Boolean> setAsync(String key, Object value, long expiration, TimeUnit timeUnit) {
        return toCompletableFuture(client.set(key, value, (int) timeUnit.toSeconds(expiration)))
                .thenApply(MemcacheStatus.OK::equals).exceptionally(x -> false);
    }

This test, however, randomly fails with Folsom: the assertNull fails saying that an instance of the object was returned instead of null (note that expiration is 2 seconds and, after the get() over the returned future, a sleep of 2.1 seconds happens).

ReconnectingClient prints "Lost connection" for old nodes when another one is added - while requests go through just fine

I assume this is just a logging issue rather than a bug affecting client behavior.
I see this regularly -

Say I boot up 2 Memcached nodes: A and B. They are used by several app nodes (3 in my latest fault+perf test).
The total traffic sent to Memcached from these app nodes is 3000 RPS.
Then I boot up one more Memcached node (C) and bring it into the load balancer.
We use a 3 second DNS refresh period in the Memcached client settings, so the client recognizes changes in the Memcached node list pretty quickly.

I see "Successfully connected" and "client connected: BinaryMemcacheClient(com.spotify.folsom.ketama.SrvKetamaClient" in logs, so that is all good.

but -
even though Memcached client keeps working fine (I can see requests in memcached tracking dashboards, no timeouts in the app, all good), I see these worrying messages in logs about 60 seconds after the new node is picked by the client. these messages refer to original Memcached nodes A and B, but not C.

INFO [2019-02-12 14:33:10,641] [folsom-default-scheduled-executor] [com.spotify.folsom.reconnect.ReconnectingClient] [] - Lost connection to A:11211
INFO [2019-02-12 14:33:10,641] [folsom-default-scheduled-executor] [com.spotify.folsom.reconnect.ReconnectingClient] [] - Lost connection to A:11211
INFO [2019-02-12 14:33:10,643] [folsom-default-scheduled-executor] [com.spotify.folsom.reconnect.ReconnectingClient] [] - Lost connection to B:11211
INFO [2019-02-12 14:33:10,643] [folsom-default-scheduled-executor] [com.spotify.folsom.reconnect.ReconnectingClient] [] - Lost connection to B:11211

we use 2 connections per host, this is why you see two "lost connection" lines per node.

in other words, ReconnectingClient prints "Lost connection" for old nodes after its nodes list was refreshed and another node was picked up.

I am guessing there is no problem with the current client connections and those messages are just some delayed cleanup messages from old connections. if that is the case, it would be nice to change the logging to be more explicit. "lost connection" looks scary

let's add docs how to run build and tests locally? (Docker container)

The build fails on my computer due to some configuration required for the Docker container:

Previous attempts to find a Docker environment failed. Will not retry. Please see logs and check configuration

java.lang.IllegalStateException: Previous attempts to find a Docker environment failed. Will not retry. Please see logs and check configuration

If the build requires some initial setup on a local computer or a Jenkins node, let's add a short snippet about this to the docs.

Client failure after responding to too many requests where value is too large

I ran into an issue in production where we started getting MemcacheOverloadedExceptions and the client was never able to reconnect. In our use case we are perhaps abusing the client with 1 to 2 requests per minute to cache values that are too large and after a few days the client crashes. I think I've tracked the problem to the code here. When responding for a VALUE_TO_LARGE the pendingCounter is not decremented. Looking at the outstandingRequests metric, I can see this counter get incremented for every request to cache a too large value. Once the pending requests counter reaches the specified outstanding request limit (default 1000) the client fails and the application must be restarted.

The fix looks straight forward and I'll be happy to submit a pull request if you'd like.

Feature request: optimize cancelled requests

If a request is cancelled there are two things we can optimize:

  1. if it's cancelled before it's actually written to the network socket, we can skip doing that.
  2. if it's cancelled by the time we get a reply, we can skip parsing the reply (though that's a fairly small gain)
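From the caller's side, cancellation happens on the standard CompletableFuture; a sketch of what the two optimizations would apply to:

CompletableFuture<String> pending = client.get("key").toCompletableFuture();
// ... the caller decides it no longer needs the result ...
pending.cancel(true);
// Optimization 1: skip writing the request if it has not reached the socket yet.
// Optimization 2: skip parsing the reply if the future is already cancelled when it arrives.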

DefaultRawMemcacheClient logs error but does not throw

Whenever there is a timeout, an error is logged but no exception is thrown. This means that the caller can't catch anything and do something else. The use case for us is to catch a timeout exception and throw a custom timeout exception our framework provides, which ticks metrics, logs, et cetera. This is what we do for all the clients we use (Cassandra, RPC, et cetera).

If you're OK with this change, we can submit a patch for this behavior.

https://github.com/spotify/folsom/blob/master/src/main/java/com/spotify/folsom/client/DefaultRawMemcacheClient.java#L285

Exceptions with java 11

The code executes and returns the value successfully, but it throws several exceptions!

Here is the log.

/home/scala/.sdkman/candidates/java/11.0.8.j9-adpt/bin/java -javaagent:/snap/intellij-idea-community/246/lib/idea_rt.jar=38149:/snap/intellij-idea-community/246/bin -Dfile.encoding=UTF-8 -classpath /home/scala/Documentos/memcached-falson/target/classes:/home/scala/.m2/repository/com/spotify/folsom/1.7.3/folsom-1.7.3.jar:/home/scala/.m2/repository/com/spotify/completable-futures/0.3.2/completable-futures-0.3.2.jar:/home/scala/.m2/repository/com/google/guava/guava/28.0-android/guava-28.0-android.jar:/home/scala/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/scala/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/scala/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/scala/.m2/repository/org/checkerframework/checker-compat-qual/2.5.5/checker-compat-qual-2.5.5.jar:/home/scala/.m2/repository/com/google/errorprone/error_prone_annotations/2.3.2/error_prone_annotations-2.3.2.jar:/home/scala/.m2/repository/com/google/j2objc/j2objc-annotations/1.3/j2objc-annotations-1.3.jar:/home/scala/.m2/repository/org/codehaus/mojo/animal-sniffer-annotations/1.17/animal-sniffer-annotations-1.17.jar:/home/scala/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/scala/.m2/repository/io/netty/netty-transport/4.1.34.Final/netty-transport-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-common/4.1.34.Final/netty-common-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-buffer/4.1.34.Final/netty-buffer-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-resolver/4.1.34.Final/netty-resolver-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-codec/4.1.34.Final/netty-codec-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-transport-native-epoll/4.1.34.Final/netty-transport-native-epoll-4.1.34.Final.jar:/home/scala/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.34.Final/netty-transport-native-unix-common-4.1.34.Final.jar:/home/scala/.m2/repository/com/spotify/dns/3.1.4/dns-3.1.4.jar:/home/scala/.m2/repository/dnsjava/dnsjava/2.1.7/dnsjava-2.1.7.jar:/home/scala/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/scala/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/scala/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar org.example.App
12:51:55.362 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
12:51:56.080 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
12:51:56.080 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap - -Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
12:51:56.462 [main] DEBUG io.netty.util.internal.PlatformDependent0 - -Dio.netty.noUnsafe: false
12:51:56.464 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java version: 11
12:51:56.534 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
12:51:56.594 [main] DEBUG io.netty.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
12:51:56.598 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
12:51:56.617 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31)
at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:224)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.PlatformDependent0.(PlatformDependent0.java:218)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:213)
at io.netty.util.internal.PlatformDependent.(PlatformDependent.java:81)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:195)
at io.netty.channel.epoll.Native.(Native.java:61)
at io.netty.channel.epoll.Epoll.(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
12:51:56.624 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: available, true
12:51:56.626 [main] DEBUG io.netty.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable
java.lang.IllegalAccessException: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @35f3676b
at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
at java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591)
at java.base/java.lang.reflect.Method.invoke(Method.java:558)
at io.netty.util.internal.PlatformDependent0$6.run(PlatformDependent0.java:334)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:325)
at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:213)
at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:81)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:195)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
12:51:56.629 [main] DEBUG io.netty.util.internal.PlatformDependent0 - java.nio.DirectByteBuffer.<init>(long, int): unavailable
12:51:56.630 [main] DEBUG io.netty.util.internal.PlatformDependent - sun.misc.Unsafe: available
12:51:56.826 [main] DEBUG io.netty.util.internal.PlatformDependent - maxDirectMemory: 1026686976 bytes (maybe)
12:51:56.826 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
12:51:56.826 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
12:51:56.830 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.maxDirectMemory: -1 bytes
12:51:56.831 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.uninitializedArrayAllocationThreshold: -1
12:51:56.834 [main] DEBUG io.netty.util.internal.CleanerJava9 - java.nio.ByteBuffer.cleaner(): available
12:51:56.845 [main] DEBUG io.netty.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
12:51:56.900 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.workdir: /tmp (io.netty.tmpdir)
12:51:56.901 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.deleteLibAfterLoading: true
12:51:56.901 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - -Dio.netty.native.tryPatchShadedId: true
12:51:56.927 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Unable to load the library 'netty_transport_native_epoll_x86_64', trying other loading mechanism.
java.lang.UnsatisfiedLinkError: netty_transport_native_epoll_x86_64 (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:369)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:361)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:339)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:198)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
12:51:56.931 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - netty_transport_native_epoll_x86_64 cannot be loaded from java.libary.path, now trying export to -Dio.netty.native.workdir: /tmp
java.lang.UnsatisfiedLinkError: netty_transport_native_epoll_x86_64 (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:349)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:198)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
Suppressed: java.lang.UnsatisfiedLinkError: netty_transport_native_epoll_x86_64 (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:369)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:361)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:339)
... 23 common frames omitted
12:51:56.947 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - Unable to load the library 'netty_transport_native_epoll', trying other loading mechanism.
java.lang.UnsatisfiedLinkError: netty_transport_native_epoll (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:369)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:361)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:339)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:201)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
12:51:56.958 [main] DEBUG io.netty.util.internal.NativeLibraryLoader - netty_transport_native_epoll cannot be loaded from java.libary.path, now trying export to -Dio.netty.native.workdir: /tmp
java.lang.UnsatisfiedLinkError: netty_transport_native_epoll (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:349)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:136)
at io.netty.channel.epoll.Native.loadNativeLibrary(Native.java:201)
at io.netty.channel.epoll.Native.<clinit>(Native.java:61)
at io.netty.channel.epoll.Epoll.<clinit>(Epoll.java:38)
at com.spotify.folsom.client.DefaultRawMemcacheClient.lambda$static$0(DefaultRawMemcacheClient.java:96)
at com.spotify.folsom.client.DefaultRawMemcacheClient$$Lambda$29/0000000000000000.get(Unknown Source)
at com.google.common.base.Suppliers$NonSerializableMemoizingSupplier.get(Suppliers.java:167)
at com.spotify.folsom.client.DefaultRawMemcacheClient.connect(DefaultRawMemcacheClient.java:138)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$0(ReconnectingClient.java:80)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$27/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.authenticate.AuthenticatingClient.authenticate(AuthenticatingClient.java:28)
at com.spotify.folsom.reconnect.ReconnectingClient.lambda$new$1(ReconnectingClient.java:105)
at com.spotify.folsom.reconnect.ReconnectingClient$$Lambda$28/0000000000000000.connect(Unknown Source)
at com.spotify.folsom.reconnect.ReconnectingClient.retry(ReconnectingClient.java:157)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:120)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:102)
at com.spotify.folsom.reconnect.ReconnectingClient.<init>(ReconnectingClient.java:76)
at com.spotify.folsom.MemcacheClientBuilder.createReconnectingClient(MemcacheClientBuilder.java:651)
at com.spotify.folsom.MemcacheClientBuilder.createClient(MemcacheClientBuilder.java:632)
at com.spotify.folsom.MemcacheClientBuilder.createClients(MemcacheClientBuilder.java:608)
at com.spotify.folsom.MemcacheClientBuilder.connectRaw(MemcacheClientBuilder.java:581)
at com.spotify.folsom.MemcacheClientBuilder.connectAscii(MemcacheClientBuilder.java:558)
at org.example.App.main(App.java:21)
Suppressed: java.lang.UnsatisfiedLinkError: netty_transport_native_epoll (Not found in java.library.path)
at java.base/java.lang.ClassLoader.loadLibraryWithPath(ClassLoader.java:1742)
at java.base/java.lang.ClassLoader.loadLibraryWithClassLoader(ClassLoader.java:1694)
at java.base/java.lang.System.loadLibrary(System.java:598)
at io.netty.util.internal.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:38)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.netty.util.internal.NativeLibraryLoader$1.run(NativeLibraryLoader.java:369)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:678)
at io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(NativeLibraryLoader.java:361)
at io.netty.util.internal.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:339)
... 23 common frames omitted
12:51:57.175 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 4
12:51:57.486 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
12:51:57.486 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
12:51:57.557 [main] DEBUG io.netty.util.internal.PlatformDependent - org.jctools-core.MpscChunkedArrayQueue: available
12:51:58.167 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.processId: 9804 (auto-detected)
12:51:58.176 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv4Stack: false
12:51:58.176 [main] DEBUG io.netty.util.NetUtil - -Djava.net.preferIPv6Addresses: false
12:51:58.257 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
12:51:58.260 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 4096
12:51:58.285 [main] DEBUG io.netty.channel.DefaultChannelId - -Dio.netty.machineId: 00:1d:4f:ff:fe:fa:4d:d8 (auto-detected)
12:51:58.599 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
12:51:58.600 [main] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.targetRecords: 4
12:51:59.379 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numHeapArenas: 4
12:51:59.379 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.numDirectArenas: 4
12:51:59.380 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.pageSize: 8192
12:51:59.380 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxOrder: 11
12:51:59.380 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.chunkSize: 16777216
12:51:59.380 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.tinyCacheSize: 512
12:51:59.381 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.smallCacheSize: 256
12:51:59.381 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.normalCacheSize: 64
12:51:59.381 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedBufferCapacity: 32768
12:51:59.382 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.cacheTrimInterval: 8192
12:51:59.382 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.useCacheForAllThreads: true
12:51:59.382 [main] DEBUG io.netty.buffer.PooledByteBufAllocator - -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
12:51:59.462 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: pooled
12:51:59.462 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 0
12:51:59.463 [main] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
12:52:00.961 [defaultRawMemcacheClient-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
12:52:00.964 [defaultRawMemcacheClient-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
12:52:00.964 [defaultRawMemcacheClient-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
12:52:00.965 [defaultRawMemcacheClient-1-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
12:52:00.987 [defaultRawMemcacheClient-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
12:52:00.988 [defaultRawMemcacheClient-1-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
12:52:00.991 [defaultRawMemcacheClient-1-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@b8703ac
12:52:01.328 [ForkJoinPool-1-worker-3] INFO com.spotify.folsom.reconnect.ReconnectingClient - Successfully connected to 127.0.0.1:11211
value
12:52:02.089 [main] INFO com.spotify.folsom.reconnect.ReconnectingClient - Lost connection to 127.0.0.1:11211

Process finished with exit code 0
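
The UnsatisfiedLinkError stack traces above only mean that Netty's native epoll transport is not on the classpath; as the rest of the log shows, the client falls back to the NIO transport and still connects successfully, so they are noise rather than a failure. If you want the native transport on Linux (and to silence these messages), adding Netty's epoll artifact with the platform classifier should be enough. A minimal sketch, where `${netty.version}` is a placeholder that you would align with the Netty 4 version folsom pulls in transitively:

```xml
<!-- Optional: Netty's native epoll transport for Linux.
     ${netty.version} is a placeholder - match it to the Netty version
     resolved by folsom in your build. -->
<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-transport-native-epoll</artifactId>
  <version>${netty.version}</version>
  <classifier>linux-x86_64</classifier>
</dependency>
```

Without this dependency folsom simply runs on NIO, so adding it is a performance/cleanliness choice, not a requirement.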
