
twemproxy's People

Contributors

andyqzb, areina, arnecls, atdt, ayutthaya, caniszczyk, charsyam, dentarg, eleusive, esindril, guilhem, idning, lizhe-bytedance, manjuraj, matschaffer, mckelvin, mkadin, mortonfox, nikai3d, oldmantaiter, paravoid, pataquets, pavanky, raghukul01, remotezygote, rhoml, rohitpaulk, tan-lawrence, tom-dalton-fanduel, tysonandre


twemproxy's Issues

redis branch: Support to use only part of the key for calculating hash - "{hash tags}" or "{key tags}"

After finding the redis branch in twemproxy I'd like to propose a feature:

Support to use only part of the key for calculating hash.

Same solution is also suggested by Redis author in:
http://antirez.com/post/redis-presharding.html (search for Hash tags)

Reason:
We would like to keep all data for one user on the same redis server. Therefore we use {} to mark the part of the key that should be used for consistent hashing. If {} is not present, the key is used as is (the full key).

Some examples of keys:
//when hash tags found
userdata:{user1}:firstseen (string) ->crc32("user1")->server1
userdata:{user1}:dayssen (set) ->crc32("user1")->server1
userdata:{user2}:firstseen (string) ->crc32("user2")->server2
userdata:{user2}:dayssen (set) ->crc32("user2")->server2
//when hash tags not found
userdata:user1:dayssen (set) ->crc32("userdata:user1:dayssen")->server1

We can't mix all above data using redis hashes since we mix different redis data types.

So while the key "foo" will be hashed as SHA1("foo"), the key "bar{zap}" will be hashed just as SHA1("zap").

Some implementations:
In Redis Java library
https://github.com/xetorthio/jedis/wiki/AdvancedUsage (Force certain keys to go to the same shard - keytags)
in redis sharding proxy:
https://github.com/kni/redis-sharding (Sharding is done based on the CRC32 checksum of a key or key tag ("key{key_tag}").)
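For illustration, here is a minimal sketch of the proposed behavior in Python (not twemproxy code): hash only the substring between "{" and "}" when both braces are present, otherwise hash the full key. The crc32-based sharding and the server count are only assumptions for the example.

import zlib

def shard_for(key, num_servers):
    # Hash only the {tag} portion of the key when present, else the full key.
    start = key.find("{")
    end = key.find("}", start + 1)
    if start != -1 and end != -1 and end > start + 1:
        hash_input = key[start + 1:end]   # e.g. "user1" from "userdata:{user1}:firstseen"
    else:
        hash_input = key                  # no hash tag: use the full key
    return zlib.crc32(hash_input.encode()) % num_servers

# Keys for the same user land on the same server regardless of the suffix:
assert shard_for("userdata:{user1}:firstseen", 2) == shard_for("userdata:{user1}:daysseen", 2)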

twemproxy does not startup if it cannot resolve hostnames

twemproxy does not start up if it cannot resolve hostnames. This might occur if you use names instead of IP addresses in your config file and twemproxy fails to resolve these names.

The fix for this is to resolve hostnames lazily rather than resolving them on start-up. If a name doesn't resolve at run-time then it is not included in the hash ring (cluster).

why 4 keys hash to the same server

My nutcracker.yml file is:

leaf:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11212:1
   - 127.0.0.1:11213:1

Then I use the command: telnet 127.0.0.1 22121
and set 4 keys a, b, c, d, like: set a 0 0 3
Then
telnet 127.0.0.1 11212
and I found all 4 keys stored on this server.
Why?

How to rehash and divide an existing redis into several redis instances?

We are running a real-time log collection and simple analysis service with one redis instance. It has been working fine for about half a year and memory consumption has grown to about 8 GB; our server is not strong enough to serve such a big guy. Now we want to divide/cluster the service across more redis instances, and here comes the problem. I was really excited to find the cool twemproxy, but according to the docs I found nothing about rehashing an existing redis instance like "redis-sharding-hs". What should I do in this situation?

Is there any parameter for custom monitor IP

Hello Manju Rajashekhar,

I have found that the monitor stats port listens on all IPs on my server, e.g.:
:::22222

but right now I just need it to listen on 127.0.0.1:22222. I'm not clear whether there is any
parameter for a custom monitor IP, or whether there is a way to customize the source code for
this setting?

BTW

GET fails when key length + data length is close to a multiple of 16k bytes

Hi,

I am testing twemproxy before deploying it to production. The test application is a simple Perl script that executes an infinite single-threaded loop that calls set() with a random key and value, then get() with the same key, and compares the sent and received values. Keys and values are [a-z] strings, keys are 1..100 bytes long, values are 1..50000 bytes long.

Occasionally, once in about 10000 requests, get() returns undef.

After some poking around, I found out that it only happens when the sum of the key length and data length is close to a multiple of 16k bytes and equals certain specific values. For example, with a key length of 10 bytes the following data lengths result in an error on get():

16306, 16307, 32642, 32643, 48978, 48979, 65314, 65315

I was able to reproduce this bug using the following minimal Perl script:

#!/usr/bin/perl

use warnings;
use strict;

use Cache::Memcached;

my $memd = new Cache::Memcached { 'servers' => [ '127.0.0.1:11211' ] };

my $key = 'a' x 10;
for my $dlen (16306, 16307, 32642, 32643, 48978, 48979, 65314, 65315) {
        my $data = 'b' x $dlen;
        $memd->set($key, $data) || die $!;
        my $got = $memd->get($key);
        warn "dlen $dlen: UNDEF on get()" unless defined $got;
}

(also saved to http://pastebin.com/mijxadpX just in case)

Here's a sample of the respective log lines:

[Mon Nov  5 00:10:00 2012] nc_memcache.c:1215 parsed bad rsp 2492671 res 1 type 19 state 13
00000000  45 4e 44 0d 0a                                     |END..|
[Mon Nov  5 00:10:00 2012] nc_core.c:168 recv on s 12 failed: Invalid argument
[Mon Nov  5 00:10:00 2012] nc_core.c:207 close s 12 '10.x.x.x:11211' on event 0001 eof 0 done 0 rb 65356 sb 65360: Invalid argument

Environment details:

OS: Debian squeeze/wheezy/sid (mixed), amd64.
Memcached version: 1.4.13.
Cache::Memcached version: 1.29
Twemproxy version: current git master.
Twemproxy config:

pool:
  listen: 127.0.0.1:11211
  hash: fnv1a_64
  distribution: ketama
  timeout: 10000
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  servers:
    - (four servers total)

Is twemproxy like memagent?

I have tested twemproxy; it doesn't behave like memagent, while memagent can be used for a cluster.

I want a proxy for a memcached cluster, but I found memagent has some issues.

Messy result when setting some utf8 strings

I met this problem when trying to set some utf8 strings as follows:

$twem = new Memcached();
$twem->addServer('127.0.0.1', 22122);

$json = '{"Content":"2011\u5e749\u6708\uff0c\u7b2c\u4e00\u6b21\u89c1\u4f60\uff0c\u4fbf\u89c9\u5f97\u4ece\u672a\u6709\u4eba\u50cf\u4f60\u8fd9\u822c\u628a\u767d\u886c\u886b\u7a7f\u7684\u5982\u6b64\u597d\u770b\u3002\u6211\u4e00\u76f4\u5728\u4f60\u7684\u8eab\u8fb9\u5145\u5f53\u7740\u53ef\u6709\u53ef\u65e0\u7684\u89d2\u8272\uff0c\u53ef\u662f\u6211\u5374\u4e0d\u66fe\u540e\u6094\uff0c\u4e5f\u8bb8\u6709\u4e00\u5929\u6211\u4eec\u53ef\u4ee5\u8d70\u5230\u7ec8\u70b9\u3002\u5373\u4f7f\u4f60\u5fc3\u91cc\u8fd8\u6709\u4e00\u4e2a\u5979\uff0c\u5373\u4f7f\u6211\u5e38\u5e38\u88ab\u4f60\u5ffd\u89c6\u3002\u4e00\u5468\u5e74\u7eaa\u5ff5\u65e5\u65f6\uff0c\u4f60\u8bf4\u5bf9\u4e0d\u8d77\uff0c\u6211\u5f88\u597d\uff0c\u4f46\u6211\u4eec\u4e0d\u9002\u5408\uff0c\u6211\u4f1a\u627e\u5230\u6bd4\u4f60\u66f4\u597d\u7684\u4eba\u3002\u300a\u6211\u6ca1\u6709\u5f88\u60f3\u4f60\u300bJPM\r\n\u4f60\u8bf4\u4e0d\u7231\u7684\u65f6\u5019\uff0c\u6211\u54ed\u4e86\u4e00\u5929\u4e00\u591c\uff0c\u6211\u4e0d\u61c2\u4f60\u4e3a\u4f55\u90a3\u4e48\u8f7b\u6613\u5c31\u653e\u5f03\u4e86\u6211\u4eec\u7684\u7231\u60c5\u3002\u5728\u4f60\u53bb\u76f8\u4eb2\u7684\u90a3\u4e00\u5929\uff0c\u6211\u5e26\u7740\u7b80\u5355\u7684\u884c\u674e\uff0c\u51fa\u53bb\u6d41\u6d6a\u3002\u4e00\u4e2a\u4eba\u8d70\u5728\u5f02\u4e61\u7684\u8857\u9053\uff0c\u4e00\u4e2a\u4eba\u5728\u5f02\u4e61\u7684\u516c\u4ea4\u8f66\u4e0a\u9ed8\u9ed8\u7684\u6d41\u7740\u6cea\u3002\u300a\u6cea\u4e0d\u505c\u300b\u6e29\u5c9a\r\n\u4f60\u4e0d\u66fe\u77e5\u9053\u6211\u54ed\u7740\u5199\u5b8c\u4e00\u5c01\u51e0\u5343\u5b57\u7684\u4fe1\uff0c\u54c0\u6c42\u4f60\u7684\u7236\u6bcd\u63a5\u53d7\u6211\u4eec\u7684\u611f\u60c5\u3002\u53ef\u662f\u4f60\u518d\u4e5f\u4e0d\u662f\u6211\u7684\u4e86\u3002\u90a3\u4e2a\u4e3a\u4f60\u8bbe\u7684\u4e13\u5c5e\u94c3\u58f0\uff0c\u518d\u4e5f\u4e0d\u4f1a\u54cd\u8d77\uff0c\u5728\u540c\u4e00\u4e2a\u7ad9\u53f0\u4e0b\u8f66\uff0c\u5374\u51b7\u6f20\u7684\u50cf\u4e2a\u964c\u751f\u4eba\u3002\u800c\u6211\u4eec\u7684\u7231\uff0c\u53ea\u5230\u4e00\u534a\u5c31\u621b\u7136\u800c\u6b62\u3002 \u300a\u53ea\u7231\u5230\u4e00\u534a\u300b\u9b4f\u6668\r\n\u70b9\u6b4c\u73af\u8282\uff1a\r\n WilliamZhang\uff1a\u60f3\u70b9\u4e00\u9996\u300a\u518d\u8bf4\u4e00\u6b21\u6211\u7231\u4f60\u300b \u66fe\u7ecf\u6709\u90a3\u4e48\u7684\u4e00\u4e2a\u4eba\u503c\u5f97\u6211\u53bb\u73cd\u60dc\uff0c\u53ef\u60dc\u5929\u610f\u5f04\u4eba\u3002\u90a3\u662f\u6211\u7684\u521d\u604b\uff0c\u4e00\u76f4\u90fd\u5728\u5fc3\u91cc\u4ece\u672a\u6539\u53d8\uff0c\u5982\u4eca\uff0c\u5979\u5df2\u6709\u4e24\u6b21\u7231\u604b\uff0c\u6211\u4e5f\u6e10\u6e10\u5730\u4ece\u5979\u7684\u4e16\u754c\u91cc\u79bb\u53bb\uff0c\u627e\u4e86\u4e00\u4e2a\u6027\u683c\uff0c\u6837\u8c8c\u50cf\u5979\u7684\u4e00\u4e2a\u4eba\uff0c\u53ea\u60f3\u518d\u8bf4\u4e00\u6b21\u201c\u6211\u7231\u4f60\u201d\u3002\u300a\u518d\u8bf4\u4e00\u6b21\u6211\u7231\u4f60\u300b\u5218\u5fb7\u534e\r\n\u3010\u5fc3\u613f\u5899\u3011\r\n56\u7f51D.Largo 
\uff1aLyx\u7ea6\u5b9a\u4e86\u90a3\u4e48\u591a\u73b0\u5728\u4f60\u662f\u5426\u8fd8\u8bb0\u5f97\uff1f\u60f3\u70b9\u4e00\u9996\u6211\u4eec\u5408\u5531\u6700\u591a\u7684\u6b4c\u66f2\u300a\u5e78\u798f\u604b\u4eba\u300b\uff0c\u5e0c\u671b\u4f60\u4f1a\u5e78\u798f\uff0c\u81f3\u5c11\u6bd4\u6211\u8981\u5e78\u798f\u3002\r\n56\u7f51\u65f6\u800c\u95f9\u95f9\u60c5\u7eea\u7684\u4f22\uff1a\u7b28\u7b28\uff0c\u6211\u5f88\u7fa1\u6155\u90a3\u4e2a\u5c06\u5728\u672a\u6765\u966a\u5728\u4f60\u8eab\u8fb9\u7684\u4eba\uff0c\u4ed6\u5f97\u5230\u4e86\u4f60\u7684\u7231\u3002\u4f60\u7684\u7231\uff0c\u6211\u5f97\u4e0d\u5230\uff0c\u4f46\u6211\u7684\u7231\uff0c\u6211\u4f1a\u7ed9\u4f60\uff0c\u76f4\u5230\u6211\u7ed9\u4e0d\u8d77\u7684\u90a3\u4e00\u5929\u3002\r\n\u4eba\u4eba\u7f51\u5415\u5965\uff1a\u6211\u60f3\u70b9\u4e00\u9996\u300a\u5982\u679c\u6ca1\u6709\u4f60\u300b\u7ed9\u8fdc\u65b9\u6b63\u5728\u51c6\u5907\u51fa\u56fd\u7684\u5973\u670b\u53cb\uff0c\u82b3 \u4f60\u597d\u4e45\u6ca1\u6709\u4e3b\u52a8\u8054\u7cfb\u6211\u4e86\u3002\u6211\u77e5\u9053\u4f60\u662f\u5426\u4e5f\u4f1a\u5076\u5c14\u60f3\u8d77\u6211\u3002\r\n\u4eba\u4eba\u7f51\u674e\u6587\u5a77\uff1a\u6211\u60f3\u70b9\u4e00\u9996\u9001\u7ed9\u6770\uff0c\u8c22\u8c22\u4ed6\u4e00\u76f4\u4ee5\u6765\u7684\u5305\u5bb9\u8fd8\u6709\u90a3\u4e9bsurprise\u3002\r\n\u5fae\u535a\u56de\u5fc6\u65f6\u5149\u65e7\u68a6 \uff1avicky_\u5f20\u5c0f\u8d1d \uff0c\u5982\u679c\u4e0d\u662f\u6211\u7684\u8f9e\u804c\uff0c\u4fbf\u4e0d\u4f1a\u9047\u5230\u4f60\uff0c\u4fbf\u4e0d\u4f1a\u6709\u4ee5\u540e\u7684\u6545\u4e8b\u53d1\u751f\uff0c\u661f\u5ea7\u4e0a\u8bf4\u5929\u874e\u548c\u53cc\u9c7c\u662f\u7edd\u914d\uff0c\u80fd\u9047\u5230\u4f60\uff0c\u662f\u6211\u8fd9\u8f88\u5b50\u6700\u5e78\u8fd0\u7684\u4e8b\uff01\r\n\u5fae\u535a\u975c\u5f85\u7159\u96e8\u50be\u57ce \uff1a\u670b\u53cb \u8c5a\u4e0e\u6d77\u6d0b \u7684\u751f\u65e5\uff0c\u6211\u5e0c\u671b\u70b9\u4e00\u9996 \u6c5f\u7f8e\u742a\u7684\u300a\u751f\u65e5\u5feb\u4e50\u300b\u9001\u7ed9\u5979\uff0c\u795d\u5979\u751f\u65e5\u5feb\u4e50\uff0c\u5e0c\u671b\u5979\u5e78\u798f\u5feb\u4e50"}';

$twem->set('testkey123', $json, 2);
$rs = $twem->get('testkey123');
var_dump($rs); // messy as string 'G����{"Content":"2011\u5e749\u6708\uf�f0c\u7b2 ��4e00 ��b2 $�89c ��4f6 ��#�4fbf`

$json = '{"Content":"2011\u5e749\u6708\uff0c\u7b2c\u4e00\u6b21\u89c1\u4f60\uff0c\u4fbf\u89c9\u5f97\u4ece\u672a\u6709\u4eba\u50cf\u4f60\u8fd9\u822c\u628a\u767d\u886c\u886b\u7a7f\u7684\u5982\u6b64\u597d\u770b\u3002\u6211\u4e00\u76f4\u5728\u4f60\u7684\u8eab\u8fb9\u5145\u5f53\u7740\u53ef\u6709\u53ef\u65e0\u7684\u89d2\u8272\uff0c\u53ef\u662f\u6211\u5374\u4e0d\u66fe\u540e\u6094\uff0c\u4e5f\u8bb8\u6709\u4e00\u5929\u6211\u4eec\u53ef\u4ee5\u8d70\u5230\u7ec8\u70b9\u3002\u5373\u4f7f\u4f60\u5fc3\u91cc\u8fd8\u6709\u4e00\u4e2a\u5979\uff0c\u5373\u4f7f\u6211\u5e38\u5e38\u88ab\u4f60\u5ffd\u89c6\u3002\u4e00\u5468\u5e74\u7eaa\u5ff5\u65e5\u65f6\uff0c\u4f60\u8bf4\u5bf9\u4e0d\u8d77\uff0c\u6211\u5f88\u597d\uff0c\u4f46\u6211\u4eec\u4e0d\u9002\u5408\uff0c\u6211\u4f1a\u627e\u5230\u6bd4\u4f60\u66f4\u597d\u7684\u4eba\u3002\u300a\u6211\u6ca1\u6709\u5f88\u60f3\u4f60\u300bJPM\r\n\u4f60\u8bf4\u4e0d\u7231\u7684\u65f6\u5019\uff0c\u6211\u54ed\u4e86\u4e00\u5929\u4e00\u591c\uff0c\u6211\u4e0d\u61c2\u4f60\u4e3a\u4f55\u90a3\u4e48\u8f7b\u6613\u5c31\u653e\u5f03\u4e86\u6211\u4eec\u7684\u7231\u60c5\u3002\u5728\u4f60\u53bb\u76f8\u4eb2\u7684\u90a3\u4e00\u5929\uff0c\u6211\u5e26\u7740\u7b80\u5355\u7684\u884c\u674e\uff0c\u51fa\u53bb\u6d41\u6d6a\u3002\u4e00\u4e2a\u4eba\u8d70\u5728\u5f02\u4e61\u7684\u8857\u9053\uff0c\u4e00\u4e2a\u4eba\u5728\u5f02\u4e61\u7684\u516c\u4ea4\u8f66\u4e0a\u9ed8\u9ed8\u7684\u6d41\u7740\u6cea\u3002\u300a\u6cea\u4e0d\u505c\u300b\u6e29\u5c9a\r\n\u4f60\u4e0d\u66fe\u77e5\u9053\u6211\u54ed\u7740\u5199\u5b8c\u4e00\u5c01\u51e0\u5343\u5b57"}';

$twem->set('testkey123', $json, 2);
$rs = $twem->get('testkey123');
var_dump($rs); // pretty as string '{"Content":"2011\u5e749\u6708\uff0c\u7b2c\u4e00\u6b21\u89c1\u4f60\uff0c\u4fbf\u89c9\u5f97\u4ece\u672a\u6709

I just don't know why. Could it be possible that twemproxy changes some chars when setting the string?

configuration file 'conf/nutcracker.yml' syntax is invalid

Hello, I got the latest branch
and edited nutcracker.yml as

leaf:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11212:1
   - 127.0.0.1:11213:1

then test it with: $ nutcracker -t

but it told me :

configuration file 'conf/nutcracker.yml' syntax is invalid

I also tried renaming it to root.yml, but that doesn't work either.

Ulimit, open files

Hi,

I got this error while running Nutcracker: nc_proxy.c:290 accept on p 7 failed: Too many open files

My ulimit configuration for "open files" was 1024; I increased that value to 32768. I understand it's specific to my environment, but is there something else I should check to make sure I don't run into the same issue again?

Thank you.

Challenges in running multiple instances of twemproxy?

Hello, I have a question about running multiple instances of twemproxy, and specifically what happens when adding/removing memcached backends.

For instance, if I have two instances of twemproxy running with exactly the same configuration, and then need to add a new backend node, what risks are there in terms of having to synchronize the process restarts, etc?

Consider the following scenario:

twemproxy A refers to memcached servers 1, 2 and 3

twemproxy B refers to memcached servers 1, 2 and 3

At this point everything is in synch and working well.

What happens in this case if you:

Add server 4 to twemproxy A and restart the process

Wait 5 minutes

Add server 4 to twemproxy B and restart the process

In the 5 minutes between, are the two instances of twemproxy now using different ring/hashing algorithms?

Thanks in advance - just trying to get a feel for the best practices around using multiple instances of twemproxy.
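For intuition, here is a rough sketch (plain Python, not twemproxy's actual ketama code) of what happens in that window: two proxies built from the same server list agree on every key, while a proxy that already knows about the new server disagrees on roughly the fraction of the keyspace that consistent hashing assigns to the new node. The host names and point counts below are made up for the example.

import hashlib
from bisect import bisect

def build_ring(servers, points_per_server=160):
    # Toy consistent-hash ring: sorted (hash, server) points.
    ring = []
    for s in servers:
        for i in range(points_per_server):
            h = int(hashlib.md5(("%s-%d" % (s, i)).encode()).hexdigest()[:8], 16)
            ring.append((h, s))
    return sorted(ring)

def lookup(ring, key):
    # Walk clockwise to the first point at or after the key's hash.
    h = int(hashlib.md5(key.encode()).hexdigest()[:8], 16)
    idx = bisect([p[0] for p in ring], h) % len(ring)
    return ring[idx][1]

old = ["mc1:11211", "mc2:11211", "mc3:11211"]
ring_a = build_ring(old)                   # proxy A, not yet restarted
ring_b = build_ring(old + ["mc4:11211"])   # proxy B, already knows server 4

keys = ["user:%d" % i for i in range(10000)]
moved = sum(1 for k in keys if lookup(ring_a, k) != lookup(ring_b, k))
print("%.0f%% of keys map differently" % (100.0 * moved / len(keys)))  # roughly 25% with 4 equal servers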

Redis Inline commands

Is there any way to make twemproxy support redis inline commands instead of the unified protocol?

Like GET key or SET key value instead of *2\r\n$3\r\nGET\r\n$3\r\nkey\r\n

Could you point out which part of the code needs to be edited? I'm glad to help.
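For reference, a small sketch in Python of the two wire formats in question (this is just the Redis protocol itself, not twemproxy internals): the unified (multi-bulk) request that twemproxy parses today versus the inline form being asked about.

def unified(*args):
    # Unified/multi-bulk request, e.g. GET key -> "*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n"
    out = "*%d\r\n" % len(args)
    for a in args:
        out += "$%d\r\n%s\r\n" % (len(a), a)
    return out.encode()

def inline(*args):
    # Inline request: command and arguments on one space-separated line.
    return (" ".join(args) + "\r\n").encode()

print(unified("SET", "key", "value"))  # b'*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
print(inline("SET", "key", "value"))   # b'SET key value\r\n'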

Ability to know which servers are currently in the pool

We are seeing some weird network behaviors and we spend a lot of time trying to understand which host is being ejected from the nutcracker pool. If the telnet stats port could also dump the ejected hosts, that would be really useful for plotting graphs.

Expire DNS caching

So we had a pool with a bunch of memcaches in it. One of the memcached servers became unresponsive and we had to swap it with a new server. We brought up the new machine with the same hostname as the dead one, expecting nutcracker to try the new server. But nutcracker performs a DNS lookup on startup and uses the IP address to connect to the hosts. In this case we had to restart nutcrackers all over the place. Can we use hostnames? That might not be ideal either. So a cache of hostname-to-IP-address mappings with some 15-minute expiration would be ideal.

Confused with the rehashing policy

Now we have a 2-node twemproxy cluster; after adding a new node, here comes a disaster. We "LOST" all our existing data. In fact, the data is still on the old 2 nodes, but I can't access it any longer through twemproxy because of the newly added 3rd node.
I understand the data would be distributed across the nodes by the key hashing policy, but I expected the existing data on the nodes to adapt to my scaling of the nodes, and the situation I faced is not like that. So I wonder whether there is anything I missed about the rehashing policy of twemproxy. In my mind, after a new node is added, twemproxy could still route my access of "old" data to the "old" nodes.

Invalid argument

Hello, I set up two proxies like:

nodurex0:
  listen: 192.168.0.22:23333
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 192.168.0.22:15555:1       
   - 192.168.0.22:16666:1


nodurex1:
  listen: 192.168.0.22:24444 
  hash: fnv1a_64 
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 2000 
  server_failure_limit: 1
  servers:
   - 192.168.0.22:17777:1       
   - 192.168.0.22:18888:1

And when my java memcache client sets a value to memcache, it gives me

[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 46 '192.168.0.155:62851' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument
[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 16 '192.168.0.155:62821' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument
[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 31 '192.168.0.155:62836' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument
[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 32 '192.168.0.155:62837' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument
[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 47 '192.168.0.155:62852' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument
[Fri Jan  4 11:28:36 2013] nc_core.c:207 close c 17 '192.168.0.155:62822' on event 0001 eof 0 done 0 rb 9 sb 0: Invalid argument

I know it becomes one port copying to another, but I don't know how to fix it.

Connection re-try after failed instance before accepting commands

I have tried setting up a 3-node cluster of nutcracker and 2 redis servers using the config below:

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  preconnect: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - tp.dev.infinity.local:6379:1
   - rn.dev.infinity.local:6379:1

This works fine, until I stop one of the redis servers.

Then requests through nutcracker for keys that are hashed to the downed redis server are generating an error to the requesting client each time the server_retry_timeout passes and an attempt to reconnect is made.

Is there a way of having nutcracker try re-connecting to the downed redis server before re-adding it to the hash ring? I had thought that this was what the preconnect option was for, but I think I have misunderstood its use.

Thanks

how can I print the process of nutcracker

I'm new to nutcracker.
I have built nutcracker on my Linux box, and it can run redis already. How can I tell whether nutcracker is running, and how can I print the nutcracker process?

Connection timed out to AWS elasticache

Found a strange bug when trying to run twemproxy with a cluster of elasticache (Amazon cloud memcached) servers. Amazon uses CNAMEs as entry points for elasticache servers, and twemproxy could connect to the backend memcached on start but couldn't send any requests to them. If I use "direct" hostnames for the backend servers, all requests are ok.

user@localhost:~$ telnet my.proxy.server 11311
Trying xx.xx.xx.xx...
Connected to xx.xx.xx.xx.
Escape character is '^]'.
get foo
SERVER_ERROR Connection timed out
^]

twemproxy config:

staging-cache:
  listen: 0.0.0.0:11311
  hash: fnv1a_64
  distribution: ketama
  timeout: 10000
  backlog: 1024
  preconnect: true
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 3
  servers:
   - myserver.0001.use1.cache.amazonaws.com:11211:1
   - myserver.0002.use1.cache.amazonaws.com:11211:1
   - myserver.0003.use1.cache.amazonaws.com:11211:1
   - myserver.0004.use1.cache.amazonaws.com:11211:1

twemproxy was running as

[email protected]:~$ nutcracker -c /etc/nutcracker.yml -v 11

Here is part of twemproxy log: http://pastebin.com/DTE8gAva

When I modified servers section:

  servers:
   - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
   - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
   - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1
   - ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11211:1

I received response:

user@localhost:~$ telnet my.proxy.server 11311
Trying xx.xx.xx.xx...
Connected to xx.xx.xx.xx.
Escape character is '^]'.
get foo
END
^]

And, of course, *.cache.amazonaws.com could be resolved from instance where twemproxy is running:

[email protected]:~$ host myserver.0002.use1.cache.amazonaws.com
myserver.0002.use1.cache.amazonaws.com is an alias for ec2-xx-xx-xx-xx.compute-1.amazonaws.com.
ec2-xx-xx-xx-xx.compute-1.amazonaws.com has address xx-xx-xx-xx

P.S. Oct 26 code snapshot was used; Ubuntu 12.04.1 x86_64

After adding new nodes, the data does not automatically move to the new nodes

According to consistent hashing, my access to some old data on the old nodes would be routed by twemproxy to newly added empty nodes, so some of my data would be "missing" in the end. Does twemproxy provide an automatic way to move the affected (by key hashing) data to the newly added nodes? Or could you show me a manual workaround instead?

New nodes lead to inaccurate data read!!!

This is my configuration file:

beta:
  listen: 0.0.0.0:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  timeout: 400
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1 guanyu
   - 127.0.0.1:6380:1 liubei

I have 8 GB of data distributed across node 1 and node 2. I want to add a node 3, but after increasing to 3 nodes, data reads become inaccurate.
Could you please tell me how to correctly add a new node?

PS: node3 - 127.0.0.1:6370:1 zhangfei

Is there an easy way to delete a key on all nodes in the cluster at once

When enabling the option "auto_eject_hosts", it turns out to be normal that the same "key" ends up SAVED on one or multiple nodes, especially when the nodes get busy, as usual...

I am now using the brute-force method of deleting all of them by:
connecting to each of the nodes and sending "DELETE key".

But imagine if I have 20 nodes... then each REFRESH operation requires 20 deletes.
My question is whether there is an easier way to refresh/unset/delete the key.
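Twemproxy itself has no broadcast delete, so the workaround described above stays manual. A rough sketch of that loop in Python, bypassing the proxy and speaking the memcached text protocol directly (the node list is a placeholder):

import socket

NODES = [("10.0.0.1", 11211), ("10.0.0.2", 11211)]  # placeholder backend list

def delete_everywhere(key):
    # Send "delete <key>" to every backend; each answers DELETED or NOT_FOUND.
    for host, port in NODES:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(("delete %s\r\n" % key).encode())
            print(host, s.recv(64).decode().strip())

delete_everywhere("userdata:12345")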

Can't use netcat with twemproxy

When I nc one of the nodes of the memcached group:
$ echo get key | nc 127.0.0.2 20000
It works.

But when I nc the twemproxy:
$ echo get key | nc 127.0.0.1 22121
It doesn't...

Why ?

Just can't delete through PHP's memcache extension

When using the memcache extension in PHP to connect to twemproxy, within one script, set/get all fail just after delete(); I don't know why...

$twem = new Memcache;
$twem->addServer('127.0.0.1', 22121);

$twem->set('key1', 123);
$rs = $twem->get('key1');
var_dump($rs); // string(3) "123"
$ddd = $twem->delete('key1');

$twem->set('key1', 123);
$rs = $twem->get('key1');
var_dump($rs); // bool(false)

SIGSEGV in redis branch

While running an update process against my Redis instance, a segmentation fault occurred. The process was sending a lot of HMSETs, HDELs and PERSISTs to Redis.

The SIGSEGV happens every time I run my update process.

My update process uses hiredis, opening a single TCP connection to the server and sending commands with the async API.

Config:

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:63700:1

Compiled with debug logs, the output was the following:

[Mon Aug 13 17:33:29 2012] nc_parse.c:1052 parsed req 121 res 0 type 37 state 0 rpos 31 of 316
[Mon Aug 13 17:33:29 2012] nc_parse.c:1053
00000000 2a 32 0d 0a 24 37 0d 0a 50 45 52 53 49 53 54 0d |*2..$7..PERSIST.|
00000010 0a 24 38 0d 0a 6f 4d ff 3c 05 bd fa 0f 0d 0a 2a |.$8..oM.<......*|
00000020 38 0d 0a 24 35 0d 0a 48 4d 53 45 54 0d 0a 24 38 |8..$5..HMSET..$8|
00000030 0d 0a 6f af 00 3d 05 48 be 0d 0d 0a 24 31 0d 0a |..o..=.H....$1..|
00000040 64 0d 0a 24 31 30 30 0d 0a 54 6f 63 61 20 44 56 |d..$100..Toca DV|
00000050 44 20 6e 6f 20 66 6f 72 6d 61 74 6f 20 64 65 20 |D no formato de |
00000060 63 61 72 72 6f 2c 20 63 6f 6d 20 65 6e 74 72 61 |carro, com entra|
00000070 64 61 20 55 53 42 20 65 20 63 6f 6e 74 72 6f 6c |da USB e control|
00000080 65 20 72 65 6d 6f 74 6f 2c 20 63 6f 6d 70 61 74 |e remoto, compat|
00000090 ed 76 65 6c 20 63 6f 6d 20 44 56 44 20 2f 20 56 |.vel com DVD / V|
000000a0 43 44 20 2f 20 43 44 20 2f 20 4d 50 33 0d 0a 24 |CD / CD / MP3..$|
000000b0 31 0d 0a 69 0d 0a 24 36 30 0d 0a 68 74 74 70 3a |1..i..$60..http:|
000000c0 2f 2f 74 68 75 6d 62 73 2e 62 75 73 63 61 70 65 |//thumbs.buscape|
000000d0 2e 63 6f 6d 2e 62 72 2f 54 31 30 30 78 31 30 30 |.com.br/T100x100|
000000e0 2f 5f 5f 32 2e 39 30 30 36 38 30 2d 35 33 64 30 |/__2.900680-53d0|
000000f0 30 61 66 2e 6a 70 67 0d 0a 24 31 0d 0a 70 0d 0a |0af.jpg..$1..p..|
00000100 24 35 36 0d 0a 68 74 74 70 3a 2f 2f 77 77 77 2e |$56..http://www.|
00000110 73 68 6f 70 69 6e 74 65 72 6e 61 63 69 6f 6e 61 |shopinternaciona|
00000120 6c 2e 63 6f 6d 2e 62 72 2f 3f 70 61 67 65 3d 64 |l.com.br/?page=d|
00000130 65 74 61 6c 68 65 26 69 64 3d 32 38 |etalhe&id=28|
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10060) [0x7f427cb4b060]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [1] /lib/x86_64-linux-gnu/libc.so.6(_IO_vfprintf+0x2775) [0x7f427c55fc65]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [2] /lib/x86_64-linux-gnu/libc.so.6(__vsnprintf_chk+0xb0) [0x7f427c60ea90]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [3] ./src/nutcracker(_vscnprintf+0x1b) [0x417cdb]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [4] ./src/nutcracker(_log+0x151) [0x416c31]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [5] ./src/nutcracker(parse_request+0xb65) [0x40fa85]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [6] ./src/nutcracker(msg_recv+0xb9) [0x40cf99]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [7] ./src/nutcracker(core_loop+0x1c9) [0x4096a9]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [8] ./src/nutcracker(main+0x528) [0x408b68]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [9] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f427c53730d]
[Mon Aug 13 17:33:29 2012] nc_util.c:291 [10] ./src/nutcracker() [0x408f59]
[Mon Aug 13 17:33:29 2012] nc_signal.c:122 signal 11 (SIGSEGV) received, core dumping
Segmentation fault

Redis RPOPLPUSH

Hi - Really love this project, but I was wondering if there are any plans to support the RPOPLPUSH list operation in Redis?

Distribution: all servers

I need to dispatch my memcache query to all servers in the list (a set query).
I have tried the three modes: ketama, modula and random, but in each case only one server receives the query.

Any idea how to do that?

How to add a new node?

This is my running configuration file:

beta:
  listen: 0.0.0.0:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  timeout: 400
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  servers:
   - 152.121.123.24:6379:1 guanyu
   - 152.121.123.23:6379:1 zhangfei

After I add a new node (" - 152.121.123.22:6379:1 liubei"), the data goes wrong.
I want to add a new node without the data going wrong. How do I do it?

Allow an optional instance name, use it for consistent hashing

The problem

Twemproxy can be configured in order to avoid auto ejecting nodes, and when it is configured this way the user can rely on the fact that a given key will always be mapped to the same server, as long as the list of hosts remains the same.

This is very useful when using the proxy with Redis, especially when Redis is not used as a cache but as a data store, because we are sure keys are never moved to other instances, never leaked, and so forth, so the cluster is consistent.

However, since Twemproxy adds a given host into the hash ring by hashing the ip:port:priority string directly, it is not possible for users to relocate instances without, as a side effect, changing the key-instance mapping. This little detail makes it very hard to work with Twemproxy and Redis in production environments where network addresses can change.

Actually this is a problem with Memcached as well. For instance if our memcached cluster changes subnet, the consistent hashing will completely shuffle the map, and this will result in many cache misses after the reconfiguration.

Proposed solution

The proposed solution is to change the configuration so that instead of a list of instances like:

servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1
   - 127.0.0.1:6382:1

It is (optionally) possible to specify a host / name pair for every instance:

servers:
   - 127.0.0.1:6379:1 server1
   - 127.0.0.1:6380:1 server2
   - 127.0.0.1:6381:1 server3
   - 127.0.0.1:6382:1 server4

When an instance name is specified, it is used to insert the node in the hash ring instead of hashing the ip:port:priority string.
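
A quick sketch of why this helps (illustrative Python, not the actual twemproxy ketama code): continuum points derived only from the instance name stay put when the ip:port changes, whereas points derived from the ip:port:priority string all move, reshuffling the key mapping. The addresses and names below are made up.

import hashlib

def continuum_points(node_id, points=160):
    # Toy ketama-style points for one node, derived only from node_id.
    return [int(hashlib.md5(("%s-%d" % (node_id, i)).encode()).hexdigest()[:8], 16)
            for i in range(points)]

# Hashing ip:port:priority: relocating the instance changes every point.
print(continuum_points("10.0.0.5:6379:1") == continuum_points("10.0.0.9:6379:1"))  # False

# Hashing the optional name: the same name yields the same points no matter
# what address the node currently has, so the key mapping is unchanged.
print(continuum_points("server1") == continuum_points("server1"))                  # True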

Open problems

One open problem with this solution is that modifying the priority will still mess with the mapping.
There are several solutions to this problem:

  • Simply ignore the problem and warn the user in the documentation.
  • Ignore the priority when an instance name is specified.
  • Ignore the priority when an instance name is specified, but read it instead from the name. For instance an instance name like "myserver:100" has priority 100. In this way it is obvious that to change the priority the user is forced to change the name, and hence the map.

How to move keys (data as well) between nodes added or removed?

According to issue #49, I was told to use a manual workaround to fix the problem. But I don't know how to manually move the affected keys and data from existing nodes to newly added nodes under the hashing policy. Could somebody give some hints for this situation?

I am also interested in how Twitter or other teams shard existing redis data while scaling with twemproxy. A full copy would not be a nice answer for me.

Robust hash ring failure retry mechanism

So once a host has failed and we have a server_retry_timeout of 30 secs, nutcracker retries the failed host with production traffic. I think nutcracker needs to perform a background heartbeat request, like fetching a simple key, to make sure the host is up.
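
A rough sketch of the kind of out-of-band health check being proposed, using a raw redis PING as the probe (Python, hypothetical; twemproxy has no such hook today, and a memcached "version" command would serve the same purpose):

import socket

def host_is_up(host, port, timeout=0.5):
    # Probe the backend with PING; only a healthy redis answers +PONG.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"*1\r\n$4\r\nPING\r\n")
            return s.recv(16).startswith(b"+PONG")
    except OSError:
        return False

# Only put the ejected host back into the hash ring once the probe succeeds:
if host_is_up("10.0.0.7", 6379):
    print("re-add host to the ring")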

Memory Leak on mbufs

Seeing multi-gigabyte RES sizes for 36 clients each over 4 pools after running some 8-9 hours. Without valgrind attached, the process starts out using ~20-30 MB; then after a few minutes you can watch it jump a few hundred MB in a matter of seconds, then slowly climb for many minutes/hours, then jump again, ad nauseam.

Here's what valgrind dumped after throwing a subset of traffic thru nutcracker:

==7153== HEAP SUMMARY:
==7153== in use at exit: 163,774,838 bytes in 20,859 blocks
==7153== total heap usage: 26,161 allocs, 5,302 frees, 164,485,928 bytes allocated
==7153==
==7153== 272 bytes in 1 blocks are possibly lost in loss record 9 of 47
==7153== at 0x4C279FC: calloc (vg_replace_malloc.c:467)
==7153== by 0x4011AB8: _dl_allocate_tls (dl-tls.c:300)
==7153== by 0x4E36871: pthread_create@@GLIBC_2.2.5 (allocatestack.c:570)
==7153== by 0x418BD8: stats_start_aggregator (nc_stats.c:844)
==7153== by 0x418EEE: stats_create (nc_stats.c:931)
==7153== by 0x404FDA: core_ctx_create (nc_core.c:67)
==7153== by 0x4052DC: core_start (nc_core.c:137)
==7153== by 0x41D53E: nc_run (nc.c:484)
==7153== by 0x41D684: main (nc.c:533)
==7153==
==7153== 14,155,776 bytes in 864 blocks are possibly lost in loss record 45 of 47
==7153== at 0x4C28FAC: malloc (vg_replace_malloc.c:236)
==7153== by 0x41BD4B: _nc_alloc (nc_util.c:221)
==7153== by 0x410568: _mbuf_get (nc_mbuf.c:53)
==7153== by 0x4105B9: mbuf_get (nc_mbuf.c:91)
==7153== by 0x40C3CE: msg_recv_chain (nc_message.c:534)
==7153== by 0x40C639: msg_recv (nc_message.c:592)
==7153== by 0x405362: core_recv (nc_core.c:164)
==7153== by 0x4059BF: core_core (nc_core.c:297)
==7153== by 0x405AE3: core_loop (nc_core.c:326)
==7153== by 0x41D558: nc_run (nc.c:491)
==7153== by 0x41D684: main (nc.c:533)
==7153==
==7153== 51,625,984 bytes in 3,151 blocks are possibly lost in loss record 46 of 47
==7153== at 0x4C28FAC: malloc (vg_replace_malloc.c:236)
==7153== by 0x41BD4B: _nc_alloc (nc_util.c:221)
==7153== by 0x410568: _mbuf_get (nc_mbuf.c:53)
==7153== by 0x4105B9: mbuf_get (nc_mbuf.c:91)
==7153== by 0x410CC0: mbuf_split (nc_mbuf.c:244)
==7153== by 0x40C004: msg_fragment (nc_message.c:434)
==7153== by 0x40C317: msg_parse (nc_message.c:503)
==7153== by 0x40C54B: msg_recv_chain (nc_message.c:559)
==7153== by 0x40C639: msg_recv (nc_message.c:592)
==7153== by 0x405362: core_recv (nc_core.c:164)
==7153== by 0x4059BF: core_core (nc_core.c:297)
==7153== by 0x405AE3: core_loop (nc_core.c:326)
==7153==
==7153== 92,979,200 bytes in 5,675 blocks are possibly lost in loss record 47 of 47
==7153== at 0x4C28FAC: malloc (vg_replace_malloc.c:236)
==7153== by 0x41BD4B: _nc_alloc (nc_util.c:221)
==7153== by 0x410568: _mbuf_get (nc_mbuf.c:53)
==7153== by 0x4105B9: mbuf_get (nc_mbuf.c:91)
==7153== by 0x410CC0: mbuf_split (nc_mbuf.c:244)
==7153== by 0x40BE24: msg_parsed (nc_message.c:398)
==7153== by 0x40C2FB: msg_parse (nc_message.c:499)
==7153== by 0x40C54B: msg_recv_chain (nc_message.c:559)
==7153== by 0x40C639: msg_recv (nc_message.c:592)
==7153== by 0x405362: core_recv (nc_core.c:164)
==7153== by 0x4059BF: core_core (nc_core.c:297)
==7153== by 0x405AE3: core_loop (nc_core.c:326)
==7153==
==7153== LEAK SUMMARY:
==7153== definitely lost: 0 bytes in 0 blocks
==7153== indirectly lost: 0 bytes in 0 blocks
==7153== possibly lost: 158,761,232 bytes in 9,691 blocks
==7153== still reachable: 5,013,606 bytes in 11,168 blocks
==7153== suppressed: 0 bytes in 0 blocks

Redis AUTH Support

There doesn't seem to be a mailing list (that I've found) so I ask here: are there any plans to support the Redis AUTH command? I've got a set of servers that I'd like to migrate to twemproxy but we need to be able to authenticate.

I think the general use case would be to try the auth against any node in the pool and assume that any one of them can be considered authoritative. Thus it seems like a relatively simple addition, though there may be other concerns I've not considered yet (I've just run into this).

MSET

Do you plan on introducing fully fledged support for multi-key commands: MSET, MGET, DEL? I mean exactly the fully fledged support, not the one based on "hash tags" (#9).

easy feature request: blacklist commands / route by blacklist

For each 'server' of each 'serverpool' we could have a list of prohibited commands.
This would be great because:

  1. Better security than redis' command renaming
  2. Routing specific traffic to a specific server in a serverpool: e.g. if I decide to separate PubSub functionality from storage, I'll just blacklist four commands on the storage server and Twemproxy will route them to the other server.

memcache set key bug

[Fri Jan 25 14:58:03 2013] nc_core.c:207 close c 8 '127.0.0.1:39473' on event 0005 eof 1 done 1 rb 20 sb 33
[Fri Jan 25 14:58:04 2013] nc_proxy.c:337 accepted c 8 on p 7 from '127.0.0.1:39475'
[Fri Jan 25 14:58:04 2013] nc_core.c:207 close c 8 '127.0.0.1:39475' on event 0005 eof 1 done 1 rb 20 sb 8
[Fri Jan 25 14:58:04 2013] nc_proxy.c:337 accepted c 8 on p 7 from '127.0.0.1:39476'
[Fri Jan 25 14:58:04 2013] nc_core.c:207 close s 12 '127.0.0.1:11211' on event 0019 eof 0 done 0 rb 0 sb 0: Connection refused
[Fri Jan 25 14:58:04 2013] nc_core.c:207 close c 8 '127.0.0.1:39476' on event 0005 eof 1 done 1 rb 20 sb 33

alpha:
  listen: 127.0.0.1:22121
  hash: murmur
  distribution: ketama
  auto_eject_hosts: true
  redis: false
  server_retry_timeout: 400
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11211:1
   - 127.0.0.1:11212:1
   - 192.168.0.1:11212:1
   - 192.168.0.1:11211:1

I set key "b":
printf "set b 0 0 5\r\nabcde\r\n" | nc 127.0.0.1 2212

I print key "b" directly from the backend:
printf "get b\r\n" | nc 127.0.0.1 11211

I kill the memcached 11211 process.

I set key "b" again:
printf "set b 0 0 5\r\nabcde\r\n" | nc 127.0.0.1 22121
SERVER_ERROR Connection refused

But when I repeat setting key "b", the set succeeds:
printf "set b 0 0 5\r\nabcde\r\n" | nc 127.0.0.1 22121
STORED

But getting the value of key "b" fails:
printf "get b\r\n" | nc 127.0.0.1 22121
SERVER_ERROR Connection refused

questions

  1. Before you started work on twemproxy, did you consider using moxi: https://github.com/steveyen/moxi? If yes, why did you reject moxi?
  2. Are you planning to support binary protocol in addition to ascii?
  3. Do you see any need for multi-threaded support? Moxi supports both single and multi-threaded configurations.

Comment: I definitely like the implementation of twemproxy better than moxi. Much cleaner. Nice work.

Use server names in stats

it would be helpful to have the server names used in the stats.

servers:
 - 127.0.0.1:6380:1 server1
 - 127.0.0.1:6381:1 server2
 - 127.0.0.1:6382:1 server3
 - 127.0.0.1:6383:1 server4

Question about multiget

Hi there,

First of all, thanks a lot for committing such a great project to GitHub. It looks really nice and I'm currently experimenting with it.

While I was doing my tests I noticed that multigets are broken apart and translated into single gets. Is there any specific reason behind this? Isn't a multiget faster on memcached than issuing single gets? Or am I wrong here?

get user-1 user-2 user-3\r\n

translates into

get user-1 \r\n (note the extra space)
get user-2 \r\n (note the extra space)
get user3\r\n (no extra space on last part)

Thanks a lot,
Nicolas

feature request: kqueue support

feature request for kqueue support.

$ ./configure
....
configure: error: required sys/epoll.h header file is missing

It would be nice to be able to use twemproxy/nutcracker while doing development on osx, as well as possible deployment onto FreeBSD. Would it be possible to add kqueue support?

configuration file 'conf/nutcracker.yml' syntax is invalid

my conf/nutcracker.yml is:
listen: 127.0.0.1:22123
hash: fnv1a_64
distribution: ketama
timeout: 400
backlog: 1024
preconnect: true
auto_eject_hosts: true
server_retry_timeout: 2000
server_failure_limit: 3
servers:
 - 127.0.0.1:11212:1
 - 127.0.0.1:11213:1

When I run /src/nutcracker -t it shows:
configuration file 'conf/nutcracker.yml' syntax is invalid
Why? Help, please!
