
pigato's People

Contributors

alexeygolev, allain, bmeck, denisgorbachev, julbra, maxired, moperacz, oprearocks, prdn


pigato's Issues

way for workers to reject handling a message

design / bikeshed

Sometimes workers do have state, due to being on a machine with files / other services (unfortunately). Being able to tell the broker that the worker cannot handle the message, so that it tries another worker, would be amazing.

Feel free to discuss this; I will be implementing it later in our implementation and would rather it synced up with upstream.
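
A minimal sketch of what such a hook might look like, purely as a starting point for discussion (the rep.reject() call and the canHandleLocally helper are hypothetical, not part of pigato today):

var PIGATO = require('pigato');

var worker = new PIGATO.Worker('tcp://127.0.0.1:55555', 'fileService');

worker.on('request', function (inp, rep) {
  // canHandleLocally is a hypothetical predicate over this machine's state
  if (!canHandleLocally(inp)) {
    // hypothetical API: tell the broker to requeue this request
    // for another worker of the same service
    rep.reject();
    return;
  }
  rep.end('done');
});

worker.start();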

Architecture Question

Hello,

This is a very promising framework, thank you for creating it! I have done a lot of work with RabbitMQ but not much at all with ZMQ. I'm interested to know what architecture this framework best represents. The architectures I'm referring to can be found in this article:

http://zeromq.org/whitepapers:brokerless

I'm mostly concerned with how this framework will handle high concurrency at scale. In particular, I wonder whether the drawbacks outlined in the article are addressed by this framework?

Best,
Aaron

bug Client.js#108

I think this condition can never be true:

err = err.toString();
if (err === 0) {
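
After toString(), err is always a string, so strict equality against the number 0 can never match. Assuming the intent was to treat a status of 0 as "no error", a minimal fix sketch:

err = err.toString();
// compare against the string form; a number literal never
// strict-equals a string (assumed intent: status 0 means no error)
if (err === '0') {
  err = null;
}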

kiuma

What does "test/system.js" test?

Hi,

Looking at test/system.js, I am not sure what this file is supposed to test.
It looks like a performance test, but the fact that there are no asserts is kind of weird.

Because the connection between a client and a worker is asynchronous, we don't really know what is done in the meantime.

way to map/query across all workers

design / bikeshed

This is a common problem: being able to run a reduce across workers is important.

Right now we can use a registry separate from the broker to generate a list of workers and run a reduce across them.

Being able to wait on / run a reduce across all workers of a service would be a big win. Technically, all that is needed is the ability to:

  • get a list of workers for a service
  • send targeted messages to workers

Since pigato is more than just a simple OMDP implementation, we should discuss whether this should be a service or built into the broker.
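
A sketch of how those two primitives could compose from the client side, assuming a separate registry service (the 'registry' service name, its payload shape, and the target option are all hypothetical; client.request(service, data, partialCb, finalCb, opts) is per the README):

var PIGATO = require('pigato');

var client = new PIGATO.Client('tcp://127.0.0.1:55555');
client.start();

// hypothetical: a registry service that returns the worker ids of a service
client.request('registry', { workersOf: 'wordcount' }, undefined, function (err, workerIds) {
  var results = [];
  workerIds.forEach(function (wid) {
    // hypothetical: `target` addresses the request to one specific worker
    client.request('wordcount', {}, undefined, function (err, count) {
      results.push(count);
      if (results.length === workerIds.length) {
        // the reduce step across all workers of the service
        console.log(results.reduce(function (a, b) { return a + b; }, 0));
      }
    }, { target: wid });
  });
});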

Should disconnect and heartbeat be symmetrical?

Hi,

I am wondering whether we should change the behavior of heartbeat and disconnect to be symmetrical.

Let me explain :

  • On the Worker side, after I call 'close', there is currently no way to know that the disconnection has completed correctly.
    • I suggest that upon reception of a W_DISCONNECT message from a Worker, the broker acknowledges it. Maybe we should also send back a W_DISCONNECT, or maybe introduce a new message such as W_DISCONNECTED.
  • On the client side, there is no way to know that a broker is still there. While we send W_HEARTBEAT to the broker, we never get any back. We can also see that 'HEARTBEAT_LIVENESS' is never used on the client.
    • I suggest that upon reception of a W_HEARTBEAT without a RequestId, the broker answers the client with a W_HEARTBEAT.
  • On the client side, when I call 'stop', we don't contact the broker(s) to tell them that we are leaving. I am not sure how ZeroMQ will behave in the case where the broker tries to forward a response from a worker but the client disconnected beforehand.
    • Would an exchange of W_DISCONNECT messages make sense in this case?

The goal I am trying to pursue here is to be able to build a callback system on top of connection/disconnection events, so that as a user of the library I never have to set up timeouts.

Please let me know your opinion about all of these.

Heartbeat from client can be forwarded to the wrong worker

Hi,

When I connect a client to 2 (or more) brokers, and each of the brokers has a worker for the current service,
then during a long request, when I call 'heartbeat(req.id)' on my client, the heartbeat is not always forwarded to the right worker.

I suspect it is because of the way the dealer socket works in ZMQ, but I haven't investigated further.

Multiple brokers?

Does PIGATO.Client really support connecting to multiple brokers? The code says this.socket.connect(this.broker) (a single connect call), while the original ZMQ guide says "A subscriber can connect to more than one publisher, using one connect call each time" (multiple connect calls). So, which is right? :)

Or am I missing the point of connect?
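
For reference, the underlying node zmq binding does accept several connect calls on a single socket; whether PIGATO.Client exposes this is the open question:

var zmq = require('zmq');

var socket = zmq.socket('dealer');
// one connect call per endpoint; a DEALER socket round-robins
// outgoing messages across all connected endpoints
socket.connect('tcp://127.0.0.1:55555');
socket.connect('tcp://127.0.0.1:55556');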

[Worker] W_DISCONNECT not reliably sent

Hi,

As described in #31, when we call worker.stop(), the W_DISCONNECT message is not always sent.
That is why I think it should be acknowledged before the zmq socket is closed.

Please find here a test showing this behavior.

Add wildcard for service name

Hi,

The current implementation of the broker doesn't allow using a wildcard mechanism to route queries.
I think this would be a great improvement.

I am trying to serve a website with pigato, and I want different workers to deal with dynamic queries or static files, without any configuration except on the worker.

I would like my worker dealing with static files to be started with

var worker = new Worker('tcp://' + conf.broker.host + ':' + conf.broker.port, 'http/mywebsite.com/*')

whereas dynamic URIs would be declared with the full URI, such as

var worker = new Worker('tcp://' + conf.broker.host + ':' + conf.broker.port, 'http/mywebsite.com/stock')

Would you be open to a contribution in this direction?

Code Documentation

Even though the code in the library is quite easy to understand, it might be worth considering adding JSDoc comments.

I would happily do this if others agree it's a good idea.

[Broker] No HEARTBEAT message sent to Worker

Hi,

I just figured out that the Broker currently doesn't send heartbeat messages back to workers (except heartbeats for a specific request).

This leads to workers disconnecting and reconnecting endlessly.

Broker and project organisation

@maxired I like the idea of letting devs override the routing system.
Keeping that in mind, we should think about the best solution performance-wise.
We should have a method in the broker, like route, that allows devs to override the default worker/service selection behaviour.
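
A minimal sketch of such a hook, just to frame the discussion (the route name and its signature here are hypothetical):

var PIGATO = require('pigato');

var broker = new PIGATO.Broker('tcp://*:55555');

// hypothetical override point: given a service name and the list of
// candidate workers, return the worker that should get the request
broker.route = function (service, workers) {
  return workers[0]; // default-like behavior: first available worker
};

broker.start();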

I think we should have the tiniest broker possible and then create a separate project with addons that override the default behaviours.

The wildcard and versioning systems can be viewed at the moment as "core" features, because they are really useful in almost any scenario. Still, we may want to turn these features into addons in the future.
In any case, we have to maintain the cleanest and tiniest code base possible.

The future I see for the pigato broker is mainly a directory service and load balancer.
Something that I really want is for the broker to continue to act like a router, but also to let workers and clients speak directly when possible (depending on factors like network access, authorisation, ...).

Let's start moving in that direction.

response.end() first argument required

Currently, pigato requires the worker to call response.end(something). If you just call response.end(), the W_REPLY will not be sent. That means you can't fs.createReadStream("/some/file").pipe(response) (despite what the README states). The official Streams API states that the last argument to .end() is optional, and fs.createReadStream indeed calls .end() without arguments.

Could you please fix it?
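
For reference, here is the README-style pattern that currently breaks, since the piped stream finishes with a bare .end() call:

var fs = require('fs');
var PIGATO = require('pigato');

var worker = new PIGATO.Worker('tcp://127.0.0.1:55555', 'files');

worker.on('request', function (inp, res) {
  // when the file is exhausted, pipe() calls res.end() with no
  // argument, so under the current behavior no W_REPLY is sent
  fs.createReadStream('/some/file').pipe(res);
});

worker.start();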

worker heartbeat not working

I don't know if I'm reading something wrong, but it seems that the heartbeat part of Worker.prototype.onMsg checks for params that are not passed by Worker.prototype.heartbeat: it lacks the rid param, and requires a clientId that is never used.

Sending heartbeat:

  ...
  this.send([
    MDP.WORKER, MDP.W_HEARTBEAT, '',
    JSON.stringify({
      concurrency: this.conf.concurrency
    })
  ...

Heartbeat processing in onMsg requires the message to have 5 parameters (and it only has 2):

if (msg.length === 5) {
  clientId = msg[2];
  rid = msg[4];
  if (rid && this.reqs[rid]) {
    this.reqs[rid].liveness = HEARTBEAT_LIVENESS;
  }
}

It's a bit confusing to me... there are two different "livenesses": one is [this|self].liveness and the other is req.liveness. When checking whether a reply is active, req.liveness is tested, but the stream stays open because the check below is done on this.liveness (which is reset to HEARTBEAT_LIVENESS on every message):

if (self.liveness <= 0) {
  ...
  self.stop();

interoperability with other Majordomo 0.2 implementations

I'm trying to use pigato to implement a client and broker, but C++ to implement a worker, based on ZeroMQ's reference Majordomo protocol implementation.
However, pigato seems to send/expect different protocol strings, so they don't work together,
e.g.
'C' for the client instead of 'MDPC02'.

Is pigato compatible with the Majordomo protocol?

rmap is growing

It seems that rmap will grow indefinitely as long as the worker which handled the request is alive. Is there a reason for this?

socket.identity must be unique for each fresh socket

When a Worker is stopped and restarted, its socket is closed and then a new one is created.

Both sockets share the same identity (the worker name). This causes the second socket to silently drop any messages it sends out.

I was able to resolve this by setting the identity of the created socket to this.conf.name + Date.now(). The sockets don't interfere with each other any more, but then a whole bunch of unit tests start failing, since the identity of the socket is used to identify the Worker.
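
A sketch of the workaround (this.conf.name and this.socket follow my reading of the Worker internals, so treat the field names as assumptions):

var zmq = require('zmq');

// inside Worker.prototype.start (sketch): on each (re)start,
// suffix the identity so a restarted Worker never reuses the old one
this.socket = zmq.socket('dealer');
this.socket.identity = this.conf.name + '-' + Date.now();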

Timeout if the broker handling client request is shut down

I implemented PIGATO with the following architecture:

I placed 2 brokers and 2 workers:

broker 1 - has 2 registered workers
broker 2 - has 2 registered workers

The client uses both brokers, and each request goes through either one.
If I stop the broker that is handling a request, the client receives a timeout and is not redirected to the other broker
(the message seems to get lost).

Is there any way to distinguish this timeout from the timeout that occurs when a worker fails (or never sends a response)?

Not able to connect a remote worker

Hi,

We are trying to implement the microservice concept in our project using Node.js. We are using Pigato 0.0.38 for this. The client and worker concept works nicely when the workers, broker, and client are on a single machine. In that case, I used the host "tcp://127.0.0.1:3008".

I am trying to implement the same across different servers, which means the client and broker are hosted on one server and the workers are hosted on a different server. I am using the Amazon free tier. In this case, I have changed the host to the Amazon IP, but it is not working properly.

Below is the code where I give the broker IP and initiate the worker class.

var Worker = require('pigato').Worker;
var worker = new Worker('tcp://brokerHostingIp:3008', 'workername');

Basically, I would like to know whether this library supports workers hosted on a different server.
If it does, please let me know what I missed.

My server details are below.

Node version : 0.12.2
NPM version : 2.7.4
AWS Amazon linux : 3.14.35-28.38.amzn1.x86_64

Thanks
RamP

Callback functions

The use of callback functions when EventEmitters are readily available is problematic, since it imposes self-inflicted limitations.

  • Only one callback can be registered at a time, since it's a simple variable (see the sketch after this list).
  • If an error occurs during the processing of the callback, it can halt execution of the thing that's calling the callback, even though the failure of the callback should not directly impact the execution path. For example, an error during onDisconnect in Worker will stop the "stop" event from ever going out, though technically all operations during the stop call could complete even if the callback had failed.
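
For the first point, a plain Node EventEmitter already lifts the single-slot limitation; a minimal illustration:

var EventEmitter = require('events').EventEmitter;

var worker = new EventEmitter();

// any number of listeners can observe the same lifecycle event,
// unlike a single onDisconnect callback variable
worker.on('disconnect', function () { console.log('metrics hook'); });
worker.on('disconnect', function () { console.log('cleanup hook'); });

worker.emit('disconnect');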

arguments possible issue

Hello, I'm the author of elenajs (https://github.com/elenajs/ and http://elenajs.org) and I've found your work very interesting.
I will use it to create my own version of MDP-02, but based on AMD and my framework (and dojo).

I've just seen these lines at Client.js#38:

this.socket.on('message', function() {
    var args = arguments;
    setImmediate(function() {
        self.onMsg.call(self, args);
    });
});

Why do you call setImmediate in this case? Is it really necessary? (Just asking :) )

Also, you probably have a performance issue here; please verify:
https://github.com/petkaantonov/bluebird/wiki/Optimization-killers#32-leaking-arguments
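
Per that page, the non-leaking variant would look something like this (copying arguments into a real array before it escapes into the closure):

this.socket.on('message', function () {
  // copy `arguments` element by element; passing or closing over the
  // `arguments` object itself prevents V8 from optimizing this function
  var args = new Array(arguments.length);
  for (var i = 0; i < arguments.length; i++) {
    args[i] = arguments[i];
  }
  setImmediate(function () {
    self.onMsg.call(self, args);
  });
});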

Cheers,
kiuma

Modular features documentation

Hello!

Referencing #46

You could add a section in the documentation for those kinds of modular features.
That would help Pigato grow, wouldn't it?
Thanks!

frontend-facing webservices: is Pigato a good solution?

So, I'm more at home with message structures for the backend, e.g. enriching documents based on CEP, CQRS-type stuff with Kafka, etc.

However, a client has currently asked me to look into restructuring a large legacy code-base, plus new functionality on top, into several webservices (8 for now). This needs to be exposed through a RESTful API to frontend clients.

I picture the following flow/user story. I would really appreciate your input on whether the thinking is correct and whether Pigato (+ ZeroMQ) would be the way to go:

  • there's an API layer which acts as a proxy/facade for the entire system.
  • a client may request a resource by talking to the API layer.
  • the API layer knows how to compose a response for each particular request.
  • for that, it needs to (often in parallel, sometimes sequentially) get sub-responses from other parts of the system, and combine them to form the response.
  • the API layer doesn't want to know which services in the system do what. In fact, it doesn't want to know which services exist at all.
  • therefore it puts its messages on a message bus/queue: 1 message for each sub-resource it wants to receive.
  • messages are passed using a req/resp protocol (instead of pub/sub) because the API layer (the requester) wants to know when all sub-responses have been received, so it can start its task of composing the final response (see the sketch after this list).
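
A fan-out/fan-in sketch of that last point with the pigato client (the service names are made up; a simple counter does the "all sub-responses received" bookkeeping):

var PIGATO = require('pigato');

var client = new PIGATO.Client('tcp://127.0.0.1:55555');
client.start();

var parts = ['userProfile', 'orderHistory', 'recommendations']; // made-up services
var replies = {};
var pending = parts.length;

parts.forEach(function (service) {
  client.request(service, { userId: 42 }, undefined, function (err, data) {
    replies[service] = err || data;
    // fan-in: compose the final response once every sub-response arrived
    if (--pending === 0) {
      console.log('composed response:', replies);
    }
  });
});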

This story may be typical for some, but it's pretty new to me.
Would you say the above story is correct in how a message bus fits in?
Am I correct in not wanting pub/sub, because I can't see how the API layer (the requester) would know when it has received all responses?

And lastly, would Pigato (+ ZeroMQ) offer us a good solution/abstraction for this?

P.S.: I'm looking into simple service buses instead of brokers, because part of the routing logic would end up in the routing table of said brokers. That, to me, causes friction with wanting to keep your microservices as loosely coupled as possible. Would you agree?

one class per file

In the broker there are the Controller class and the Cache class. Even though they are not used anywhere but the broker, I think we should separate them into their own files.

Client not closing

When calling stop on a client, it does not actually close the zmq socket (file handle).

This causes the process to run out of file handles even if the client is no longer in use.
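
A minimal sketch of the fix, assuming the client keeps its zmq socket at this.socket:

// sketch: Client.prototype.stop should release the underlying socket
Client.prototype.stop = function () {
  if (this.socket) {
    this.socket.close(); // frees the zmq socket and its file handle
    delete this.socket;
  }
};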

Workers disconnection handling

It very well might be that I'm not understanding the source code correctly. If so just point me in the right direction (source/tests).
Let's say I have two workers that can handle messages for the same service prefix. I assume that if one goes down, a message from a client should be routed to the other one. As far as I understand, we're doing heartbeating from router to dealer, so after 3 failed responses we just pop the worker's identity from our workers list. Do I understand correctly that until we remove this worker from the list, it is still the preferred worker? (That means we first need to wait until the router marks it as failed, and only then will we be able to make another request from the client.) Or am I missing something?
Thank you.
P.S. Really impressed with this project... was about to write my own abstraction for the zmq bindings, but then found this gem :)

No parallel queries for a worker?

Hi,

From what I understand from the code, if we want to send multiple queries to a worker, the queries will be done one after the other? Am I correct? I understand this from the call to serviceWorkerUnwait, and also from the use of self.lock, which seems even more restrictive (one request per service?).

If so, is there any good reason to justify that? This seems counter-intuitive to me, especially when using Node.js.

Workers communication strategy

Hello guys,

I have been using Pigato in some projects for a while now, but I still get confused about whether to allow workers to communicate with each other or even to use each other's services.

In some scenarios, I prefer not to allow sub-levels of workers, but in other scenarios this gives me a lot of duplicate code, which I solve by creating shared modules.

To illustrate a little better:
(screenshot attached: 2016-02-13 21:19:52)

So, what are your thoughts about it?
Does anyone know a good reference on this subject?

Tests do not properly test network scenarios

Changing the location definition in startStop.js to var location = 'tcp://0.0.0.0:2020'; causes 3 tests to fail, for example.

I have not made a similar change in the other tests, but I suspect they are equally susceptible.

Caching support

Hi all,

I've just pushed basic cache support.
Basically, a worker may tell the Broker to cache a reply for a specified number of milliseconds.
The Broker generates a SHA1 for each Client request and, if it finds an existing cache hit, it replies directly to the Client.

Caching should be improved. The Broker, for example, should search the request queue to see if a cached reply could be forwarded to other Clients.

Please test and let me know
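
A worker-side usage sketch (the res.opts.cache field name is illustrative; check the commit for the exact interface):

worker.on('request', function (inp, res) {
  // ask the Broker to cache this reply for 10 seconds (assumed knob)
  res.opts.cache = 10000;
  res.end('expensive result');
});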

concurrency is not respected on restart of worker

Hi,

Following scenario: imagine you have a client which pushes 10 jobs to the broker, and you have 2 workers with concurrency 1. Then you stop the workers and start them again, and they tend to take all the jobs from the broker without respecting the concurrency setting.

Worker concurrency setting

Hi,

I've noticed that even though I set concurrency to 1 on worker creation:

var worker = new pigato.Worker('127.0.0.1:5555', 'testName', {concurrency: 1});

when I send 10 requests at once, they all get handled simultaneously. I did a little research on the broker code and it does something like this:

_.each(['concurrency'], function(fld) {
  if (!_.isUndefined(opts[fld])) {
    worker[fld] = opts[fld];
  }
});

It assigns worker.concurrency a value from the worker config, but that property is never used: Broker.workerAvailable only checks worker.opts.concurrency.

if (worker.opts.concurrency === -1) {
  return true;
}

if (worker.rids.length < worker.opts.concurrency) {
  return true;
}

When I change worker[fld] = opts[fld]; to worker.opts[fld] = opts[fld];, things look a lot better and request handling works like I imagined it would :)
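
In other words, the fixed loop would read:

_.each(['concurrency'], function(fld) {
  if (!_.isUndefined(opts[fld])) {
    // write into worker.opts, which is what workerAvailable reads
    worker.opts[fld] = opts[fld];
  }
});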

Grenache

Hello all

Learning from the Pigato experience, I'm working on a lightweight and truly distributed solution that I'd like to present to you all:
https://github.com/bitfinexcom/grenache

My aim is to keep it simple but allow complex constructions and patterns without enforcing the protocol.

What do you think?

[Worker] hbtimer cleared asynchronously may lead to problems

Hi,

On the worker side, when handling a reconnection (after, for example, the liveness reaches 0), the hbtimer is not cleared immediately but later, in the stop method.

The stop method is in turn only invoked after conf.reconnect ms.

When conf.reconnect is greater than conf.heartbeat, this leads to bad behavior: the function attached to hbtimer will be called several times with the liveness at 0 or negative, so the start method will later be called several times, sending more 'W_DISCONNECT' and 'W_READY' messages than necessary, and probably resulting in old hbtimers never being cleared.

I haven't checked, but this might also appear with conf.reconnect equal to or lower than conf.heartbeat on a system under high load.

[Client onMsg] emitErr vs. doing nothing is not coherent

Hi,

When handling unknown messages in the client library, there are cases where we emit an error and cases where we do nothing. This should probably be made uniform.

Currently, we emit an error when:

  • we receive a message with a header different from MDP.CLIENT
  • we receive a message of length >= 3 whose type is neither MDP.W_HEARTBEAT, MDP.W_REPLY, nor MDP.W_REPLY_PARTIAL

We don't emit an error when:

  • we receive a message whose length is < 3 and whose type is different from MDP.W_HEARTBEAT
  • we receive a message whose requestId is unknown

I am not sure these should really be considered client-side errors; they are probably more useful for debugging than anything. What do you think the right behavior should be?

Usefulness of timeouts in tests?

Hi,

I am currently trying to remove/shorten the timeouts in the unit tests.
Is there any reason why you set them to such high values?

I sometimes get ZeroMQ (4.0.4+dfsg-2+deb.sury.org~precise+1) errors like:

  • zmq_assert (it != outpipes.end ());
  • Resource temporarily unavailable (signaler.cpp:236)
  • Assertion failed: ok (mailbox.cpp:82)
  • *** glibc detected *** node: double free or corruption (fasttop)Aborted

Random errors in tests

starfall@nx:~/workspace/pigato$ mocha


  Client
    ✓ connect to a zmq endpoint and call callback once heartbeat made round trip
    ✓ connect to a zmq endpoint and emit connect once heartbeat made round trip
    ✓ doesn't call callback if no heartbeat response
    ✓ emit an error if answer with bad header
    ✓ doesn't call callback if answer with bad id
    ✓ can do callback request with no partial
    ✓ can do stream request with no partial
    ✓ can do stream request with partial
    ✓ can do callback request with partial
Assertion failed: ok (mailbox.cpp:82)
Aborted
starfall@nx:~/workspace/pigato$ mocha


  Client
    ✓ connect to a zmq endpoint and call callback once heartbeat made round trip
    ✓ connect to a zmq endpoint and emit connect once heartbeat made round trip
    ✓ doesn't call callback if no heartbeat response
Assertion failed: ok (mailbox.cpp:82)
Aborted
starfall@nx:~/workspace/pigato$ mocha


  Client
    ✓ connect to a zmq endpoint and call callback once heartbeat made round trip
    ✓ connect to a zmq endpoint and emit connect once heartbeat made round trip
    ✓ doesn't call callback if no heartbeat response
    ✓ emit an error if answer with bad header
    ✓ doesn't call callback if answer with bad id
    ✓ can do callback request with no partial
    ✓ can do stream request with no partial
    ✓ can do stream request with partial
    ✓ can do callback request with partial
    ✓ emit an error if ERR_MSG_LENGTH we send a message to short
    ✓ emit an ERR_REQ_INVALID error if we send a reply to an invalid request
    ✓ emit an error if answer with bad type
    ✓ can send heartbeat from a request, and it will send the good requestId
    ✓ can send manual heartbeat for an unknown request
    when timeout exceeded with heartbeat short 
      ✓ emit an error when timeout exceeded (42ms)
    when timeout exceeded with heartbeat long 
      ✓ wait for the heartbeat to expire before the error is returned (60ms)
    when heartbeat is setted
      ✓ send heartbeat regularly (61ms)

  Worker
    ✓ connect to a zmq endpoint and call callback once ready made round trip
    ✓ connect to a zmq endpoint and emit 'connect' once ready made round trip
    ✓ emit 'connect' at reception of first ZMQ message (even if it is not a READY message)
    ✓ emit an error events when receiving a request with CLIENT as header
    ✓ emit request events with no data when receiving a request with nothing
    ✓ emit request events with no data when receiving a request with a JSON String
    ✓ emit request events with no data when receiving a request with an empty JSON object
    ✓ emit request events with no data when receiving a request with an complexe JSON object
    ✓ messages sended to another worker is not received/handled
    emit hearbeat regularly
      ✓ emit hearbeat regularly 
    For an request exchange
      ✓ response keep the same clientId
      ✓ empty is empty string 
      ✓ empty is empty string event when not empty was sended
      ✓ response keep the same requestId
      ✓ answer status 0 for a response with no problem
      ✓ string data are correctly sended
      ✓ object data are correctly sended
    when I set Concurency
      ✓ send the defined concurrency in the hearbeat message
      ✓ is not impacted by the current requests
    when I send more request in // than conf.concurency
      ✓ requests are still handled in //

  Worker Disconnection
    when stop is called when connected
      ✓ send Disconnect
    when Broker doesn't anwser to heartbeat
      1) send 3 Heartbeat messages then reconnect
      ✓ emit Disconnect when detecting it, after sending it (61ms)
Assertion failed: ok (mailbox.cpp:82)
Aborted
starfall@nx:~/workspace/pigato$ 

Environment:

  • pigato cabbc60 (latest master, as of issue creation)
  • Ubuntu 14.04
  • libzmq3 4.0.4+dfsg-2

Broker can't be totally cleaned up

Hi, when starting and then stopping a broker, the node.js process doesn't exit.
You can reproduce it with the following sample:

var zmq = require('zmq');

var bhost = "inproc://#titit";

var PIGATO = require('./');
var broker = new PIGATO.Broker(bhost);
broker.start(function() {
  console.log("started");
  broker.stop(function(err) {
    console.log("stopped", err);
  });
}, 200);
$ node testBroker.js
started
stopped undefined

This is because in the start method we create a setInterval but don't save the interval id, so we never clear it in the stop method. A fix sketch is below.
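
A minimal fix sketch (hbTimer and workersCheck are hypothetical placeholders for whatever the start method actually schedules):

Broker.prototype.start = function (cb) {
  // ... existing socket setup ...
  var self = this;
  // keep the interval id around so stop() can cancel it
  this.hbTimer = setInterval(function () {
    self.workersCheck(); // hypothetical name for the periodic task
  }, this.conf.heartbeat);
  if (cb) cb();
};

Broker.prototype.stop = function (cb) {
  clearInterval(this.hbTimer);
  // ... close sockets ...
  if (cb) cb();
};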

Multiples services in a Worker

Hi,

Currently, a Worker can deal with only one service.
I wish a node could easily provide several services without the overhead of running multiple Workers (the impact in terms of memory, open files (one new socket each), ...).

Would you be open to a contribution implementing this?

ZAP

I know adding ZAP support is on the roadmap and that zmq-zap is still in an unstable state, but I would like to take a crack at implementing it.
I have a basic PLAIN and NULL implementation working and have been using it in production.
I would like to know if you have any recommendations for the implementation. My current method is to configure the broker as the ZAP server by passing a zap object in the config (which also includes the zapHandler), and passing zap: true in the config of the client and worker; a sketch is below.
If you have any better suggestions, please let me know.
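
Roughly, the configuration shape I'm describing (option names are from my working branch, not upstream pigato; zapHandler is whatever function validates credentials):

var PIGATO = require('pigato');

// the broker acts as the ZAP server, driven by a user-supplied handler
var broker = new PIGATO.Broker('tcp://*:55555', {
  zap: { handler: zapHandler }
});

// client and worker simply opt in
var client = new PIGATO.Client('tcp://127.0.0.1:55555', { zap: true });
var worker = new PIGATO.Worker('tcp://127.0.0.1:55555', 'svc', { zap: true });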
