
live-mutex's Introduction



Live-Mutex / LMX 🔒 + 🔓

Disclaimer

Tested on *nix and macOS (it will probably work on Windows, but has not been tested there).
Tested and proven on Node.js versions >= 8.0.0.

Simple Working Examples:

See: https://github.com/ORESoftware/live-mutex-examples

Installation

For usage with Node.js libraries:

$ npm i live-mutex

For command line tools:

$ npm i -g live-mutex

Docker image for the broker:
  docker pull 'oresoftware/live-mutex-broker:0.2.24'
  docker run --rm -d -p 6970:6970 --name lmx-broker 'oresoftware/live-mutex-broker:0.2.24'  
  docker logs -f lmx-broker

About

  • Written in TypeScript for maintainability and ease of use.
  • Live-Mutex is a non-distributed networked mutex/semaphore for synchronization across multiple processes/threads.
  • Non-distributed means no failover if the broker goes down, but the upside is higher-performance.
  • By default it is a binary semaphore, but it can be configured as a non-binary (counting) semaphore, where multiple lockholders can hold a lock for a given key at once, for example to implement rate limiting.
  • Live-Mutex can use either TCP or Unix Domain Sockets (UDS) to create an evented (non-polling) networked mutex API.
  • Live-Mutex is significantly (orders of magnitude) more performant than Lockfile and Warlock for high-concurrency locking requests.
  • When Warlock and Lockfile are not finely/expertly tuned, a 5x performance advantage becomes more like 30x or 40x.
  • Live-Mutex should also be much less memory- and CPU-intensive than Lockfile and Warlock, because Live-Mutex is fully evented, whereas Lockfile and Warlock poll by nature.

This library is ideal for use cases where a more robust distributed locking mechanism is out-of-reach or otherwise inconvenient. You can easily Dockerize the Live-Mutex broker using: https://github.com/ORESoftware/dockerize-lmx-broker


On a single machine, use Unix Domain Sockets for max performance. On a network, use TCP. To use UDS, pass in "udsPath" to the client and broker constructors. Otherwise for TCP, pass a host/port combo to both.
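
For example, here is a minimal sketch of both transports (the socket path and port below are just placeholders):

const {Client, Broker} = require('live-mutex');

// TCP: pass the same host/port to the broker and to each client
const tcpBroker = new Broker({host: 'localhost', port: 6970});
const tcpClient = new Client({host: 'localhost', port: 6970});

// UDS (single machine only): pass the same udsPath to the broker and to each client
const udsBroker = new Broker({udsPath: '/tmp/lmx.sock'});
const udsClient = new Client({udsPath: '/tmp/lmx.sock'});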


Basic Metrics

On Linux/Ubuntu, if we feed live-mutex 10,000 lock requests, 20 concurrently, LMX can go through all 10,000 lock/unlock cycles in less than 2 seconds, which means at least 5 lock/unlock cycles per millisecond. That's with TCP. Using Unix Domain Sockets (for use on a single machine), LMX can reach at least 8.5 lock/unlock cycles per millisecond, about 30% more performant than TCP.


Rationale

I used a couple of other libraries and found that they required manual retry logic and used polling under the hood to acquire locks. They were difficult to fine-tune and extremely slow under high lock-request concurrency.
Other libraries are stuck with polling for a simple reason: the filesystem is dumb, and so is Redis (unless you write Lua scripts that run on the Redis server - I don't know of any libraries that do that).


If we create an intelligent broker that can enqueue locking requests, then we can create something that's both more performant and more developer friendly. Enter live-mutex.

For more detail, see: docs/detailed-explanation.md and docs/about.md



Simple Example

Locking down a particular route in an Express server:

import express from 'express';
import {LMXClient} from 'live-mutex';

const client = new LMXClient();
const app = express();

// note: the client should be connected (via client.ensure() or connect()) before handling requests

app.use((req, res, next) => {

  if (req.url !== '/xyz') {
    return next();
  }

  // the lock will be automatically unlocked by the broker after 8 seconds
  client.lock('foo', {ttl: 8000, retries: 2}, (err, unlock) => {

    if (err) {
      return next(err);
    }

    res.once('finish', () => {
      unlock();
    });

    next();

  });

});

Basic Usage and Best Practices

The Live-Mutex API is completely asynchronous and requires asynchronous initialization of both the client and broker instances. It should be apparent by now that this library requires a Node.js process to run a server, and that server stores the locking info as a single source of truth. The broker can live inside one of your existing Node.js processes, or, more likely, be launched separately. In other words, a live-mutex client can also be the broker - there is nothing wrong with that. For any given key there should be only one broker. For absolute speed, you could use separate brokers (in separate Node.js processes) for separate keys, but that's rarely necessary. Unix Domain Sockets are about 10-50% faster than TCP, depending on how well-tuned TCP is on your system.

Things to keep in mind:

  1. You need to initialize a broker before connecting any clients, otherwise your clients will pass back an error upon calling connect().
  2. You need to call ensure()/connect() on a client or use the asynchronous callback passed to the constructor, before calling client.lock() or client.unlock().
  3. Live-Mutex clients and brokers are not event emitters.
    The two classes wrap Node.js sockets, but the socket connections are not exposed.
  4. To use TCP and host/port use {port: <number>, host: <string>}, to use Unix Domain Sockets, use {udsPath: <absoluteFilePath>}.
  5. If there is an error or Promise rejection, the lock was not acquired, otherwise the lock was acquired. This is nicer than other libraries that ask that you check the type of the second argument, instead of just checking for the presence of an error.
  6. The same process that is a client can also be a broker. Live-Mutex is designed for this. You probably only need one broker per host, even if you use multiple keys, but you can always run more than one broker per host on different ports. Obviously, it would not work to use multiple brokers for the same key - that is the one thing you should not do. (See the sketch just after this list.)
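
As noted in item 6, a minimal sketch of one process acting as both broker and client (assuming the default port) might look like this:

const {Broker, Client} = require('live-mutex');

const broker = new Broker({port: 6970});
const client = new Client({port: 6970});

// start the broker first, then connect the client, then lock/unlock
broker.ensure()
  .then(() => client.ensure())
  .then(c => c.acquire('foo'))
  .then(({key, id}) => client.release(key, id))
  .catch(err => console.error('lmx error:', err));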

Client Examples

Using shell / command line:

Example

(First, make sure you install the library as a global package with NPM.) The real power of this library comes from usage with Node.js, but we can use this functionality at the command line too:

#  in shell 1, we launch a live-mutex server/broker
$ lmx start            # 6970 is the default port


#  in shell 2, we acquire/release locks on key "foo"
$ lmx acquire foo      # acquires the lock on key "foo"
$ lmx release foo      # releases the lock on key "foo"

To set a port / host / uds-path in the current shell, use

$ lmx set host localhost
$ lmx set port 6982
$ lmx set uds_path "$PWD/zoom"

If uds_path is set, it overrides host/port. Use $ lmx set <name> <value> to change settings. You can elect to use these environment variables in Node.js by passing {env: true} in your Node.js code.
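
For example, a small sketch of picking those shell-set values up in Node.js (this assumes you have already run the lmx set commands above in the same shell):

const {Client} = require('live-mutex');

// with {env: true}, host/port/uds_path are read from process.env,
// as set by the `lmx set ...` commands above
const client = new Client({env: true});

client.ensure().then(c => {
  console.log('connected to the broker using env-based settings');
});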


Using Node.js

Importing the library using Node.js

// alternatively you can import all of these directly
import {Client, Broker} from 'live-mutex';

// aliases of the above;
import {LMXClient, LMXBroker} from 'live-mutex';

Simple example

To see a complete and simple example of using a broker and client in the same process, see: => docs/examples/simple.md


A note on default behavior

By default, a lock request will retry 3 times, on an interval defined by opts.lockRequestTimeout, which defaults to 3 seconds. That means a lock request may fail with a timeout error after about 9 seconds. To use zero retries, pass either {retry: false} or {maxRetries: 0}.

There is a built-in retry mechanism for locking requests. For unlock requests, on the other hand, there is no built-in retry functionality. If you absolutely need an unlock request to succeed, use opts.force = true. Otherwise, implement your own retry mechanism for unlocking. If you want the library to implement automatic retries for unlocking, please file a ticket.

As explained in a later section, by default this library uses binary semaphores, which means only one lockholder per key at a time. If you want more than one lockholder to be able to hold the lock for a certain key at a time, use {max: x}, where x is an integer greater than 1.
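
Putting these options together, a short sketch (the values are only illustrative):

const {Client} = require('live-mutex');

// zero retries: fail fast if the lock cannot be acquired within lockRequestTimeout
const client = new Client({port: 6970, maxRetries: 0, lockRequestTimeout: 3000, ttl: 8000});

client.ensure().then(c => {
  return c.acquire('some-key').then(({key, id}) => {
    // critical section; the broker releases the lock after 8 seconds if we never do
    return c.release(key, id);
  });
});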


Using the library with Promises (recommended usage)

Example
const opts = {port: '<port>', host: '<host>'};
// this example assumes a broker is already running on the given host/port

const client = new Client(opts);

// calling ensure before each critical section means that we ensure we have a connected client
// for shorter lived applications, calling ensure more than once is not as important

return client.ensure().then(c =>  {   // (c is the same object as client)
 return c.acquire('<key>').then(({key,id}) => {
    return c.release('<key>', id);
 });
});

Using async/await

Example
    const async = require('async');   // the 'async' utility library, for timesLimit
    const {Client} = require('live-mutex');
    
    // assumes a broker is already running, and that this code runs inside an async function
    const c = await new Client({port: 6970}).ensure();
    
    const times = 10000;
    const start = Date.now();
    
    async.timesLimit(times, 25, async n => {
      
      const {id, key} = await c.acquire('foo');
      // do your thing here
      return await c.release(key, id);  // or just return without await, since await is redundant in a return statement
      
    }, err => {
      
      if (err) {
        throw err;
      }
      
      const diff = Date.now() - start;
      console.log('Time required for live-mutex:', diff);
      console.log('Lock/unlock cycles per millisecond:', Number(times / diff).toFixed(3));
      process.exit(0);
      
    });

Using vanilla callbacks (higher performance + easy to use convenience unlock function)

client.ensure(err => {
   client.lock('<key>', (err, unlock) => {
       unlock(err => {  // unlock is a convenience function, bound to the correct key + request uuid

       });
   });
});

If you want the key and request id, use:

client.ensure(err => {
   client.lock('<key>', (err, {id, key}) => {
       client.unlock(key, id, err => {

           // note that if we don't use the unlock convenience callback,
           // that we should definitely pass the id of the original request.
           // this is for safety - we only want to unlock the corresponding lock,
           // which is defined not just by the right key, but also the right request id.

       });
   });
});

Note: using the id ensures that the unlock call corresponds to the original lock call. Otherwise, your program could call unlock() for a key that was not supposed to be unlocked by the current call.


Using the unlock convenience callback with promises:

We use a utility method on Client to promisify and run the unlock convenience callback.

 return client.ensure().then(c =>  {   // (c is the same object as client)
    return c.acquire('<key>').then(({unlock}) => {
        return c.execUnlock(unlock);
     });
 });

As you can see, before any client.lock() call, we call client.ensure(). This is not required, but it is a best practice.
client.ensure() only needs to be called once, before any subsequent client.lock() call. However, the benefit of calling it every time is that it allows a new connection to be made if the existing one is in a bad state.

Any locking errors will mostly be due to the failure to acquire a lock before timing out, and should very rarely happen if you understand your system and provide good settings/options to live-mutex.

Unlocking errors should be very rare, and most likely will happen if the process running the broker goes down or is overwhelmed. You can simply log unlocking errors, and otherwise ignore them.


You must use the lock id, or {force:true} to reliably unlock

You must either pass the lock id, or use force, to unlock a lock:

works:

 return client.ensure().then(c =>  {   // (c is the same object as client)
    return c.acquire('<key>').then(({key,id}) => {
        return c.release(key, id);
     });
 });

works:

 return client.ensure().then(c =>  {   // (c is the same object as client)
    return c.acquire('<key>').then(({key,id}) => {
        return c.release(key, {force:true});
     });
 });

will not work:

 return client.ensure().then(c =>  {   // (c is the same object as client)
    return c.acquire('<key>').then(({key,id}) => {
        return c.release(key);
     });
 });

To be clear, the lock id is unique to each lock acquisition (each critical section).

Although using the lock id is preferred, {force: true} is acceptable, and necessary if you need to unlock from a different process, where you won't easily have access to the lock id.
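
For example, a hedged sketch of force-unlocking from a process that never acquired the lock (use sparingly, since it bypasses the id check):

const {Client} = require('live-mutex');

const client = new Client({port: 6970});

client.ensure().then(c => {
  // we don't have the lock id in this process, so we force the unlock
  return c.release('<key>', {force: true});
});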


Client constructor and client.lock() method options

lock() method options

There are some important options. Most options can be passed to the client constructor instead of the client lock method, which is more convenient and performant:
const c = new Client({port: 3999, ttl: 11000, lockRequestTimeout: 2000, maxRetries: 5});

c.ensure().then(c => {
    // lock will retry a maximum of 5 times, with 2 seconds between each retry
   return c.acquire(key);
})
.then(({key, id, unlock}) => {

   // we have acquired a lock on the key, if we don't release the lock after 11 seconds
   // it will be unlocked for us.

   // note that if we want to use the unlock convenience function, it's available here

   // runUnlock/execUnlock will return a promise, and execute the unlock convenience function for us
   return c.execUnlock(unlock);
});

The current default values for constructor options:

  • env => false. If you set env to true, the Node.js lib will default to the settings from process.env (set when you called e.g. $ lmx set port 5000).
  • port => 6970
  • host => localhost
  • ttl => 4000ms. If 4000ms elapses and the lock still exists, the lock will be automatically released by the broker.
  • maxRetries => 3. A lock request will be sent to the broker 3 times before an error is called back.
  • lockRequestTimeout => 3000ms. For each lock request, it will timeout after 3 seconds. Upon timeout, it will retry until maxRetries is reached.
  • keepLocksOnExit => false. If true, locks will not be deleted if a connection is closed.
  • noDelay => true. If true, the TCP_NODELAY setting is used (this option applies to both the broker constructor and the client constructor).

As already stated, unless you are using different options for different lock requests from the same client, simply pass these options to the client constructor; that way you can avoid passing an options object on each client.lock/unlock call.


Usage with Promises and RxJS5 Observables:

This library consciously uses a CPS interface, as this is the most primitive and performant async interface. You can always wrap client.lock and client.unlock to use Promises or Observables etc. In the docs directory, I've demonstrated how to use live-mutex with ES6 Promises and RxJS5 Observables. Releasing the lock can be implemented with (1) the unlock() convenience callback or with (2) both the lockName and the uuid of the lock request.

With regard to the Observables implementation, notice that we just pass errors to sub.next() instead of sub.error(), but that's just a design decision.
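
As a rough sketch, a wrapper along those lines might look like this (errors are delivered through next(), matching the design decision above; adjust for your RxJS version):

const {Observable} = require('rxjs');

const acquireLock = (client, key) => {
  return new Observable(sub => {
    client.lock(key, (err, unlock) => {
      // pass errors through next() rather than error(), per the note above
      sub.next({err, unlock});
      sub.complete();
    });
  });
};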

Usage with Promises:

see: docs/examples/promises.md

Usage with RxJS5 Observables

see: docs/examples/observables.md


Non-binary mutex/semaphore

By default, only one lockholder can hold a lock at any moment, and that means {max:1}. To change a particular key to allow more than one lockholder, use {max:x}, like so:

c.lock('<key>', {max:12}, (err,val) => {
   // using the max option like so, now as many as 12 lockholders can hold the lock for key '<key>'
});

Non-binary semaphores are well-supported by live-mutex and are a primary feature.
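
With the promise API, the same option can be passed per call - a sketch, assuming acquire() accepts max just like lock() does:

client.ensure().then(c => {
  return c.acquire('<key>', {max: 12}).then(({key, id}) => {
    // up to 12 holders may be inside this critical section at once
    return c.release(key, id);
  });
});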


Live-Mutex utils

Use lmx utils to your advantage

To launch a broker process using Node.js:
const {lmUtils} = require('live-mutex');

lmUtils.conditionallyLaunchSocketServer(opts, function(err){

    if(err) throw err;

    // either this process now owns the broker, or it's already running in a different process
    // either way, we are good to go
    // you don't need to use this utility method, you can easily write your own

});

To see examples of launching a broker using Node.js code, see:

src/lm-start-server.ts
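
If you prefer not to use the utility, a minimal sketch of launching a broker directly (the port shown is the default):

const {Broker} = require('live-mutex');

new Broker({port: 6970}).ensure().then(b => {
  console.log('lmx broker is now listening');
}).catch(err => {
  // most likely EADDRINUSE if a broker is already listening on this port
  console.error(err);
});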

To check if there is already a broker running on your system on the desired port, you can use a TCP ping utility to see if a server is already listening. I have had a lot of luck with the tcp-ping package, like so:

const ping = require('tcp-ping');

ping.probe(host, port, function (err, available) {

    if (err) {
        // handle it
    }
    else if (available) {
        // tcp server is already listening on the given host/port
    }
    else {
       // nothing is listening so you should launch a new server/broker, as stated above
       // the broker can run in the same process as a client, or a separate process, either way
    }
});

Live-Mutex supports Node.js-core domains

To see more, see: docs/examples/domains.md


Creating a simple client pool

Example

In most cases, a single client is sufficient; this is true of many types of networked clients that use async I/O. You almost certainly do not need more than one client. However, if you run some empirical tests and find that a client pool is beneficial/faster, try this:
const {Client} = require('live-mutex');

exports.createPool = function(opts){
  
  return Promise.all([
     new Client(opts).connect(),
     new Client(opts).connect(),
     new Client(opts).connect(),
     new Client(opts).connect()
  ]);
}
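
One simple way to use such a pool is to rotate through the clients round-robin (a sketch, assuming the createPool function above):

// returns a function that hands out clients from the pool in round-robin order
exports.makeRoundRobin = (pool) => {
  let i = 0;
  return () => pool[i++ % pool.length];
};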

User notes

  • If the major or minor version differs between the client and broker, an error will be thrown in the client process.

Testing

Look at test/readme.md


Using Docker + Unix Domain Sockets

In short, you almost certainly can't do this, because Unix domain sockets cannot generally be shared between the host and a container. If your client is running on the host machine (or in another container) while your broker is running in a container, it will likely not work - but that's OK, since you can just use TCP/ports.

Example of an attempt

You almost certainly don't want to do this, as UDS is for one machine only, and this technique only works on Linux; it does not work on macOS (sharing sockets between host and container).

When running on a single machine, here's how you would use UDS with Docker:

my_sock="$(pwd)/foo/uds.sock";
rm -f "$my_sock"
docker run -d -v "$(pwd)/foo":/uds 'oresoftware/live-mutex-broker:latest' --use-uds

The above passes the --use-uds boolean flag to the launch process, which tells the broker to use UDS instead of listening on a port. The -v option allows the host and container to share a portion of the filesystem. You should delete the socket file before starting the container, in case it already exists. '/uds/uds.sock' is the path in the container that points to the socket file; it's a hardcoded, fixed path.

When connecting to the broker with the Node.js client, you would use:

 const client = new Client({udsPath: 'foo/uds.sock'});


live-mutex's Issues

Broker: Memory leak when sockets never disconnect

Describe the bug

This map:

this.wsToUUIDs = new Map(); // keys are ws objects, values are lock key maps {uuid: true}

is cleaned only on socket disconnect, end or error events.
If a socket is never closed, that map grows indefinitely.
This could happen, for example, when using Unix Domain Sockets in long-lived applications where there is no reason to disconnect clients from the broker, causing a massive memory leak over time.

I noticed that the Broker never actively uses the "wsToUUIDs" map besides initializing values, so I don't see the point of having it at all.

I have temporarily patched my application by commenting out the line that populates the map:

this.wsToUUIDs.get(ws)[uuid] = true;

and ran my own tests after that, with a positive outcome.
This seems to confirm that the "wsToUUIDs" map is not really useful at this stage.

To Reproduce
Steps to reproduce the behavior:

  1. Set up a Broker using a Unix Domain Socket
  2. Set up a Client using a Unix Domain Socket
  3. Launch a NodeJS application that performs lots of locks and unlocks (using flag --inspect)
  4. Inspect Memory Heap usage evolution in time using Chrome tools (connect to ws exposed by NodeJS).
  5. Note that "wsToUUIDs" map is growing in size and its values are never collected by V8 Garbage Collector

Expected behavior
Please consider either removing that Map from code or implementing a better strategy for cleaning it up, to avoid Memory Leaks on applications using clients that never close connections.

Add option for IPC?

Can you add an option for Unix Domain Sockets / Windows IPC? I feel that using TCP will introduce intolerable latency due to TCP overhead when communicating between local processes.

use in cluster server

If I have a cluster server, where is the best place to create and initialize the client object?
I think the only place is the master, not the workers,
because if one worker initializes its client, the listening TCP port becomes an "address in use" port.

Some questions

Thanks for this great library!

  1. [live-mutex client] recommends you attach a process.on('error') event handler.
    How do I do so after lmUtils.conditionallyLaunchSocketServerp?

  2. How do I reference the broker after lmUtils.conditionallyLaunchSocketServerp? Does the returned promise resolve to the broker?

  3. Is there any way to shut down the broker? I need this to end the mocha test.

  4. What is the difference between the following?
    launchSocketServer (error if another process already launched the server?)
    conditionallyLaunchSocketServer (no error if another process already launched the server?)
    launchBrokerInChildProcess (what exactly does "child process" mean here? another node process? when to use this?)

  5. suppose we have 4 processes (A, B, C, D) and A is holding the broker. If A crashes, where should we create the broker again?

[feature] ttl per lock

Is your feature request related to a problem? Please describe.

I have different locks with different TTL values. Currently, live-mutex only allows setting ttl at the connection level, not per lock. Is there a reason for that?

Describe the solution you'd like

this.client.acquire('my-id', {ttl: 5000}) means it lives max 5 seconds, if not extended (which isn't possible either at the moment afaik).

Describe alternatives you've considered
The alternative is to create a new connection for keys with a different ttl. Not very practical.

Additional context

I'm writing software with a lot of locks involved. Most of the time, the TTL differs heavily. Also, most of the time I have long-running commands that automatically extend/prolong the acquired lock, which isn't possible since the ttl is per connection.
I'd really like to try out this wonderful project, but can't since those fundamentals are currently missing.

Also the generated client.d.ts is not correct.

For example:
acquire(key: string, opts?: Partial<LMXClientLockOpts>): any;

But LMXClientLockOpts is empty (LMXClientUnlockOpts as well)

export interface LMXClientLockOpts {
}

implement tls

using the net package; should allow users to use the tls package

Don't need semicolons

@ORESoftware Just a reminder: with NodeJS/Javascript you do not need semicolons. I made sure my entire platform went semicolon-free a year ago; it's just wonderful. It reduces file size, you code faster, the code looks cleaner, and it's easier to remember not to put a semicolon than to put one.

Just my two cents. :-)

{ TypeError: assert.strict is not a function

lmx broker error: Uncaught Exception event occurred in Broker process: { TypeError: assert.strict is not a function
    at new Broker (/usr/local/bin/proxy/node_modules/live-mutex/dist/broker.js:65:16)
    at Object.<anonymous> (/usr/local/bin/proxy/app/index.js:17:16)
    at Module._compile (module.js:652:30)
    at Object.Module._extensions..js (module.js:663:10)
    at Module.load (module.js:565:32)
    at tryModuleLoad (module.js:505:12)
    at Function.Module._load (module.js:497:3)
    at Module.require (module.js:596:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (/usr/local/bin/proxy/bin/www:4:26)
  [stack]: 'TypeError: assert.strict is not a function\n    at new Broker (/usr/local/bin/proxy/node_modules/live-mutex/dist/broker.js:65:16)\n    at Object.<anonymous> (/usr/local/bin/proxy/app/index.js:17:16)\n    at Module._compile (module.js:652:30)\n    at Object.Module._extensions..js (module.js:663:10)\n    at Module.load (module.js:565:32)\n    at tryModuleLoad (module.js:505:12)\n    at Function.Module._load (module.js:497:3)\n    at Module.require (module.js:596:17)\n    at require (internal/module.js:11:18)\n    at Object.<anonymous> (/usr/local/bin/proxy/bin/www:4:26)',
  [message]: 'assert.strict is not a function' }

if a client already has a lock on a key

If a certain client already has a lock on a key, e.g.:

client.lock('foo', ...);
client.lock('foo', ...);

then why send it to the broker? Just queue it in the client until the client hears back from the broker.

Or perhaps queue things only after 5 requests have already been sent out to the broker.

Can't unlock a key

The lib is failing in the most basic way for me. All I'm trying to achieve is lock a route on my express app so that only one person can use it at a time.

This is how I'm creating the Broker/Client:

const client = new Client();
client.ensure((err, c) => {
    c.emitter.on('warning', function () {
        console.error(...arguments);
    });
}).catch((err) => {
    console.error('(MUTEX CLIENT): ', err);
});

const broker = new Broker();
broker.ensure((err, b) => {
    b.emitter.on('warning', function () {
        console.error(...arguments);
    });
}).catch((err) => {
    console.error('(MUTEX BROKER): ', err);
});

and this is how I'm locking/unlocking a key:

app.get('/answer', function (req, res) {
client.ensure().then(c => {
        return c.lockp('status');   // c.acquire and c.acquireLock are aliases to c.lockp
    })
    .then(({id, key}) => {
        console.log('Lock acquired key: ', key, '\tid:', id);
        return *some work and more promises*
    })
    .then(() => {
        return client.ensure();
    })
    .then(c => {
        console.log('Releasing key: ', 'status');
        return c.unlockp('status');   // c.release and c.releaseLock are aliases to c.unlockp
    })
});

This only works the first time I'm calling the route, after that each call results in an error:

retrying lock request for key 'status', on host:port 'localhost:6970', attempt # 1
retrying lock request for key 'status', on host:port 'localhost:6970', attempt # 2
retrying lock request for key 'status', on host:port 'localhost:6970', attempt # 3
Error:  Live-Mutex client lock request timed out after 9000 ms, 3 retries attempted to acquire lock for key status.

I'm working on a Raspberry Pi and the work consists of some GPIO pins turning on and off, plus some delays. Also, I've clearly waited more than 4000ms and the Broker didn't release the lock.

write-preferring-rw-lock does not work with nested read locks.

Hi, just a comment about write-preferring-rw-lock.md,

The logic fails if you permit nested read locks.

e.g. if thread A goes:
(1) acquires read lock
(2) acquires read lock (nested)
(3) release read lock
(4) release read lock

If at (1.5) thread B tries to acquire the write lock, then both thread A and thread B hang forever.

Unfortunately I have no suggestion on how to modify the algorithm so that it works with nested read locks.

https://github.com/ORESoftware/live-mutex/blob/dev/docs/write-preferring-rw-lock.md

Broker on different instance than Client

I was trying to create a separate instance for both the Broker and the Client - the Broker being in, let's say, an EC2 instance, and the Client being part of a monolith application. Now, looking through the code, it seems that you don't use the host config in the Client constructor to point to the Broker (or I'm just missing something trivial). Can you point me in the right direction? I keep getting ECONNREFUSED => 127.0.0.1:6970 even though I specified the host and the port of the Broker in the Client opts.

Edit:
Here is the line that I'm talking about.

Client: cannot set no retries or zero retries

Hello,

The current implementation doesn't really allow using zero retries.
I saw that "opts.retry" is assigned to a const that is never used, so it has no effect.

const noRetry = opts.retry === false;

Also, using this kind of assignment:

this.lockRetryMax = opts.lockRetryMax || opts.maxRetries || opts.retryMax || 3;

and this:

const maxRetries = opts.maxRetry || opts.maxRetries || this.lockRetryMax;

results in using 3 as the retries value, because "0 || <positive integer>" always returns the passed integer.
I suggest using the following statements instead, or something similar that suits your style best:

this.lockRetryMax = opts.lockRetryMax || opts.maxRetries || opts.retryMax;
this.lockRetryMax = this.lockRetryMax >= 0 ? this.lockRetryMax : 3;

and

const tempRetries = opts.maxRetry || opts.maxRetries;
const maxRetries = tempRetries >= 0 ? tempRetries : this.lockRetryMax;  

I am aware that this solution is not as succinct as the original solution, but it allows using zero retries on Client constructor and on acquire calls.
Or you could simply implement behaviour for "opts.retry = false" and avoid doing those changes.

Thanks

timeout not taking effect?

const broker = new LMXBroker();
const mutex = new LMXClient({
    lockRequestTimeout: 10000
});

const mutexKey = '123';

// Connect to mutex broker
await Promise.all([broker.ensure(), mutex.ensure()]).catch(error => {
    // This is where the error happens
    log.debug(`Couldn't connect to broker.`, error);
});

// Lock mutex
const lock = await mutex.acquireLock(mutexKey, {
    lockRequestTimeout: 10000
}).catch(error => {
    log.debug(`Couldn't acquire lock for "${mutexKey}".`, error);
});

I still get this error though. Did I miss an option somewhere?

Couldn't connect to broker. lmx client err: client connection timeout after 3000ms.

new Client().ensure() should have alternate API

We have this currently:

     c.ensure().then(function () {
            c.lock('z', function (err) {
                if (err) return t(err);
                c.unlock('z', t);
            });
        });

but we should also be able to support

     c.ensure(function (c) {
            c.lock('z', function (err) {
                if (err) return t(err);
                c.unlock('z', t);
            });
        }, function(err){

       });

Mutex between processes

Can this be used as a mutex between two processes that both want access to, let's say, resource X? I can implement what I need with lockfile, but I'm also interested to see whether I can find a working solution with your live-mutex implementation.

From your readme "Live-Mutex is a non-distributed mutex for synchronization across multiple processes/threads"

Can you provide an example for that?

API change recommendation

You don't need 2 methods for lock with a cb and a promise. You can check whether the cb is present and, if not, return a promise. This is a little more error-prone, but makes the API simpler. For example, it is used here: https://github.com/jprichardson/node-fs-extra Another idea is to use util.promisify if you keep the current API. https://nodejs.org/api/util.html#util_util_promisify_original And of course it would be nice to see async/await examples with these promises...

[feature] strict mode & API

Currently, the documentation is only for non-strict mode in Typescript.

For example

client.ensure(err => {
   client.lock('<key>', (err, unlock) => {
       unlock(err => {  // unlock is a convenience function, bound to the correct key + request uuid

       });
   });
});

results in

(screenshot of the resulting TypeScript strict-mode errors)

API

Also, I have to say the API is not very good. There is a lot of any and a lot of methods that appear to do the same thing, yet the most common use cases require your own wrapper around the Client class. The API of Client is:

    requestLockInfo(key: string, opts?: any, cb?: EVCb<any>): void;
    lockp(key: string, opts?: Partial<LMXClientLockOpts>): Promise<LMLockSuccessData>;
    unlockp(key: string, opts?: Partial<LMXClientUnlockOpts>): Promise<LMUnlockSuccessData>;
    acquire(key: string, opts?: Partial<LMXClientLockOpts>): any;
    release(key: string, opts?: Partial<LMXClientUnlockOpts>): any;
    acquireLock(key: string, opts?: Partial<LMXClientLockOpts>): any;
    releaseLock(key: string, opts?: Partial<LMXClientUnlockOpts>): any;
    run(fn: LMLockSuccessData): Promise<unknown>;
    runUnlock(fn: LMLockSuccessData): Promise<any>;
    execUnlock(fn: LMLockSuccessData): Promise<any>;
    protected cleanUp(uuid: string): void;
    protected fireUnlockCallbackWithError(cb: LMClientUnlockCallBack, err: LMXClientUnlockException): void;
    protected fireLockCallbackWithError(cb: LMClientLockCallBack, err: LMXClientLockException): void;
    protected fireCallbackWithError(cb: EVCb<any>, err: LMXClientException): void;
    ls(cb: EVCb<any>): void;
    ls(opts: any, cb?: EVCb<any>): void;
    parseLockOpts(key: string, opts: any, cb?: any): [string, any, LMClientLockCallBack];
    parseUnlockOpts(key: string, opts?: any, cb?: any): [string, any, LMClientUnlockCallBack];
    _simulateVersionMismatch(): void;
    _invokeBrokerSideEndCall(): void;
    _invokeBrokerSideDestroyCall(): void;
    _makeClientSideError(): void;
    lock(key: string, cb: LMClientLockCallBack): void;
    lock(key: string, opts: any, cb: LMClientLockCallBack): void;
    on(): any;
    once(): any;
    private lockInternal;
    noop(err?: any): void;
    getPort(): number;
    getHost(): string;
    unlock(key: string): void;
    unlock(key: string, opts: any): void;
    unlock(key: string, opts: any, cb: LMClientUnlockCallBack): void;

With a lot of any, which is not best practice. I also don't know what to use to acquire a lock; there are multiple plausible definitions:

lockp(key: string, opts?: Partial<LMXClientLockOpts>): Promise<LMLockSuccessData>;
acquire(key: string, opts?: Partial<LMXClientLockOpts>): any;
acquireLock(key: string, opts?: Partial<LMXClientLockOpts>): any;
lock(key: string, cb: LMClientLockCallBack): void;

Only one appears to work with await, but I wonder why define an additional method for a Promise result type? lock could do both: accept a cb and return a Promise. I also don't know what to use now, lock or acquire?
Same with release/unlock. Since there's no docblock above the methods, it's not very clear which methods to use.

common use-case

I actually find an API like that more valuable:

const lock = await this.client.lock('key');
try {
    //do stuff
} finally {
    lock.release();
}

I would also find it useful to define a timeout for .lock(), as I often have the use case of checking whether a key is currently locked (either without locking it, or unlocking it immediately). Like so:

//either a built-in method
const exists: boolean = await this.client.isLocked('key');

//or the verbose way
async function isLocked(key: string): Promise<boolean> {
    try {
        const lock = await this.client.lockp('key', {timeout: 0});
        await lock.unlock();
        return true;
    } catch (error) {
        return false;
    }
}

error in "npm i live-mutex"

Getting the following error on Windows 10 after I run "npm i live-mutex".

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: ./assets/postinstall.sh
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\GS-1567\AppData\Roaming\npm-cache_logs\2018-11-12T10_08_06_121Z-debug.log

Windows

Any news about this lib running on Windows?

EADDRINUSE: address already in use udsPath

I had a server started with /tmp/lmx.sock as the udsPath and it died unexpectedly (I killed it). When the server came back up, it wouldn't reconnect.

Maybe add a way to have the server recover from this? You could have an option to enable this.
Catch EADDRINUSE, delete the socket file, and retry. If all is good, no worries; otherwise throw the error as normal. That way permission errors, etc. still get propagated, but this doesn't have to be solved by the user.

Couldn't connect to broker. Error: listen EADDRINUSE: address already in use /tmp/lmx.sock
    at Server.setupListenHandle [as _listen2] (net.js:1211:19)
    at listenInCluster (net.js:1276:12)
    at Server.listen (net.js:1375:5)
    at /Users/xo/code/_/proxy/node_modules/live-mutex/dist/broker.js:306:31
    at new Promise (<anonymous>)
    at Broker.ensure.start (/Users/xo/code/_/proxy/node_modules/live-mutex/dist/broker.js:300:36)
    at handleServerConnecting (/Users/xo/code/_/proxy/app/index.js:239:31)
    at processTicksAndRejections (internal/process/task_queues.js:82:5) {
  code: 'EADDRINUSE',
  errno: 'EADDRINUSE',
  syscall: 'listen',
  address: '/tmp/lmx.sock',
  port: -1
}

EADDRINUSE

I'm getting an error like this. The code below is a small example, but should show the setup I'm using. I noticed that at around 5k connections/s, I see the error in about 1 in 5000 connections.

(node:7495) UnhandledPromiseRejectionWarning: Error: listen EADDRINUSE 127.0.0.1:6970
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at Server.setupListenHandle [as _listen2] (net.js:1367:14)
    at listenInCluster (net.js:1408:12)
    at doListen (net.js:1517:7)
    at _combinedTickCallback (internal/process/next_tick.js:141:11)
    at process._tickCallback (internal/process/next_tick.js:180:9)
import { LMXBroker, LMXClient } from 'live-mutex';

const wrappedLog = (name: string, level = 'debug') => (...args: any[]) => (log[level] || log.debug)(name, ...args);

let mutexClient;

const getMutex = async () => {
    if (mutexClient) {
        return mutexClient;
    }

    const [broker, client] = await Promise.all([
        new LMXBroker().ensure(),
        new LMXClient().connect()
    ]);

    broker.emitter.on('warning', wrappedLog('broker', 'warn'));
    client.emitter.on('warning', wrappedLog('client', 'warn'));

    mutexClient = client;

    return client;
};

// This function will be called by multiple requests
// I've just simplified to show the issue
const main = async () => {
    // Connect to mutex broker
    const mutex = await getMutex();

    // Lock mutex
    const lock = await mutex.acquireLock('registered').catch(error => {
        log.error(error);
        server.close();
    });

    if (!lock) {
        return;
    }

    // Unlock mutex
    await mutex.releaseLock('registered', { id: lock.id });
};

main().catch(error => {
    console.error(error);
});

implement lock({force:true});

still need to implement:

client.lock({force:true}, function(){});

alternatively, this is simply:

client.lock(true, function(){});

Client is not a constructor Error

When I require two sections of the lib and I want to make an instance of Client, I get this error!
Has anything changed?

const lmUtils = require('live-mutex/utils');
const Client= require('live-mutex/client');
//...
// some inits 
//...  
lmUtils.launchBrokerInChildProcess(config, function () {
            
            client = new Client(config);

            client.ensure().then(function () {
                  c = client;
            })
      })

but on execution...

         client = new Client(config);
                     ^

TypeError: Client is not a constructor

On some systems UDS socket opening fails with ENOENT exception

Describe the bug
On some machines live-mutex fails to create a socket. This happens to two people on my team, both running Linux. When giving a path to the LMXClient constructor, an ENOENT exception is thrown. After going through the code, it seems like this is the offending line in src/client.ts:

ws = net.createConnection(...cnkt, () => {

I opened up node and tried running something like that explicitly:

net.createConnection('/tmp/temp.sock', (err) => console.error(err))

That fails all of the time, no matter the path. I honestly do not understand IPC sockets and Unix well enough to understand why that's happening. This is code that runs in production with no problem. Nobody on OSX has any issues either, just me and another dev on desktop Linux.

To Reproduce
Steps to reproduce the behavior:

const udsPath = resolve('mysock')
const gLmxClient = new LMXClient({ udsPath, connectTimeout: 5000 }, (err) => {
  // This gets executed
  console.error('FAILED');
  console.error(err);
});

Desktop (please complete the following information):
Arch Linux

If you have any idea, would love it if you could share it. Thanks! =) And thanks for the library, it's been driving the product for a while now.

[Help wanted] An extremely basic and trivial example for locking a file?

Description

I found this library when I stumbled across a question on Stack Overflow asking how to handle file locking in node, where the author of this package answered with "for even this simple use case, live-mutex serves you better than alternatives like lockfile and warlock" (I'm paraphrasing). When I visited the GitHub repo to read up on the docs, I didn't really see anything about file locking - only heavy mutex lingo which I don't really understand at this point.

If this could be used as an alternative to lockfile, could you provide a simple, super-trivial example of just locking a single file so no other processes can write to it?

Confusing warning message

{"i":969,"l":true,"exports":{}} is being emitted and logged from the live-mutex client and I'm struggling to understand what this corresponds to.

It specifically seems to be emitted when the mutex is under high load, or when it is unable to reach the broker due to a port not being accessible.

I've had a dig in the source code but still haven't found a source/reason for it being emitted.

Could you provide some insight, please?

Version: 0.1.1054

Docker image base from node:alpine

First - thanks for creating this package! We are using it as part of a lightweight session manager with locking support for an LMS.

Just a heads-up: the postinstall.sh blows up when bash isn't available. I am going to add bash via apk add bash for now; it just adds a bit more bloat when trying to keep images small with Alpine. If I get a moment, I can take a stab at making the script ash-compliant as well - and use /bin/sh in the shebang line.

client will stuck after trying to acquire the lock 5 times

What I am trying to do is write a higher-level function to lock an array of keys and, if one of them fails, unlock all acquired locks, sleep 1 second, and retry. I created a client with lockRetryMax=2 and reuse the client in the same function. But when it tries to acquire the same lock 5 times, it just hangs.

I found that in src/client.ts it is hardcoded to 5 in the following lines - any reason for that? It doesn't resolve/reject the promise after 5 attempts right now; it seems to deadlock.

    if (rawLockCount - unlockCount > 5) {
      this.lockQueues[key].unshift(arguments);
    }
    else {
      this.lockInternal.apply(this, arguments);
    }
