deepstream.io-client-js's Issues

records need to be deleted from internal cache straight away after discard

Currently the record waits for an ACK after discarding before it is removed from the client's cache. If discard and getRecord are called in the same thread for the same record, the current record is returned, but no updates are received.

myRecord = ds.record.getRecord( 'someRecord' );
myRecord.discard();
myRecord = ds.record.getRecord( 'someRecord' );
//returns the discarded record that won't receive updates from the server

AnonymousRecord is missing whenReady event

The AnonymousRecord should raise a whenReady event to have the same behaviour as a normal Record.

It's worth noting that setName effectively changes the record used underneath, which means whenReady can be called more than once; this should be explicitly stated in the docs.
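For illustration, a rough sketch of the requested behaviour. getAnonymousRecord and setName follow the existing client API; the 'ready' event shown here is what this issue asks for, not something the current client emits.

    var anonymousRecord = ds.record.getAnonymousRecord();

    // requested: fire once per underlying record, i.e. again after every setName()
    anonymousRecord.on( 'ready', function() {
        console.log( 'underlying record is ready' );
    });

    anonymousRecord.setName( 'user/a' ); // first underlying record
    anonymousRecord.setName( 'user/b' ); // switches records - 'ready' would fire again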

Auto-resubscribe events and record listens after connection drop

Re-send subscriptions and listens for Events after a connection drop. (Records do it already)

Records used to override the data when resubscribing with the read event. Now the client checks whether a version already exists and, if so, treats the read as an update so that merge strategies can be applied.

Define merge behaviour / strategies

What should happen when a version conflict occurs? Possible solutions:

  • Client gets the latest record from the server and merges its changes on top
  • Client forces its changes
  • Client accepts changes from server
  • User can register a callback and implement its own merge strategy

set isDestroyed = true and isReady flag before delete event

At the point in time the record emits a delete event, the isReady flag should already be set to false.

The isDestroyed flag is a bit trickier - currently it is used by the record internally to check if it still needs to be destroyed - we might need to introduce an additional internal flag for that, so that once the delete message is received:

  • isDestroyed: true
  • isReady: false

Record.subscribe not triggered if result is empty

When I call Record.subscribe(callback), it is not triggered if either of the following conditions are true:

  1. The record did not previously exist.
  2. The record's value is an empty object.

These two conditions are probably identical, since a newly created record contains no data.

automated deltas

Use the algorithm that's used to find tree conflicts within merges ( #82 ) to automatically check objects provided by record.set( obj ) against the previous version to establish if only a path within the record has changed. If so, send a patch rather than an update
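A rough illustration of the kind of check this would need (generic JavaScript, not the client's actual conflict algorithm): collect the paths that differ between the previous and the new object, and send a patch if exactly one path changed.

    // collect dot-separated paths whose values differ between two plain objects
    function changedPaths( oldObj, newObj, prefix ) {
        prefix = prefix || '';
        var paths = [];
        var keys = Object.keys( oldObj ).concat( Object.keys( newObj ) );
        keys.forEach( function( key, i ) {
            if( keys.indexOf( key ) !== i ) { return; } // skip duplicate keys
            var path = prefix ? prefix + '.' + key : key;
            var a = oldObj[ key ], b = newObj[ key ];
            if( a && b && typeof a === 'object' && typeof b === 'object' ) {
                paths = paths.concat( changedPaths( a, b, path ) );
            } else if( a !== b ) {
                paths.push( path );
            }
        });
        return paths;
    }

    var previousVersion = { user: { name: 'Anna', age: 30 } };
    var newVersion = { user: { name: 'Anna', age: 31 } };
    changedPaths( previousVersion, newVersion ); // [ 'user.age' ]
    // one changed path -> a PATCH for 'user.age' could be sent instead of a full UPDATE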

Sporadic "VERSION_EXISTS" Error - Possible Fix?

Occasionally, I get a "VERSION_EXISTS" error saying something like user.1 tried to update record unread_counts.1&2 to version 3 but it already was 3. Due to the nature of my code, this record will get updated quite frequently. I assume that this is happening because when the update is sent, the client has not yet received the updated version from, in my case, user.2.

Without knowing enough about how the server manages version numbers, would it be possible to handle version increments on the server side in order to prevent this? I can't see how a client jumping from version 1 to version 3 of a document could cause any serious issues if updates are processed so quickly.

By the way, don't hate on my user ID scheme. Users are not stored in the DS Storage at all; the ID is only used as an identifier for the user object in my application's database.

Force disconnect/login a client that is connected

We have an Android application that is showing our web application in a web view (we're using Crosswalk's webview, https://crosswalk-project.org/). When the application is losing its focus (i.e. the user minimizes the application) we need to disconnect our socket, and when the application is regaining its focus we need to log in with the socket again. Or is this the wrong way to do it?

We know when the application is losing focus and when it's regaining focus.

We've tried to close the socket (with deepstreamClient.close()) and then log in with it again. However, this gives us the error: this client's connection was closed, which makes sense since we actually closed the socket.

What we would need is a deepstreamClient.disconnect() or some kind of re-login on the current socket.

Is this possible to implement?
I can make a PR if I could get some hints on where to add this functionality so you guys don't have to implement this =)
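Until such an API exists, a possible workaround (a sketch; the focus/blur hooks are app-specific assumptions) is to throw the closed client away and create a fresh one when the app regains focus:

    var client = null;

    function onAppFocus() {
        client = deepstream( 'localhost:6020' ); // fresh connection
        client.login( { username: 'xxx' }, function( success ) {
            // ready to use records/events again
        });
    }

    function onAppBlur() {
        if( client ) {
            client.close(); // a closed client can't be reused, hence the new instance above
            client = null;
        }
    }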

getList returns a new instance every time it's called

let list = ds.record.getList("test/messages");

list.on("entry-added", (entry) => {
    console.log("entry:: ", entry);
});

list.subscribe((l) => {
    console.log("subscribed: ", l);
});

// elsewhere, a second getList call for the same name
let list = ds.record.getList("test/messages");
let record = ds.record.getRecord(id);
record.set(msg);
list.addEntry(id);

The entry-added callback won't be called because the two list instances are different.

Promises API

Please consider making an alternative (new?) promise-based API that could be easier to wrap one's head around and to use with the upcoming (already supported in Babel) async/await.

It would be great to use deepstream's basic get/set APIs this way in the future:

// inside an async function:
try {
  const record = ds.record.getRecord( 'someUser' );
  const recordData = await record.get();
  // do smth with data...
  record.set({ new: 'data' });
} catch( err ) {
  console.log( err ); // getting or setting failed
}
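In the meantime, a thin wrapper over the existing callback API gives similar ergonomics (a sketch, assuming a Promise implementation is available):

    function getRecordData( ds, name ) {
        return new Promise( function( resolve, reject ) {
            var record = ds.record.getRecord( name );
            record.whenReady( function() {
                resolve( record.get() );
            });
            record.on( 'error', reject );
        });
    }

    // inside an async function:
    // const data = await getRecordData( ds, 'someUser' );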

Possibility to send messages between WebRTC clients

Hi,
I haven't found anything like that in your docs, so I'd suggest that one should be able to implement plain text messaging alongside the video/audio streaming in calls.
Or is this already possible?

Thank you!

client does not get a close event from engine.io when device sleeps

Mostly on Android and iOS (and on desktop when the computer goes to sleep). The client pauses and after a minute the client is logged out on the deepstream server. When the client is woken up again it (the deepstream client) doesn't know it is disconnected. After a minute of a weird in-between state it realizes it is disconnected and reconnects.

Following is some code that forces a reconnect ( won't run copy/paste )


var latestHeartBeat = Date.now();
var engine = dsClient._connection._endpoint;

// '_' is assumed to be lodash/underscore, 'bus' an app-level event emitter
if( !_.isUndefined( engine.onHeartbeat ) ) {
    var onHeartBeat = function() {
        latestHeartBeat = Date.now();
    };
    engine.on( 'heartbeat', onHeartBeat );

    var browserAwake = function() {
        // no heartbeat for 30 seconds - force the endpoint to treat this as a ping timeout
        if( Date.now() - latestHeartBeat > 30000 ) {
            dsClient._connection._endpoint.onClose( 'ping timeout' );
        }
    };
    bus.on( 'browser_awake', browserAwake );
}

Cannot resolve module 'net'

I'm getting this error when trying to bundle deepstream client using webpack

ERROR in ./~/deepstream.io-client-js/src/tcp/tcp-connection.js
Module not found: Error: Cannot resolve module 'net' in c:_projects\bnwan\showmyphotos\node_modules\deepstream.io-client-js\src\tcp
 @ ./~/deepstream.io-client-js/src/tcp/tcp-connection.js 1:10-26

Automatically call discard() when all subscriptions are unsubscribed.

Whenever record.unsubscribe() is called, could the client detect if it has any more subscriptions remaining after that one, and if not, inform the server it is no longer interested in the record (i.e. what record.discard() does)?

It seems like this should be a transparent occurrence, for performance and to prevent users having to re-invent the same logic all the time. Would a client ever want the server to still notify it of record changes for a record it has 0 subscriptions to?

'Delete of an unqualified identifier in strict mode' introduced in 0.3.7

This issue was introduced in 0.3.7 (does not error in 0.3.6).

When passing --use-strict to node, the following line:
https://github.com/hoxton-one/deepstream.io-client-js/blob/master/src/utils/ack-timeout-registry.js#L62

generates the following error:

x/node_modules/deepstream.io-client-js/src/utils/ack-timeout-registry.js:62
                delete timeout;
                       ^^^^^^^

SyntaxError: Delete of an unqualified identifier in strict mode.

I believe it would be a shame to leave this in since, prior to this addition, this module was perfectly compatible with strict-mode.

/cc @yasserf as it's your line addition

Thanks

Record.get does not allow for integer "paths"

I'm attempting to use a record for an "unread count" tracker in my new chat app. The desired format is something like this:

var myId = 1; // for example
var otherUserId = 2; // for example
var conversationId = '1&2'; // for example
var counts = ds.record.getRecord('unread_counts.' + conversationId);

// "whenReady" is respected in the following examples

function incrementForOtherUser() {
  var newCount = counts.get(otherUserId) + 1;
  counts.set(otherUserId, newCount);
}

Essentially, the record is a key/value map with two keys. Both the keys and the values are integers.

When I try to do this, I get an exception this._path.split is not a function. Trace below (forgive the Angular bits).

TypeError: this._path.split is not a function
    at 49.JsonPath._tokenize (http://bridgeconx.app/build/js/core-bd895c80.js:36701:25)
    at new 49.JsonPath (http://bridgeconx.app/build/js/core-bd895c80.js:36640:7)
    at 53.Record._getPath (http://bridgeconx.app/build/js/core-bd895c80.js:37585:25)
    at 53.Record.get (http://bridgeconx.app/build/js/core-bd895c80.js:37248:16)
    at http://bridgeconx.app/build/js/core-bd895c80.js:38907:58
    at 53.Record.whenReady (http://bridgeconx.app/build/js/core-bd895c80.js:37417:3)
    at incrementUnreadCountForOtherUser (http://bridgeconx.app/build/js/core-bd895c80.js:38906:25)
    at Scope.link.scope.sendMessage (http://bridgeconx.app/build/js/core-bd895c80.js:38832:15)
    at $parseFunctionCall (http://bridgeconx.app/build/js/core-bd895c80.js:12415:18)
    at ngEventDirectives.(anonymous function).compile.element.on.callback (http://bridgeconx.app/build/js/core-bd895c80.js:21577:17)

It seems as though JsonPath._tokenize doesn't respect integer "keys". This could possibly be fixed by casting the provided path to a string before passing it to the _tokenize method.
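As a workaround until that lands, the numeric key can be cast to a string on the calling side (adapting the example above):

    function incrementForOtherUser() {
        var path = String( otherUserId ); // '2' - record paths currently need to be strings
        var newCount = ( counts.get( path ) || 0 ) + 1;
        counts.set( path, newCount );
    }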

handle version conflicts

Merge changes on top of incoming update and re-issue

What if both changes are for the same path? Set merge strategy as option? On Record or on client level?

Dump to local storage

In the event that a connection is unrecoverable, make the content of Connection._queuedMessages available to be saved locally until the connection comes back. Do this either by

a) providing a method to access the queued messages which can be used in conjunction with connection state change events

ds.on( 'connectionStateChanged', function( connectionState ) {
    if( connectionState === 'ERROR' ) {
        var queuedMessages = ds.getQueuedMessages();
        // do something with the queued messages, e.g.
        localStorage.setItem( 'backup', JSON.stringify( queuedMessages ) );
    }
});

this would require another hook to send an array of queued messages via the client, e.g.

if( localStorage.getItem( 'backup' ) ) {
    ds.sendMessages( JSON.parse( localStorage.getItem( 'backup' ) ) );
}

which might be a bit confusing to users as sendMessages sounds like a central concept, yet is only used in rare recovery cases

b) add a backupToLocalStorage: true option to the client (false by default) - which would be simpler, but can't be used in nodeJs - which could write it to a file...but that's where it gets complicated again

Thoughts?

Not working with webpack

I'm trying to use this client with webpack, but failing because of require('net') in TcpConnection. This happens because webpack tries to resolve all requires statically as opposed to browserify which does it lazily.

I can see two ways to solve this:

  1. Provide separate entry points for browser- and node environments
  2. Use require.ensure to require engine.io/net. This is a better solution, but would make some of the constructors asynchronous and break the api.

Thoughts? With webpack quickly becoming the standard bundler I don't think this is something that can be ignored.
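As a stop-gap on the consumer side, webpack's node option can stub out the server-only modules so the TCP code path is never pulled into the browser bundle (a sketch, using a webpack 1.x-style config):

    // webpack.config.js
    module.exports = {
        entry: './src/app.js',
        output: { filename: 'bundle.js' },
        node: {
            net: 'empty', // replaces require('net') with an empty module in the browser build
            tls: 'empty'
        }
    };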

add advanced merge strategies

Add merge strategies that will be executed when the client encounters a version exists error. The following strategies will be supported

C.MERGE_IF_NO_CONFLICT (default) - the current and incoming versions will be merged if neither a value nor a tree conflict exists.

C.LOCAL_WINS - the client's current version will be used

C.REMOTE_WINS - the incoming update will be used

function(){} - a custom function with the following signature:

function( localValue, remoteValue, recordName, recordVersion, callback ) {
    callback( error, mergedVersion );
}

merge strategies can be set in two places:

globally, as part of the client options

var ds = deepstream( 'localhost:6020', {
    // merge strategy used for every record created by this client
    mergeStrategy: C.REMOTE_WINS
});

or on a per-record basis

var rec = ds.record.getRecord( 'some/record' );
rec.setMergeStrategy( function(){...});
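For example, a custom strategy using the signature above (a sketch; the merge logic here is deliberately naive):

    rec.setMergeStrategy( function( localValue, remoteValue, recordName, recordVersion, callback ) {
        // keep the remote data, but preserve any local keys the remote version lacks
        var merged = {};
        Object.keys( localValue ).forEach( function( key ) { merged[ key ] = localValue[ key ]; } );
        Object.keys( remoteValue ).forEach( function( key ) { merged[ key ] = remoteValue[ key ]; } );
        callback( null, merged );
    });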

Subscribing to an empty list returns an object rather than an empty array

For example:

var shrubs = ds.record.getList('shrubs');
// wait a bit, or wrap in .whenReady()
shrubs.subscribe(function(shrubEntries) { console.log('shrubEntries', shrubEntries) }, true);

Produces an empty object rather than an empty array:

shrubEntries Object {}

Digging into the Record code which backs a List, I tried initially setting this._$data to [] but it seems to be overwritten in the _onRead handler here:
https://github.com/hoxton-one/deepstream.io-client-js/blob/master/src/record/record.js#L347

I don't understand why a message reading the record is setting its data, or am I misinterpreting something?

What I would like to be able to do (for a potentially new list 'shrubs') is:

ds.record.getList('shrubs').subscribe(function(shrubEntries) { console.log('shrubEntries', shrubEntries) }, true);

But this doesn't log anything unless the subscribe is separated and wrapped in a .whenReady().
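For reference, the whenReady wrapping mentioned above looks roughly like this:

    var shrubs = ds.record.getList('shrubs');
    shrubs.whenReady(function() {
        shrubs.subscribe(function(shrubEntries) { console.log('shrubEntries', shrubEntries); }, true);
    });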

The way I'm using deepstream at the moment, I was hoping to rely on an empty array turning up to tell the application that the data has loaded but there is nothing there, so that e.g. a loading spinner can be replaced with a blank list. Are there any situations where you would pass true to subscribe's triggerNow and not want an initial subscription?

I know I can wrap things in .whenReady() but I'm not sure why you wouldn't want this behaviour as default if you have passed true to triggerNow.

Thanks for open sourcing this excellent framework, I have been enjoying experimenting with it.

Record listens should return the number of subscribers interested in the record

Given the ds-demo-provider, the generator will never stop generating after the first subscription.

Stepping through the logic:

  • Server and provider are started
  • Client connects and subscribes to FX/* record
  • onSubscription is called
    • isSubscribed is true
    • fx-data-provider subscribes to FX/* record
    • There are now two subscribers!
  • Client disconnects.
  • One subscriber (fx-data-provider) remains, so onSubscription is not called.

I was hoping to have on-demand providers in my current project.
Any suggestions on how to resolve this issue?
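What the title asks for would look roughly like this (the subscriberCount parameter and the start/stop helpers are hypothetical, not current API):

    ds.record.listen( 'FX/*', function( recordName, isSubscribed, subscriberCount ) {
        if( isSubscribed && subscriberCount > 0 ) {
            startProviding( recordName ); // hypothetical app-level helper
        } else if( subscriberCount === 0 ) {
            stopProviding( recordName ); // hypothetical app-level helper
        }
    });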

record.subscribe without a path doesn't trigger for changes to nested objects

The problem is that record._beginChange uses record.get to retrieve this._oldValue, but record.get only uses utils.shallowCopy. Therefore utils.deepEquals( this._oldValue, this._$data ) in record._completeChange returns true and the listeners are not notified.

I'll change it to deepCopy records. This is less performant, but also prevents hard to diagnose bugs such as unconsciously manipulating deepstream's internal values.
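A minimal, generic illustration of why a shallow copy hides nested changes (not the client's actual utils code):

    function shallowCopy( obj ) {
        var copy = {};
        for( var key in obj ) { copy[ key ] = obj[ key ]; }
        return copy;
    }

    var data = { nested: { count: 1 } };
    var oldValue = shallowCopy( data ); // oldValue.nested === data.nested

    data.nested.count = 2; // mutate the shared nested object in place

    // a deep comparison now sees two identical structures, because both references
    // point at the same (already mutated) nested object - so no change is detected
    console.log( JSON.stringify( oldValue ) === JSON.stringify( data ) ); // true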

connection proxy

Hi, just wondering if it's possible to piggyback Deepstream Client connections over the same port as the server, as is done with socket.io and engine.io.

i.e: instead of:
deepstream_client('localhost:6021')

I'd like to just use this on the client:
deepstream_client()

and have server-side middleware register on a known (configurable) path, e.g: /deepstream/, with the port being taken from the current url.

i.e: http communication with DS would piggy-back on the same port.

That way I can be sure that corporate customers can just use one port to connect to the backend?

Dave

Provide failover/load balancing connection mechanism

We need a way to provide multiple deepstream connection points to the client to increase resilience.

Given that nodes can be created and shut down automatically when auto-scaling, this should preferably not be hard coded in the client but rather use a discovery mechanism.

Note: Only api change is when connecting to deepstream initially
Note: Heartbeat on servers is configurable

The requirements are:

  1. Must work in a distributed fashion ( No central control )
  2. Server lists should not be stored in cache and storage, instead just in memory ( This helps with distribution by avoiding concurrency issues )
  3. Deepstreams that are no longer available should be deleted from lists in a reliable manner and time

Client

1: Provide multiple initial deepstream connection endpoints

    deepstream( [ 'localhost', 'deepstream1', 'deepstream2' ], options )

2: Request from a deepstream a list of all currently available deepstream nodes ( using an HTTP GET request )

    get( 'http://deepstream1/available-servers' ) // Returns array of all deepstream urls

3: Figure out which one has the best score
A score is currently a combination of:
* Latency
* Resource usage on deepstream machine ( CPU, memory, logged in users, etc )

    calculateScore( 'http://deepstream1/calculate-score' )

4: Connect to best deepstream
5: On disconnect, repeat step 2 onwards
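A rough sketch of that client-side flow (get, calculateScore and the /available-servers and /calculate-score endpoints are the hypothetical pieces proposed above):

    function connectToBestDeepstream( initialUrls, options, onConnected ) {
        // 2: ask any known deepstream for the full list of available servers
        get( 'http://' + initialUrls[ 0 ] + '/available-servers', function( serverUrls ) {
            // 3: score every server ( latency + resource usage ) and pick the best
            var scored = serverUrls.map( function( url ) {
                return { url: url, score: calculateScore( url + '/calculate-score' ) };
            });
            scored.sort( function( a, b ) { return b.score - a.score; } );

            // 4: connect to the best deepstream
            var client = deepstream( scored[ 0 ].url, options );

            // 5: on disconnect, repeat from step 2
            client.on( 'connectionStateChanged', function( state ) {
                if( state === 'CLOSED' || state === 'ERROR' ) {
                    connectToBestDeepstream( initialUrls, options, onConnected );
                }
            });
            onConnected( client );
        });
    }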

Server

1: On server start, listen for existence announcements and ask all other deepstreams to announce themselves via the event bus

    messageBus.on( I_EXIST, function( serverConfig ) {
       // Store server config
    } );
    messageBus.emit( WHO_EXISTS )

2: When receiving an existence request, respond with an existence announcement

    messageBus.on( WHO_EXISTS, function() {
        messageBus.emit( I_EXIST );
    } );

3: Send existence heartbeat every x minutes

    setInterval( function() {
        messageBus.emit( I_EXIST );
    }, x );

4: Delete servers from list if heartbeat is not received in y minutes ( where y > x )

    setInterval( function() {
          //Remove server config
    }, y );
