
soketi / soketi


Next-gen, Pusher-compatible, open-source WebSockets server. Simple, fast, and resilient. 📣

Home Page: https://soketi.app/

License: GNU Affero General Public License v3.0

JavaScript 2.01% TypeScript 97.22% Dockerfile 0.28% PHP 0.45% Shell 0.04%
pusher websocket ws broadcasting real-time realtime nodejs docker hacktoberfest javascript

soketi's Introduction

soketi


Next-gen, Pusher-compatible, open-source WebSockets server. Simple, fast, and resilient. 📣

๐Ÿค Supporting

Soketi is meant to be open source, forever and ever. It solves a problem many developers face: wanting to be limitless while testing locally or running benchmarks. More than that, it is also suited for production usage, whether it is public-facing for your frontend applications or internal to your team.

The frequency of releases and maintenance depends on the available time, which is tight as hell. Recently, there were maintenance issues, and this caused infrequent updates as well as infrequent support.

To cover some of the expenses of building new features and maintaining the project, we would be more than happy if you could donate towards the goal. This will ensure that Soketi is taken care of to its full extent.

💰 Sponsor the development via Github Sponsors

Logos from Sponsors

Soketi

Blazing fast speed ⚡

The server is built on top of uWebSockets.js - a C application ported to Node.js. uWebSockets.js is demonstrated to perform at levels 8.5x that of Fastify and at least 10x that of Socket.IO. (source)

Cheaper than most competitors 🤑

For a $49 plan on Pusher, you get a limited amount of connections (500) and messages (30M).

With Soketi, for the price of an instance on Vultr or DigitalOcean ($5-$10), you get virtually unlimited connections, messages, and some more!

Soketi is capable of holding thousands of active connections under high traffic, on less than 1 GB of RAM and 1 CPU in the cloud. You can also get a free $100 on Vultr to try out soketi →

Easy to use 👶

Whether you run your infrastructure in containers or monoliths, soketi is portable. There are multiple ways to install and configure soketi, from single instances for development, to tens of active instances at scale with hundreds or thousands of active users.

Pusher Protocol 📡

soketi implements the Pusher Protocol v7. Existing projects that connect to Pusher require only minimal code changes to work with Soketi - you just set the host and port and swap in your credentials.
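
For illustration, a pusher-js client can be pointed at a soketi instance roughly like this (a minimal sketch; the key, host, and port are placeholders, and the options mirror the pusher-js settings used in the issues further down this page):

import Pusher from 'pusher-js';

// Placeholder credentials and endpoint - swap in your own app key and host.
const pusher = new Pusher('app-key', {
    wsHost: '127.0.0.1',
    wsPort: 6001,
    forceTLS: false,
    disableStats: true,
    enabledTransports: ['ws', 'wss'],
    cluster: 'mt1', // still required by recent pusher-js versions; wsHost takes precedence
});

// Subscribe exactly as you would against Pusher itself.
pusher.subscribe('my-channel').bind('my-event', (data: unknown) => {
    console.log(data);
});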

App-based access 🔐

Just like Pusher, you can access the API and WebSockets through the apps you define. Store the app definitions using the built-in support for static arrays, DynamoDB, and SQL-based servers like Postgres.
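
As a rough idea of what an app definition looks like, here is the shape the built-in array driver uses, taken from the default config dumps that appear in the issues below (the values shown are the defaults, not something to ship as-is):

// Default app entry of the array driver, as seen in the config dumps below.
const defaultApp = {
    id: 'app-id',
    key: 'app-key',
    secret: 'app-secret',
    maxConnections: -1,            // -1 means unlimited
    enableClientMessages: false,
    enabled: true,
    maxBackendEventsPerSecond: -1,
    maxClientEventsPerSecond: -1,
    maxReadRequestsPerSecond: -1,
    webhooks: [],
};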

Production-ready! 🤖

In addition to being a good companion during local development, soketi comes with the resiliency and speed required for demanding production applications. At scale, with Redis, growing your deployment is a breeze.

Built-in monitoring 📈

You just have to scrape the Prometheus metrics. Soketi exposes a wide range of metrics for monitoring the deployment.
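
As a quick sanity check, the exported series can be pulled with any HTTP client; a sketch, assuming metrics are enabled, the default metrics port 9601 that shows up in the config dumps below, and a /metrics path (verify the path against the docs for your version):

// Node 18+ sketch: dump whatever the Prometheus endpoint currently exports.
async function dumpSoketiMetrics(): Promise<void> {
    const response = await fetch('http://127.0.0.1:9601/metrics');
    console.log(await response.text()); // soketi_-prefixed gauges and counters
}

dumpSoketiMetrics();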

See it in action

Deployments

Community projects

📃 Documentation

The entire documentation is available on Gitbook 🌍

🌟 Stargazers

We really appreciate how this project turned out to be such a great success. It will always remain open source, free, and maintained. This is real-time as it should be.

Stargazers over time

๐Ÿค Contributing

Please see CONTRIBUTING for details.

โ‰ Ideas or Discussions?

Have any ideas that could make it into the project? Perhaps you have questions? Jump into the discussions board or join the Discord channel.

🔒 Security

If you discover any security related issues, please email [email protected] instead of using the issue tracker.

🎉 Credits

soketi's People

Contributors

alexatnewton, aprivette, atymic, blazheiko, daynnnnn, dependabot[bot], dkulyk, erikn69, henriquespin, hjbdev, iquadrat, jdanino, jelleroorda, mattoz0, mlnkrish, namoshek, nsmith5, rennokki, rolandstarke, soloradish, stayallive, wodka, xico2k


soketi's Issues

Unable to use in Kubernetes

I first tried to deploy the Helm chart and got an error that version 2.2.0 does not exist.

I then tried to test by running a container individually and am still getting an error. Am I missing something?

kubectl run -i --rm --tty debug --image=quay.io/soketi/pws:0.8-16-alpine -- sh    
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: Internal error occurred: error attaching to container: container not running (166086ad6351e1ddd044f69aecf4166d6e20e923b48c100f5a9e385f4710f6fc)
/app/node_modules/prom-client/lib/metric.js:36
			throw new Error('Invalid metric name');
			      ^

Error: Invalid metric name
    at new Metric (/app/node_modules/prom-client/lib/metric.js:36:10)
    at new Gauge (/app/node_modules/prom-client/lib/gauge.js:19:1)
    at PrometheusMetricsDriver.registerMetrics (/app/dist/metrics/prometheus-metrics-driver.js:65:31)
    at new PrometheusMetricsDriver (/app/dist/metrics/prometheus-metrics-driver.js:12:14)
    at new Metrics (/app/dist/metrics/metrics.js:10:27)
    at Server.start (/app/dist/server.js:165:31)
    at Cli.start (/app/dist/cli/cli.js:109:28)
    at Function.start (/app/dist/cli/cli.js:97:26)
    at /app/dist/cli/index.js:6:63
    at M.applyBuilderUpdateUsageAndParse (/app/node_modules/yargs/build/index.cjs:1:7359)
pod "debug" deleted

[bug] Server crashes while benchmarking with Artillery

Hi,

The problem

I was testing this project with Artillery to see how many connections and messages it could handle, but the server kept crashing on me. I used the following config for Artillery:

config:
  target: "ws://localhost:6001/app/app_key?protocol=7&client=js&version=7.0.3&flash=false"
  phases:
    - duration: 30
      arrivalRate: 10
scenarios:
  - engine: "ws"
    flow:
      - send: '{"event":"pusher:subscribe","data":{"auth":"","channel":"chanel_name"}}'  # Subscribe to the public channel
      - think: 60 # Every connection will remain open for 60s

With this, 10 new clients will connect every second for 30 seconds, and each will stay connected for 60 seconds. The subscribe event to a channel is what I needed to test messages (by sending them from another script).

However, every time I started the test, the server crashed with the following error:

/Users/[app_path]/node_modules/@soketi/soketi/dist/channels/public-channel-manager.js:11
        return this.server.adapter.getNamespace(ws.app.id).addToChannel(ws, channel).then(connections => {
                                                       ^

TypeError: Cannot read property 'id' of undefined
    at PublicChannelManager.join (/Users/[app_path]/node_modules/@soketi/soketi/dist/channels/public-channel-manager.js:11:56)
    at WsHandler.subscribeToChannel (/Users/[app_path]/node_modules/@soketi/soketi/dist/ws-handler.js:233:24)
    at WsHandler.onMessage (/Users/[app_path]/node_modules/@soketi/soketi/dist/ws-handler.js:121:22)
    at message (/Users/[app_path]/node_modules/@soketi/soketi/dist/server.js:345:68)

The cause

After some testing and digging, I've discovered that my test script for Artillery sends the pusher:subscribe event before the connection is fully established. Normally the connection log will look something like this:

[Sat Jan 15 2022 17:01:18 GMT+0100 (Central European Standard Time)] 👨‍🔬 New connection:
{
  ws: uWS.WebSocket {
    ip: '127.0.0.1',
    ip2: '',
    appKey: 'app_key'
  }
}
[Sat Jan 15 2022 17:01:18 GMT+0100 (Central European Standard Time)] ✈ Sent message to client:
{
  ws: uWS.WebSocket {
    ip: '127.0.0.1',
    ip2: '',
    appKey: 'app_key',
    sendJson: [Function (anonymous)],
    id: '2037856788.7732658047',
    subscribedChannels: Set(0) {},
    presence: Map(0) {},
    app: App {
      id: 'app_name',
      key: 'app_key',
      secret: 'app_secret',
      //..
    },
    timeout: //..
  },
  data: {
    event: 'pusher:connection_established',
    data: '{"socket_id":"2037856788.7732658047","activity_timeout":30}'
  }
}
[Sat Jan 15 2022 17:01:18 GMT+0100 (Central European Standard Time)] ⚡ New message received:
{
  message: {
    event: 'pusher:subscribe',
    data: {
      auth: '',
      channel: '[channel_name]'
    }
  },
  isBinary: false
}

As you can see, the WebSocket interface has an app property which contains an id, key, secret, etc.

However, with artillery the log looks as follows:

[Sat Jan 15 2022 16:59:23 GMT+0100 (Central European Standard Time)] 👨‍🔬 New connection:
{
  ws: uWS.WebSocket {
    ip: '127.0.0.1',
    ip2: '',
    appKey: 'app_key'
  }
}
[Sat Jan 15 2022 16:59:23 GMT+0100 (Central European Standard Time)] ⚡ New message received:
{
  message: {
    event: 'pusher:subscribe',
    data: {
      auth: '',
      channel: '[channel_name]'
    }
  },
  isBinary: false
}

It misses the pusher:connection_established message from the server to the client, and as you can see the WebSocket interface does not yet have an app property when it receives the pusher:subscribe event. Because of this the server crashes, since it expects the app property when handling the subscribe event.

Possible solution

The first solution from my side is very simple: have Artillery wait for the pusher:connection_established event before sending data. This works. However, I don't think the server should crash when a client accidentally sends data before that point.
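
To illustrate that ordering outside of Artillery, a bare WebSocket client would wait for that event before subscribing; a sketch assuming the ws package and the same local endpoint as the Artillery config above:

import WebSocket from 'ws';

// Connect to the same endpoint the Artillery scenario targets.
const socket = new WebSocket('ws://localhost:6001/app/app_key?protocol=7&client=js&version=7.0.3');

socket.on('message', (raw) => {
    const message = JSON.parse(raw.toString());

    // Only subscribe once the server has confirmed the connection,
    // i.e. once ws.app has been populated on the server side.
    if (message.event === 'pusher:connection_established') {
        socket.send(JSON.stringify({
            event: 'pusher:subscribe',
            data: { auth: '', channel: 'channel_name' },
        }));
    }
});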

Maybe adding something like the following to dist/ws-handler.js at line 256 (version: 0.26.3 of soketi)

channelManager.join(ws, channel, message).then((response) => {
            if (!response.success) {
                // ..
            }
            // fix example
            if(!ws.app) {
                return ws.sendJson({
                        event: 'pusher:subscription_error',
                        channel,
                        data: {
                            type: 'ConnectionError',
                            error: "Subscribe request sent before connection fully established. Wait for connection to be completed",
                            status: 405,
                        },
                    });
            }
            // end fix example
            if (!ws.subscribedChannels.has(channel)) {
                ws.subscribedChannels.add(channel);
            }
            this.server.adapter.getNamespace(ws.app.id).addSocket(ws);
            // ..
});

Some other places where I think the server can crash because of the same issue are the following methods:

  • unsubscribeFromChannel() at line 321
  • handleClientEvent() at line 354
  • checkAppConnectionLimit() at line 431

I hope this is helpful for your awesome project!

One of the dependencies is likely compromised - do not update

Hello, this morning I tried updating to the latest version of soketi, and the next thing I knew my logs were full of garbage - GBs of garbage. The system storage filled up within seconds. I then stopped the process, cleared the logs, and downgraded to the previous version I was using (0.21.0), and yet again I was met with a full filesystem. I created a fresh Ubuntu Docker image, installed fresh Node and @soketi/soketi version 0.26.0, and was met with the same issue.

Example output after the upgrade:
(screenshot)

I think it would be a good habit to start hard-locking top-level dependencies. That doesn't fully prevent issues like this, but it could catch some. For now I would discourage anyone from updating this package.

mysql driver issue

When I use the array driver everything is OK, but when I change the driver to mysql I get this error in the console:

code: 'ER_ACCESS_DENIED_ERROR',
errno: 1045,
sqlState: '28000',
sqlMessage: "Access denied for user 'root'@'localhost' (using password: YES)",
sql: undefined

I use XAMPP and PHP 7.4.

.env =

APP_MANAGER_DRIVER=mysql
DB_MYSQL_DATABASE=pusher
APP_MANAGER_MYSQL_TABLE=apps
DB_MYSQL_HOST=127.0.0.1
DB_MYSQL_PORT=3306
DB_MYSQL_USERNAME=root
DB_MYSQL_PASSWORD=null

My local database username is root and it has no password, so I set DB_MYSQL_PASSWORD to null.

What's wrong with this?

Helm chart 0.2.2 indentation issue

It appears there is an indentation issue in the deployment yaml.

Error: YAML parse error on pws/templates/deployment.yaml: error converting YAML to JSON: yaml: line 80: did not find expected key
helm.go:88: [debug] error converting YAML to JSON: yaml: line 80: did not find expected key
YAML parse error on pws/templates/deployment.yaml

(screenshot)

Video tutorial and nodejs

Hello
Hello. I am not that well versed in the WebSocket topic yet; I am studying it nowadays. I found this GitHub project via laravel-echo-server and it looks promising.
I tried the documentation and the server side works; however, the configuration side is not that clear. What I mean is, I found https://docs.soketi.app/getting-started/environment-variables and it points to https://github.com/soketi/soketi/blob/master/src/options.ts. OK, but it would be nice if there were a full example configuration, so people could delete the parts they don't need. As it is, it can be confusing.

Is there a video tutorial that explains, with as many examples as possible, everything about soketi and Laravel?

I am planning to use this with Laravel, of course, plus a Node.js client. Is it enough to use the Pusher client library on the Node.js side, or do I need something else too?

Sorry for the beginner questions; I have no idea where else to ask this stuff. Thank you in advance for the answers.

[question] How to change listening host?

Sorry if this is a dumb question, but I'm trying to set up soketi on a test server in our environment. Using Laravel 8 pusher replacement with Echo and Vue. Setup for local dev with the sail container, and it works great.

On the server though, not so much. Got it working with the self-signed cert using the workarounds, and supervisor indicates that soketi is running. Used the CLI install.

However, I'm not sure what env variables to set in order to change the listening host (the logs indicate it's listening on 127.0.0.1, and Vue is trying to connect to 127.0.0.1, which obviously didn't work from the browser). So I changed that to https://my.server.url, which results in a 404.

Is there a list somewhere of all the .env variables soketi itself can use? I would like to change the app ID/pw, the listening URL, etc.

Any help would be greatly appreciated. Thanks!

Laravel Broadcasting not working

I'm trying to broadcast via Laravel, and it seems the events are not being sent to the soketi server.

broadcasting.php

   'connections' => [

        'pusher' => [
            'driver' => 'pusher',
            'key' => 'app_key',
            'secret' =>  'app_secret',
            'app_id' =>  'app_id',
            'options' => [
                'host' => '127.0.0.1',
                'port' =>  '6001',
                'scheme' => 'http',
                'encrypted' => true,
                'useTLS' => false,
            ],
        ],
      ....

Then on the client side

//app.js
window.Echo = new Echo({
    broadcaster: 'pusher',
    key: '7d4a17fff484acabb9c7',
    wsHost: '127.0.0.1',
    wsPort: '6001',
    wssPort: '6001',
    forceTLS: false,
    encrypted: true,
    disableStats: true,
    enabledTransports: ['ws', 'wss'],
    cluster: 'ap1',
});

window.Echo.channel('channel')
    .listen('.test', (e) => {
        console.log(e)
    })

TestEvent.php


class TestEvent implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public $data;
    public function __construct($message = 'Hello World')
    {
        $this->data = ['message'=>$message];
        // Log::info('testing redis queue');
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return \Illuminate\Broadcasting\Channel|array
     */
    public function broadcastOn()
    {
        return new Channel('channel');

    }

    public function broadcastAs()
    {
        return 'test';
    }
}

I started the soketi server by running soketi start
(screenshot)

And for broadcasting I'm using Tinker via

event(new TestEvent('Hello'))

I can't find where I'm wrong.

Scrollbar flickering on certain screen sizes

When the screen has a certain size, the page starts flickering.

I tested this on Chrome and Firefox. Both had the problem.

From what I could see, the problem occurs on the desktop version of the site. When the height of the browser window is just about the same as the height of the page, the scrollbar keeps appearing and disappearing, which moves the content of the page.

(screen recording: Untitled.mov)

Can't load config from dynamodb, server crashes on closed connection

Hi!

I'm trying to run the latest soketi (0.22) in Docker. I've configured the DynamoDB table, as well as the AWS creds. I get no error related to the creds, so I assume they are correct, but in the apps section I see the default JSON config?

When connecting with Pusher, I get the following message:

{"event":"pusher:error","data":{"code":4001,"message":"App key <mykey> does not exist."}}

This key matches the key in dynamodb. Thinking it might not be loading correctly, I tried the default key, which also fails:

{"event":"pusher:error","data":{"code":4001,"message":"App key app-key does not exist."}}

Any ideas? Logs below with debug=1 👇

[29/Dec/2021:01:48:33] {
[29/Dec/2021:01:48:33] adapter: { driver: 'local', redis: { prefix: '' } },
[29/Dec/2021:01:48:33] appManager: {
[29/Dec/2021:01:48:33] driver: 'dynamodb',
[29/Dec/2021:01:48:33] array: {
[29/Dec/2021:01:48:33] apps: [
[29/Dec/2021:01:48:33] {
[29/Dec/2021:01:48:33] id: 'app-id',
[29/Dec/2021:01:48:33] key: 'app-key',
[29/Dec/2021:01:48:33] secret: 'app-secret',
[29/Dec/2021:01:48:33] maxConnections: -1,
[29/Dec/2021:01:48:33] enableClientMessages: false,
[29/Dec/2021:01:48:33] enabled: true,
[29/Dec/2021:01:48:33] maxBackendEventsPerSecond: -1,
[29/Dec/2021:01:48:33] maxClientEventsPerSecond: -1,
[29/Dec/2021:01:48:33] maxReadRequestsPerSecond: -1,
[29/Dec/2021:01:48:33] webhooks: []
[29/Dec/2021:01:48:33] }
[29/Dec/2021:01:48:33] ]
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] dynamodb: { table: 'soketi-apps', region: 'ap-southeast-2', endpoint: '' },
[29/Dec/2021:01:48:33] mysql: { table: 'apps', version: '8.0', useMysql2: false },
[29/Dec/2021:01:48:33] postgres: { table: 'apps', version: '13.3' }
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] channelLimits: { maxNameLength: 200 },
[29/Dec/2021:01:48:33] cors: {
[29/Dec/2021:01:48:33] credentials: true,
[29/Dec/2021:01:48:33] origin: [ '*' ],
[29/Dec/2021:01:48:33] methods: [ 'GET', 'POST', 'PUT', 'DELETE', 'OPTIONS' ],
[29/Dec/2021:01:48:33] allowedHeaders: [
[29/Dec/2021:01:48:33] 'Origin',
[29/Dec/2021:01:48:33] 'Content-Type',
[29/Dec/2021:01:48:33] 'X-Auth-Token',
[29/Dec/2021:01:48:33] 'X-Requested-With',
[29/Dec/2021:01:48:33] 'Accept',
[29/Dec/2021:01:48:33] 'Authorization',
[29/Dec/2021:01:48:33] 'X-CSRF-TOKEN',
[29/Dec/2021:01:48:33] 'XSRF-TOKEN',
[29/Dec/2021:01:48:33] 'X-Socket-Id'
[29/Dec/2021:01:48:33] ]
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] database: {
[29/Dec/2021:01:48:33] mysql: {
[29/Dec/2021:01:48:33] host: '127.0.0.1',
[29/Dec/2021:01:48:33] port: 3306,
[29/Dec/2021:01:48:33] user: 'root',
[29/Dec/2021:01:48:33] password: 'password',
[29/Dec/2021:01:48:33] database: 'main'
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] postgres: {
[29/Dec/2021:01:48:33] host: '127.0.0.1',
[29/Dec/2021:01:48:33] port: 5432,
[29/Dec/2021:01:48:33] user: 'postgres',
[29/Dec/2021:01:48:33] password: 'password',
[29/Dec/2021:01:48:33] database: 'main'
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] redis: {
[29/Dec/2021:01:48:33] host: '127.0.0.1',
[29/Dec/2021:01:48:33] port: 6379,
[29/Dec/2021:01:48:33] db: 0,
[29/Dec/2021:01:48:33] username: null,
[29/Dec/2021:01:48:33] password: null,
[29/Dec/2021:01:48:33] keyPrefix: '',
[29/Dec/2021:01:48:33] sentinels: null,
[29/Dec/2021:01:48:33] sentinelPassword: null,
[29/Dec/2021:01:48:33] name: 'mymaster'
[29/Dec/2021:01:48:33] }
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] databasePooling: { enabled: false, min: 0, max: 7 },
[29/Dec/2021:01:48:33] debug: 1,
[29/Dec/2021:01:48:33] eventLimits: { maxChannelsAtOnce: 100, maxNameLength: 200, maxPayloadInKb: 100 },
[29/Dec/2021:01:48:33] httpApi: { requestLimitInMb: 100 },
[29/Dec/2021:01:48:33] instance: { process_id: 1 },
[29/Dec/2021:01:48:33] metrics: {
[29/Dec/2021:01:48:33] enabled: false,
[29/Dec/2021:01:48:33] driver: 'prometheus',
[29/Dec/2021:01:48:33] prometheus: { prefix: 'soketi_' },
[29/Dec/2021:01:48:33] port: 9601
[29/Dec/2021:01:48:33] },
[29/Dec/2021:01:48:33] port: 6001,
[29/Dec/2021:01:48:33] pathPrefix: '',
[29/Dec/2021:01:48:33] presence: { maxMembersPerChannel: 100, maxMemberSizeInKb: 2 },
[29/Dec/2021:01:48:33] queue: { driver: 'sync', redis: { concurrency: 1 } },
[29/Dec/2021:01:48:33] rateLimiter: { driver: 'local' },
[29/Dec/2021:01:48:33] ssl: { certPath: '', keyPath: '', passphrase: '' }
[29/Dec/2021:01:48:33] }
[29/Dec/2021:01:48:33] 📡 soketi initialization....
[29/Dec/2021:01:48:33] ⚡ Initializing the HTTP API & Websockets Server...
[29/Dec/2021:01:48:33] ⚡ Initializing the Websocket listeners and channels...
[29/Dec/2021:01:48:33] ⚡ Initializing the HTTP webserver...
[29/Dec/2021:01:48:33] 🕵️‍♂️ Initiating metrics endpoints...
[29/Dec/2021:01:48:33] 🎉 Server is up and running!
[29/Dec/2021:01:48:33] 📡 The Websockets server is available at 127.0.0.1:6001
[29/Dec/2021:01:48:33] 🔗 The HTTP API server is available at http://127.0.0.1:6001
[29/Dec/2021:01:48:33] 🎊 The /usage endpoint is available on port 9601.
[29/Dec/2021:01:49:28] [deployment:4] Reached a steady state
[29/Dec/2021:01:50:07] 👨‍🔬 New connection:
[29/Dec/2021:01:50:07] {
[29/Dec/2021:01:50:07] ws: uWS.WebSocket {
[29/Dec/2021:01:50:07] ip: '172.26.15.238',
[29/Dec/2021:01:50:07] ip2: '',
[29/Dec/2021:01:50:07] appKey: '<key>'
[29/Dec/2021:01:50:07] }
[29/Dec/2021:01:50:07] }
[29/Dec/2021:01:50:17] ❌ Connection closed:
[29/Dec/2021:01:50:17] {
[29/Dec/2021:01:50:17] ws: uWS.WebSocket {
[29/Dec/2021:01:50:17] ip: '172.26.15.238',
[29/Dec/2021:01:50:17] ip2: '',
[29/Dec/2021:01:50:17] appKey: '<key>',
[29/Dec/2021:01:50:17] sendJson: [Function (anonymous)],
[29/Dec/2021:01:50:17] id: '7254798529.1462499284',
[29/Dec/2021:01:50:17] subscribedChannels: Set(0) {},
[29/Dec/2021:01:50:17] presence: Map(0) {}
[29/Dec/2021:01:50:17] },
[29/Dec/2021:01:50:17] code: 1005,
[29/Dec/2021:01:50:17] message: ArrayBuffer { [Uint8Contents]: <>, byteLength: 0 }
[29/Dec/2021:01:50:17] }
[29/Dec/2021:01:50:18] 👨‍🔬 New connection:
[29/Dec/2021:01:50:18] {
[29/Dec/2021:01:50:18] ws: uWS.WebSocket {
[29/Dec/2021:01:50:18] ip: '172.26.15.238',
[29/Dec/2021:01:50:18] ip2: '',
[29/Dec/2021:01:50:18] appKey: '<key>'
[29/Dec/2021:01:50:18] }
[29/Dec/2021:01:50:18] }
[29/Dec/2021:01:50:36] /app/dist/ws-handler.js:26
[29/Dec/2021:01:50:36] if (ws.send(JSON.stringify(data))) {
[29/Dec/2021:01:50:36] ^
[29/Dec/2021:01:50:36] Error: Invalid access of closed uWS.WebSocket/SSLWebSocket.
[29/Dec/2021:01:50:36] at uWS.WebSocket.ws.sendJson (/app/dist/ws-handler.js:26:20)
[29/Dec/2021:01:50:36] at /app/dist/ws-handler.js:52:20
[29/Dec/2021:01:50:36] at processTicksAndRejections (node:internal/process/task_queues:96:5)
[29/Dec/2021:01:50:42] 🚫 New users cannot connect to this instance anymore. Preparing for signaling...
[29/Dec/2021:01:50:42] ⚡ The server is closing and signaling the existing connections to terminate.
[29/Dec/2021:01:50:42] ⚡ All sockets were closed. Now closing the server.

[bug] Forward slashes are not escaped

It looks like forward slashes are not escaped when passing an array via the broadcastWith function.

The following works:

public function broadcastWith()
{
    return ['a' => 'abc']; // client receives this correctly
}

Received payload: {"event":"updated","channel":"resources","data":"{\"a\":\"abc\"}"}

while this does not:

public function broadcastWith()
{
    return ['a' => 'abc/d']; // does not even send it to the client
}

No payload received.

Archived echo-server escapes it correctly:
42/echo-key,["updated","resources","{\"a\":\"abc\\/d\"}"]

Laravel and WebSocket (pWS) server return no errors.
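
For reference, both payloads decode to the same value; "\/" is simply an optional JSON escape that PHP's json_encode emits by default and that JSON.stringify does not produce. A quick Node check:

// Both forms are valid JSON and parse to the same value.
const escaped = '{"a":"abc\\/d"}';   // what the archived echo-server sent
const unescaped = '{"a":"abc/d"}';   // what JSON.stringify produces

console.log(JSON.stringify({ a: 'abc/d' }));                      // {"a":"abc/d"}
console.log(JSON.parse(escaped).a === JSON.parse(unescaped).a);   // true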

SSL Server immediately stopping

I'm trying to run this on Kubernetes using a cert generated by mkcert. The server initializes but then immediately stops, and the pod falls into a crash loop. The server works fine with the same settings if I omit the SSL env vars. Does the server not work with a self-signed cert, or is there another option to debug why it is stopping?

Cert generated using mkcert {Ingres...}.elb.amazonaws.com localhost 127.0.0.1 ::1

The debug console displays:

{                                                                                                                                                          
   adapter: { driver: 'local', redis: { prefix: '' } },                                                                                                     
   appManager: {                                                                                                                                            
     driver: 'array',                                                                                                                                       
     array: {                                                                                                                                               
       apps: [                                                                                                                                              
         {                                                                                                                                                  
           id: 'app-id',                                                                                                                                    
           key: 'app-key',                                                                                                                                  
           secret: 'app-secret',                                                                                                                            
           maxConnections: -1,                                                                                                                              
           enableClientMessages: false,                                                                                                                     
           enabled: true,                                                                                                                                   
           maxBackendEventsPerSecond: -1,                                                                                                                   
           maxClientEventsPerSecond: -1,                                                                                                                    
           maxReadRequestsPerSecond: -1,                                                                                                                    
           webhooks: []                                                                                                                                     
         }                                                                                                                                                  
       ]                                                                                                                                                    
     },                                                                                                                                                     
     dynamodb: { table: 'apps', region: 'us-east-1', endpoint: '' },                                                                                        
     mysql: { table: 'apps', version: '8.0' },                                                                                                              
     postgres: { table: 'apps', version: '13.3' }                                                                                                           
   },                                                                                                                                                       
   channelLimits: { maxNameLength: 200 },                                                                                                                   
   cors: {                                                                                                                                                  
     credentials: true,                                                                                                                                     
     origin: [ '*' ],                                                                                                                                       
     methods: [ 'GET', 'POST', 'PUT', 'DELETE', 'OPTIONS' ],                                                                                                
     allowedHeaders: [                                                                                                                                      
       'Origin',                                                                                                                                            
       'Content-Type',                                                                                                                                      
       'X-Auth-Token',                                                                                                                                      
       'X-Requested-With',                                                                                                                                  
       'Accept',                                                                                                                                            
       'Authorization',                                                                                                                                     
       'X-CSRF-TOKEN',                                                                                                                                      
       'XSRF-TOKEN',                                                                                                                                        
       'X-Socket-Id'                                                                                                                                        
     ]                                                                                                                                                      
   },                                                                                                                                                       
   database: {                                                                                                                                              
     mysql: {                                                                                                                                               
       host: '127.0.0.1',                                                                                                                                   
       port: 3306,                                                                                                                                          
       user: 'root',                                                                                                                                        
       password: 'password',                                                                                                                                
       database: 'main'                                                                                                                                     
     },                                                                                                                                                     
     postgres: {                                                                                                                                            
       host: '127.0.0.1',                                                                                                                                   
       port: 5432,                                                                                                                                          
       user: 'postgres',                                                                                                                                    
       password: 'password',                                                                                                                                
       database: 'main'
},                                                                                                                                                     
     redis: {                                                                                                                                               
       host: '127.0.0.1',                                                                                                                                   
       port: 6379,                                                                                                                                          
       db: 0,                                                                                                                                               
       username: null,                                                                                                                                      
       password: null,                                                                                                                                      
       keyPrefix: '',                                                                                                                                       
       sentinels: null,                                                                                                                                     
       sentinelPassword: null,                                                                                                                              
       name: 'mymaster'                                                                                                                                     
     }                                                                                                                                                      
   },                                                                                                                                                       
   databasePooling: { enabled: false, min: 0, max: 7 },                                                                                                     
   debug: true,                                                                                                                                             
   eventLimits: { maxChannelsAtOnce: 100, maxNameLength: 200, maxPayloadInKb: 100 },                                                                        
   httpApi: { requestLimitInMb: 100 },                                                                                                                      
   instance: { node_id: null, process_id: 1, pod_id: null },                                                                                                
   metrics: {                                                                                                                                               
     enabled: false,                                                                                                                                        
     driver: 'prometheus',                                                                                                                                  
     prometheus: { prefix: 'pws_' }                                                                                                                         
   },                                                                                                                                                       
   port: 6001,                                                                                                                                              
   pathPrefix: '',                                                                                                                                          
   presence: { maxMembersPerChannel: 100, maxMemberSizeInKb: 2 },                                                                                           
   queue: { driver: 'sync', redis: { concurrency: 1 } },                                                                                                    
   rateLimiter: { driver: 'local' },                                                                                                                        
   ssl: {                                                                                                                                                   
     certPath: '/app/cert.pem',                                                                                                                             
     keyPath: '/app/key.pem',                                                                                                                               
     passphrase: ''                                                                                                                                         
   }                                                                                                                                                        
 }                                                                                                                                                          
 📡 pWS Server initialization started.
 ⚡ Initializing the HTTP API & Websockets Server...
 ⚡ Initializing the Websocket listeners and channels...
 ⚡ Initializing the HTTP webserver...
 🎉 Server is up and running!
 📡 The Websockets server is available at 127.0.0.1:6001
 🔗 The HTTP API server is available at http://127.0.0.1:6001
 🚫 New users cannot connect to this instance anymore. Preparing for signaling...
 ⚡ The server is closing and signaling the existing connections to terminate.
 ⚡ All sockets were closed. Now closing the server.

Network watcher Unable to subscribe to signal events

I'm able to get the pws container up and running without issue. When I try to enable the network watcher, the container throws the error below and never reaches a ready state.

Unable to subscribe to signal events. Make sure that the `pcntl` extension
is installed and that "pcntl_*" functions are not disabled by your php.ini's
"disable_functions" directive.

The deployment looks like:

# Source: pws/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pws
  labels:
    helm.sh/chart: pws-0.2.0
    app.kubernetes.io/name: pws
    app.kubernetes.io/instance: pws
    app.kubernetes.io/version: "0.8.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: pws
      app.kubernetes.io/instance: pws
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pws
        app.kubernetes.io/instance: pws
        pws.soketi.app/accepts-new-connections: "yes"
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: pws
      securityContext:
        {}

      containers:
        - name: network-watcher
          securityContext:
            {}
          image: "quay.io/soketi/network-watcher:4.2"
          imagePullPolicy: IfNotPresent
          env:
            - name: KUBE_CONNECTION
              value: cluster
            - name: SERVER_PORT
              value: "6001"
            - name: MEMORY_PERCENT
              value: "85"
            - name: CHECKING_INTERVAL
              value: "1"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 64Mi
        - name: pws
          securityContext:
            {}
          image: "quay.io/soketi/pws:latest-16-alpine"
          imagePullPolicy: IfNotPresent
          ports:
            - name: pws
              containerPort: 6001
              protocol: TCP
          command:
            - node
            - --max-old-space-size=256
            - --max_old_space_size=256
            - --optimize_for_size
            - --optimize-for-size
            - --trace-warnings
            - /app/bin/server.js
            - start
          envFrom:
            - configMapRef:
                name: pws
          livenessProbe:
            httpGet:
              path: /
              port: 6001
              httpHeaders:
              - name: X-Kube-Healthcheck
                value: "Yes"
            initialDelaySeconds: 5
            periodSeconds: 1
            failureThreshold: 1
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /
              port: 6001
              httpHeaders:
              - name: X-Kube-Healthcheck
                value: "Yes"
            initialDelaySeconds: 5
            periodSeconds: 1
            failureThreshold: 1
            successThreshold: 1
          resources:
            limits:
              cpu: 250m
              memory: 256Mi

Webhooks not being processed when using Redis?

So I am basically seeing what #66 is seeing... when using the Redis queue worker, webhooks are not being processed.

I have been pulling my hair out all night trying to figure this out, adding console.log everywhere I can think of... it looks like the queue workers are started and events are queued in Redis, but they are not being processed for some reason... they just build up in Redis (I'm checking with a Redis client) and are never sent; the queue callback is never executed.

If you have any clue what I can try or how I can debug that would be awesome!

(switching queue driver to sync works like a charm, so config is good)

cURL error 60: SSL certificate: unable to get local issuer certificate

This issue pops up after upgrading the Pusher package; downgrading solves it:

$ composer install
Downgrading pusher/pusher-php-server (7.0.1 => 5.0.3): Extracting archive

I'm using Soketi 0.20.0 on my LAN for testing, meaning the certificates are not signed by any issuer.

This is a workaround I'm using on Pusher 5.0.3 to get around the self-signed issue (not recommended in production):

// config/broadcasting.php
'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY', 'app-key'),
            'secret' => env('PUSHER_APP_SECRET', 'app-secret'),
            'app_id' => env('PUSHER_APP_ID', 'app-id'),
            'options' => [
                'host' => env('PUSHER_HOST', '127.0.0.1'),
                'port' => env('PUSHER_PORT', 6001),
                'scheme' => env('PUSHER_SCHEME', 'http'),
                'encrypted' => true,
                'useTLS' => 'https' === env('PUSHER_SCHEME'),
                'curl_options' => [
                    CURLOPT_SSL_VERIFYHOST => 0,
                    CURLOPT_SSL_VERIFYPEER => 0, // This doesn't seem to work anymore after upgrading?
                ],
            ],
        ],

Do you have any tips? :)

Thanks!

[improvement] Prometheus instrument should drop "infras related tags"

In
https://github.com/soketi/pws/blob/5f3979ebd305d4a54cc5463b9f1250906bb79167/src/metrics/prometheus-metrics-driver.ts#L7

interface PrometheusMetrics {
    connectedSockets?: prom.Gauge<'app_id'|'node_id'|'pod_id'>;
    newConnectionsTotal?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    newDisconnectionsTotal?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    socketBytesReceived?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    socketBytesTransmitted?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    httpBytesReceived?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    httpBytesTransmitted?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
    httpCallsReceived?: prom.Counter<'app_id'|'node_id'|'pod_id'>;
}

The app's instruments should not carry infrastructure-related tags; leave those to the discovery & relabel phase.

connectedSockets?: prom.Gauge<'app_id'>; will be enough.
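
A trimmed-down version of that interface, along the lines of the suggestion, might look like this (a sketch of the proposal, not the current source):

import * as prom from 'prom-client';

// Only the app_id label stays on the instruments themselves; node/pod
// identity would be attached at scrape time via discovery and relabeling.
interface PrometheusMetrics {
    connectedSockets?: prom.Gauge<'app_id'>;
    newConnectionsTotal?: prom.Counter<'app_id'>;
    newDisconnectionsTotal?: prom.Counter<'app_id'>;
    socketBytesReceived?: prom.Counter<'app_id'>;
    socketBytesTransmitted?: prom.Counter<'app_id'>;
    httpBytesReceived?: prom.Counter<'app_id'>;
    httpBytesTransmitted?: prom.Counter<'app_id'>;
    httpCallsReceived?: prom.Counter<'app_id'>;
}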

Presence / Private channel events are not forwarded to channel subscribers

I'm using a basic setup for local development with Laravel as the backend. Broadcasting on regular public channels works fine, but when I try to use private or presence channels, the events don't seem to be forwarded to the subscribers. When I replace the credentials with a real Pusher app, everything works fine; this, together with public channels working, verifies that it's not a misconfigured app issue. I also have DEBUG enabled in my pWS server, but it's not logging connections or events, which would really help with debugging.

My Laravel pusher config looks like this (default values are used):

        'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY', 'dev'),
            'secret' => env('PUSHER_APP_SECRET', 'dev'),
            'app_id' => env('PUSHER_APP_ID', '1'),
            'options' => [
                'cluster' => env('PUSHER_APP_CLUSTER', 'eu'),
                'host' => '127.0.0.1',
                'port' => 6001,
                'scheme' => 'http'
            ],
        ],

My client code looks like this:

        this.#socket = new Echo({
            broadcaster: 'pusher',
            cluster: 'eu',
            key: 'dev',
            wsHost: '127.0.0.1',
            wsPort: 6001,
            forceTLS: false,
            auth: {
                headers: {
                    'X-GPCHAT-UUID': this.#uuid,
                    'X-GPCHAT-NAME': this.#userName,
                }
            }
        });

        this.#socket.channel('test-channel').listen('TestEvent', console.log) // This works

        this.#socket
            .join(`App.Models.Chat.${this.#chatId}`) // This works
            .here(participants => { // This works
                console.log(participants);
            })
            .joining(participant => { // This works
                console.log(participant);
            })
            .leaving(participant => { // This works
                console.log(participant);
            })
            .listen('MessageSent', (e) => { // This is never called
                console.log(e)
            })

I also tried different variations of the listeners:
MessageSent, .MessageSent, App\\Events\\MessageSent or .App\\Events\\MessageSent.

I also tried using the Pusher-js client library directly to listen for Events, no success:

        let client = new Pusher('dev', {
            wsHost: '127.0.0.1',
            wsPort: 6001,
            forceTLS: false,
            disableStats: true,
            enabledTransports: ['ws'],
            authEndpoint: '/broadcasting/auth',

            auth: {
                headers: {
                    'X-GPCHAT-UUID': this.#uuid,
                    'X-GPCHAT-NAME': this.#userName,
                }
            }
        });

        client.subscribe('presence-App.Models.Chat.3').bind('App\\Events\\MessageSent', (message) => { // Doesn't work
            console.log(message)
        });
        client.subscribe('test-channel').bind('App\\Events\\TestEvent', (message) => { // Works
            console.log(message)
        });

Keep in mind that in both examples the authentication for the presence channel is successful and transmitted to the pWS server:
(screenshot)

Any ideas how to correctly implement presence/private channels?

[request] Custom Redis client settings using environment variables

We are using a managed database by Digital Ocean and they require an SSL connection. AWS ElastiCache and many other managed solutions have the same requirement.

This is "easy" to enable by adding tls: {} to the options passed to new Redis({...}). However, I have a hard time figuring out how to do this nicely, so that we can set DB_REDIS_TLS=true and have it pass tls: {} or tls: undefined (which is the default) based on that value. Any help would be appreciated :)
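
For illustration only, a conditional along these lines would do it (assuming ioredis, which the stack traces below point at; DB_REDIS_TLS and the other variable names here are hypothetical):

import Redis from 'ioredis';

// Hypothetical env names for this sketch; only pass tls: {} when the flag
// is explicitly "true", otherwise leave it undefined (no TLS, the default).
const useTls = process.env.DB_REDIS_TLS === 'true';

const redis = new Redis({
    host: process.env.DB_REDIS_HOST ?? '127.0.0.1',
    port: Number(process.env.DB_REDIS_PORT ?? 6379),
    password: process.env.DB_REDIS_PASSWORD || undefined,
    tls: useTls ? {} : undefined,
    // A gentler reconnect policy than hammering the server in a tight loop.
    retryStrategy: (times: number) => Math.min(times * 500, 5000),
});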

I also noticed that when Redis has a connection problem it spins out of control (consuming ~1 core of CPU time) in an infinite loop barfing this when in debug mode:

Error: write EPIPE
    at afterWriteDispatched (node:internal/stream_base_commons:164:15)
    at writeGeneric (node:internal/stream_base_commons:155:3)
    at Socket._writeGeneric (node:net:795:11)
    at Socket._write (node:net:807:8)
    at writeOrBuffer (node:internal/streams/writable:389:12)
    at _write (node:internal/streams/writable:330:10)
    at Socket.Writable.write (node:internal/streams/writable:334:10)
    at Redis.sendCommand (/usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/index.js:672:33)
    at /usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/event_handler.js:256:26
    at Socket.<anonymous> (/usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/event_handler.js:46:39) {
  errno: -32,
  code: 'EPIPE',
  syscall: 'write'
}
Error: write ECONNRESET
    at afterWriteDispatched (node:internal/stream_base_commons:164:15)
    at writeGeneric (node:internal/stream_base_commons:155:3)
    at Socket._writeGeneric (node:net:795:11)
    at Socket._write (node:net:807:8)
    at writeOrBuffer (node:internal/streams/writable:389:12)
    at _write (node:internal/streams/writable:330:10)
    at Socket.Writable.write (node:internal/streams/writable:334:10)
    at Redis.sendCommand (/usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/index.js:672:33)
    at /usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/event_handler.js:256:26
    at Socket.<anonymous> (/usr/lib/node_modules/@soketi/soketi/node_modules/ioredis/built/redis/event_handler.js:46:39) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'write'
}

Maybe we should have a slightly more conservative retry strategy.

installing pws - cli not working

Running npm install -g @soketi/pws or yarn global add @soketi/pws does not install the CLI.
yarn global add returns this error:

[3/4] Linking dependencies...
[4/4] Building fresh packages...
[-/5] ⠈ waiting...
[-/5] ⠈ waiting...
[-/5] ⠈ waiting...
[-/5] ⠈ waiting...
error /home/catalinb/.config/yarn/global/node_modules/msgpack: Command failed.
Exit code: 1
Command: node-gyp rebuild
Arguments: 
Directory: /home/catalinb/.config/yarn/global/node_modules/msgpack
Output:
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | linux | x64
gyp info find Python using Python version 3.9.7 found at "/usr/bin/python3"
gyp info spawn /usr/bin/python3
gyp info spawn args [
gyp info spawn args   '/usr/lib/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args   'binding.gyp',
gyp info spawn args   '-f',
gyp info spawn args   'make',
gyp info spawn args   '-I',
gyp info spawn args   '/home/catalinb/.config/yarn/global/node_modules/msgpack/build/config.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/usr/lib/node_modules/node-gyp/addon.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/home/catalinb/.cache/node-gyp/16.10.0/include/node/common.gypi',
gyp info spawn args   '-Dlibrary=shared_library',
gyp info spawn args   '-Dvisibility=default',
gyp info spawn args   '-Dnode_root_dir=/home/catalinb/.cache/node-gyp/16.10.0',
gyp info spawn args   '-Dnode_gyp_dir=/usr/lib/node_modules/node-gyp',
gyp info spawn args   '-Dnode_lib_file=/home/catalinb/.cache/node-gyp/16.10.0/<(target_arch)/node.lib',
gyp info spawn args   '-Dmodule_root_dir=/home/catalinb/.config/yarn/global/node_modules/msgpack',
gyp info spawn args   '-Dnode_engine=v8',
gyp info spawn args   '--depth=.',
gyp info spawn args   '--no-parallel',
gyp info spawn args   '--generator-output',
gyp info spawn args   'build',
gyp info spawn args   '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
make: Entering directory '/home/catalinb/.config/yarn/global/node_modules/msgpack/build'
  CC(target) Release/obj.target/libmsgpack/deps/msgpack/objectc.o
  CC(target) Release/obj.target/libmsgpack/deps/msgpack/unpack.o
In file included from ../deps/msgpack/unpack.c:276:
../deps/msgpack/msgpack/unpack_template.h: In function 'template_execute':
../deps/msgpack/msgpack/unpack_template.h:258:17: warning: this statement may fall through [-Wimplicit-fallthrough=]
  258 |                 ++p;
      |                 ^~~
../deps/msgpack/msgpack/unpack_template.h:260:13: note: here
  260 |             default:
      |             ^~~~~~~
  CC(target) Release/obj.target/libmsgpack/deps/msgpack/vrefbuffer.o
  CC(target) Release/obj.target/libmsgpack/deps/msgpack/zone.o
  CC(target) Release/obj.target/libmsgpack/deps/msgpack/version.o
  AR(target) Release/obj.target/deps/msgpack/msgpack.a
  COPY Release/msgpack.a
  CXX(target) Release/obj.target/msgpackBinding/src/msgpack.o
In file included from /home/catalinb/.cache/node-gyp/16.10.0/include/node/v8.h:30,
                 from ../src/msgpack.cc:1:
/home/catalinb/.cache/node-gyp/16.10.0/include/node/v8-internal.h: In function 'void v8::internal::PerformCastCheck(T*)':
/home/catalinb/.cache/node-gyp/16.10.0/include/node/v8-internal.h:489:38: error: 'remove_cv_t' is not a member of 'std'; did you mean 'remove_cv'?
  489 |             !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data);
      |                                      ^~~~~~~~~~~
      |                                      remove_cv
/home/catalinb/.cache/node-gyp/16.10.0/include/node/v8-internal.h:489:38: error: โ€˜remove_cv_tโ€™ is not a member of โ€˜stdโ€™; did you mean โ€˜remove_cvโ€™?
  489 |             !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data);
      |                                      ^~~~~~~~~~~
      |                                      remove_cv
/home/catalinb/.cache/node-gyp/16.10.0/include/node/v8-internal.h:489:50: error: template argument 2 is invalid
  489 |             !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data);
      |                                                  ^
/home/catalinb/.cache/node-gyp/16.10.0/include/node/v8-internal.h:489:63: error: โ€˜::Performโ€™ has not been declared
  489 |             !std::is_same<Data, std::remove_cv_t<T>>::value>::Perform(data);
      |                                                               ^~~~~~~
In file included from ../src/msgpack.cc:2:
../src/msgpack.cc: At global scope:
/home/catalinb/.cache/node-gyp/16.10.0/include/node/node.h:821:7: warning: cast between incompatible function types from โ€˜void (*)(Nan::ADDON_REGISTER_FUNCTION_ARGS_TYPE)โ€™ {aka โ€˜void (*)(v8::Local<v8::Object>)โ€™} to โ€˜node::addon_register_funcโ€™ {aka โ€˜void (*)(v8::Local<v8::Object>, v8::Local<v8::Value>, void*)โ€™} [-Wcast-function-type]
  821 |       (node::addon_register_func) (regfunc),                          \
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/catalinb/.cache/node-gyp/16.10.0/include/node/node.h:855:3: note: in expansion of macro โ€˜NODE_MODULE_Xโ€™
  855 |   NODE_MODULE_X(modname, regfunc, NULL, 0)  // NOLINT (readability/null_usage)
      |   ^~~~~~~~~~~~~
../src/msgpack.cc:351:1: note: in expansion of macro โ€˜NODE_MODULEโ€™
  351 | NODE_MODULE(msgpackBinding, init);
      | ^~~~~~~~~~~
make: *** [msgpackBinding.target.mk:122: Release/obj.target/msgpackBinding/src/msgpack.o] Error 1
make: Leaving directory '/home/catalinb/.config/yarn/global/node_modules/msgpack/build'
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/lib/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (node:events:390:28)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
gyp ERR! System Linux 5.14.9-zen2-1-zen
gyp ERR! command "/usr/bin/node" "/usr/lib/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/catalinb/.config/yarn/global/node_modules/msgpack

npm install -g returns:

code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/lib/node_modules/@soketi
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, mkdir '/usr/lib/node_modules/@soketi'
npm ERR!  [Error: EACCES: permission denied, mkdir '/usr/lib/node_modules/@soketi'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/usr/lib/node_modules/@soketi'
npm ERR! }
npm ERR! 
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR! 
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/catalinb/.npm/_logs/2021-10-12T12_53_02_175Z-debug.log

sudo yarn global add returns the same error as the command without sudo.

sudo npm install -g asks for the SSH key passphrase; after entering the passphrase, it returns this error:

โธจโ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โ ‚โธฉ โ  idealTree:lib: sill idealTree buildDeps
npm ERR! code 128
npm ERR! An unknown git error occurred
npm ERR! command git --no-replace-objects clone -b v20.0.0 ssh://git@github.com/uNetworking/uWebSockets.js.git /root/.npm/_cacache/tmp/git-cloneEsVguT --recurse-submodules --depth=1
npm ERR! fatal: could not create leading directories of '/root/.npm/_cacache/tmp/git-cloneEsVguT': Permission denied

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-10-12T12_56_13_982Z-debug.log
โžœ  graphql-broadcast-test 

Socket auth with custom data: unauthorized

When passing custom data to socketAuth, the channel subscription is always unauthorized.

Sample:

<?php // php -S localhost:8080 index.php

use Psr\Http\Message\ResponseInterface as Response;
use Psr\Http\Message\ServerRequestInterface as Request;
use Slim\Factory\AppFactory;
use Pusher\Pusher;

require __DIR__ . '/vendor/autoload.php';

$app = AppFactory::create();
$app->add(function ($request, $handler) {
    $response = $handler->handle($request);
    return $response
        ->withHeader('Access-Control-Allow-Origin', '*')
        ->withHeader('Access-Control-Allow-Headers', '*')
        ->withHeader('Access-Control-Allow-Methods', '*');
});

$app->post('/broadcasting/auth', function (Request $request, Response $response) {
    $body = $request->getParsedBody();
    $pusher = new Pusher('app-key', 'app-secret', 'app-id');
    $socketAuth = $pusher->socketAuth($body['channel_name'], $body['socket_id'], 'my-custom-data');
    $response->getBody()->write($socketAuth);
    return $response->withHeader('Content-Type', 'application/json');
});

$app->run();
<script src="https://js.pusher.com/7.0/pusher-with-encryption.min.js"></script>
<script>
var pusher = new Pusher('app-key', {
    authEndpoint: 'http://localhost:8080/broadcasting/auth',
    wsHost: '127.0.0.1',
    wsPort: 6001,
    forceTLS: false,
    encrypted: true,
    disableStats: true,
    enabledTransports: ['ws', 'wss'],
})

pusher.connection.bind('connected', function() {
    console.log('connected')
})

var channel = pusher.subscribe(`private-mychannel`)
channel.bind('pusher:subscription_succeeded', function() {
    console.log('subscription_succeeded')
})

channel.bind('pusher:subscription_error', function(data) {
    console.log('subscription_error', data)
})
</script>

Result:

> connected
> subscription_error { type: "AuthError", error: "The connection is unauthorized.", status: 401 }

When the third parameter is removed from socketAuth, the channel subscription is authorized.

// ...

$socketAuth = $pusher->socketAuth($body['channel_name'], $body['socket_id']);

// ...
> connected
> subscription_succeeded
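For context, per the Pusher Channels auth scheme, a private-channel signature covers socket_id:channel_name, and only presence channels append a :channel_data segment. A small sketch (Node crypto, placeholder values) of the two digests, which would explain the 401 when custom data is signed for a private channel but the server verifies without it:

import { createHmac } from 'crypto';

// Private channels sign "socket_id:channel"; presence channels sign
// "socket_id:channel:channel_data" (per the Pusher Channels auth spec).
function authSignature(secret: string, socketId: string, channel: string, channelData?: string): string {
    const stringToSign = channelData
        ? `${socketId}:${channel}:${channelData}`
        : `${socketId}:${channel}`;
    return createHmac('sha256', secret).update(stringToSign).digest('hex');
}

// If the auth endpoint signs with custom data but the private-channel check is
// performed over "socket_id:channel" only, the digests differ and the
// subscription is rejected as unauthorized.
const withData = authSignature('app-secret', '123.456', 'private-mychannel', 'my-custom-data');
const withoutData = authSignature('app-secret', '123.456', 'private-mychannel');
console.log(withData === withoutData); // false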

Webhooks not loading from MySQL driver (maybe all SQL drivers)

I have this row in my MySQL database:

  • name: [redacted]
  • id: [redacted]
  • key: [redacted]
  • secret: [redacted]
  • max_connections: -1
  • enable_client_messages: 0
  • enabled: 1
  • max_backend_events_per_sec: -1
  • max_client_events_per_sec: -1
  • max_read_req_per_sec: -1
  • webhooks: [{"url": "[redacted]", "event_types": ["channel_vacated"]}]

But when running DEBUG=1 pws-server start I see the following:

๐Ÿ‘จโ€๐Ÿ”ฌ New connection:
{
  ws: uWS.WebSocket {
    ip: '127.0.0.1',
    ip2: '',
    appKey: '[redacted]'
  }
}
โœˆ Sent message to client:
{
  ws: uWS.WebSocket {
    ip: '127.0.0.1',
    ip2: '',
    appKey: '[redacted]',
    sendJson: [Function (anonymous)],
    id: '[redacted]',
    subscribedChannels: Set(0) {},
    presence: Map(0) {},
    app: App {
      id: '[redacted]',
      key: '[redacted]',
      secret: '[redacted]',
      maxConnections: -1,
      enableClientMessages: 0,
      enabled: 1,
      maxBackendEventsPerSecond: -1,
      maxClientEventsPerSecond: -1,
      maxReadRequestsPerSecond: -1,
      webhooks: []
    },
    timeout: Timeout {
      _idleTimeout: 120000,
      _idlePrev: [TimersList],
      _idleNext: [TimersList],
      _idleStart: 7392,
      _onTimeout: [Function (anonymous)],
      _timerArgs: undefined,
      _repeat: null,
      _destroyed: false,
      [Symbol(refed)]: true,
      [Symbol(kHasPrimitive)]: false,
      [Symbol(asyncId)]: 1885,
      [Symbol(triggerId)]: 0
    }
  },
  data: {
    event: 'pusher:connection_established',
    data: '{"socket_id":"[redacted]","activity_timeout":30}'
  }
}

All data is there except that webhooks is empty, so they don't seem to be loaded from the SQL database.

Webhooks are also not delivered, so this doesn't look like broken debug output.

I tried restarting the server and clearing Redis before starting the server, all to no avail.

Possibly #66 experienced this too.
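One possible culprit, purely an assumption and not confirmed by this report: some MySQL drivers hand JSON columns back as strings, in which case the webhooks value would need to be parsed before use. A defensive sketch (the Webhook shape is illustrative):

// Accept either an already-parsed array or a JSON string from the SQL driver.
type Webhook = { url: string; event_types: string[] };

function parseWebhooks(value: unknown): Webhook[] {
    if (Array.isArray(value)) {
        return value as Webhook[];
    }
    if (typeof value === 'string') {
        try {
            const parsed = JSON.parse(value);
            return Array.isArray(parsed) ? (parsed as Webhook[]) : [];
        } catch {
            return []; // Malformed JSON in the column; fall back to no webhooks.
        }
    }
    return [];
}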

[bug] `client.moveToActive is not a function` when using the Redis queue driver

My .env:

APP_MANAGER_DRIVER=mysql

DB_MYSQL_USERNAME=soketi
DB_MYSQL_PASSWORD=soketi
DB_MYSQL_DATABASE=soketi

SOKETI_DEBUG=1

QUEUE_DRIVER=redis

soketi version: @soketi/[email protected]
redis version: 6.2.1
node version: v16.13.1

I run soketi start in the directory containing the above .env, and this is my output (it seems to be stuck in an infinite loop):
[screenshot of the console output]

Everything works fine when the Redis queue driver is disabled. Any help with this?

Invalid access of closed uWS.WebSocket/SSLWebSocket.

We have had some issues with the server throwing these exceptions in the CLI it is running from; while these errors were being thrown, clients were not receiving messages.

An affected client could subscribe to the channel but would not receive any messages, while other clients on the same channel would. If the same client joined another channel, it would receive messages there. Which clients received which messages appeared completely random. We have since restarted the pws-server and can no longer replicate the errors.

Any help would be much appreciated.

(node:3267) UnhandledPromiseRejectionWarning: Error: Invalid access of closed uWS.WebSocket/SSLWebSocket.
at uWS.WebSocket.ws.sendJson (/usr/lib/node_modules/@soketi/pws/dist/ws-handler.js:22:20)
at /usr/lib/node_modules/@soketi/pws/dist/ws-handler.js:44:20
at runMicrotasks ()
at processTicksAndRejections (internal/process/task_queues.js:95:5)
(Use node --trace-warnings ... to show where the warning was created)
(node:3267) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:3267) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
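Not soketi's actual implementation, but a sketch of the kind of guard that avoids the unhandled rejection: uWebSockets.js throws if a socket is touched after it closes, so sends can be wrapped and the closed state tracked (the type and function names here are illustrative):

// Mark the socket as closed in the close handler, then make sends safe.
type TrackedWebSocket = {
    closed?: boolean;
    send: (data: string) => void;
};

function safeSendJson(ws: TrackedWebSocket, payload: unknown): boolean {
    if (ws.closed) {
        return false; // The uWS socket is gone; don't touch it.
    }
    try {
        ws.send(JSON.stringify(payload));
        return true;
    } catch {
        // uWS throws "Invalid access of closed uWS.WebSocket" if the socket closed
        // between the check and the send; swallow it so no promise rejects unhandled.
        return false;
    }
}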

Install Error

I'm getting this message when I try to install the package:

npm install -g @soketi/soketi
npm ERR! code 128
npm ERR! An unknown git error occurred
npm ERR! command git --no-replace-objects ls-remote ssh://git@github.com/uNetworking/uWebSockets.js.git
npm ERR! command-line line 0: unsupported option "accept-new".
npm ERR! fatal: Could not read from remote repository.
npm ERR! 
npm ERR! Please make sure you have the correct access rights
npm ERR! and the repository exists.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-12-13T18_15_10_616Z-debug-0.log

Can someone help me?

[improvement] Secure or document which endpoints to not publicly serve

Right now /usage & /metrics (if enabled) are publicly available if you run the server.

Although they don't seem like an issue to open up, it would probably be better to keep these private or at least internal.

There are a few routes that come to mind:

  • Run those endpoints on another "internal" port and keep 6001 as the "public" server
  • Add some kind of simple authentication token that needs to be passed as header/query to the endpoint before they respond
  • Document that those endpoints exist and that you might not want to expose them publicly

I'm not 100% sure what the best route would be, but I figured I'd put my thoughts here to start the discussion.

How to add a config file with the supervisor?

I want to pass a config file via the --config argument when running under Supervisor. I created "soketi.conf" and defined the path with the --config argument, then executed "sudo supervisorctl start soketi:*", but it did not work and threw the error: soketi:soketi_00: ERROR (no such file)

[program:soketi]
process_name=%(program_name)s_%(process_num)02d
command=soketi start --config="/home/ubuntu/soketi/config.json"
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=ubuntu
numprocs=1
redirect_stderr=true
stdout_logfile=/var/log/soketi-supervisor.log
stopwaitsecs=60
stopsignal=sigint
minfds=10240

[bug] Webhook payload's format is not matching with Pusher Protocol specification

Looking at the docs: https://pusher.com/docs/channels/server_api/webhooks/

This is what a payload (without batching) is supposed to look like:

{
    "time_ms": 1327078148132,
    "events": [
        {
            "name": "event_name",
            "some": "data"
        }
    ]
}

However (using webhook.site) I am receiving:

{
    "name": "channel_vacated",
    "channel": "name-of-channel",
    "time_ms": 1636573105985
}

So it seems the payload is flattened instead of being wrapped in the events array as documented.
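Until the format matches the spec, a receiver can normalize the flattened body itself; a small sketch (the types are illustrative, and this is a workaround on the receiving end rather than a fix in soketi):

// Wrap a flattened webhook body into the documented { time_ms, events: [...] } shape.
type PusherWebhookBody = { time_ms: number; events: Record<string, unknown>[] };

function normalizeWebhookBody(payload: Record<string, unknown>): PusherWebhookBody {
    if (Array.isArray(payload.events)) {
        // Already in the documented format.
        return payload as unknown as PusherWebhookBody;
    }
    const { time_ms, ...event } = payload;
    return { time_ms: Number(time_ms), events: [event] };
}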

[request] Add automated (prettier?) formatting

I noticed some inconsistencies in the formatting of the code in places and wondered whether introducing Prettier or ESLint (I believe it can format too) to enforce formatting might be a good idea, so that it can be automated and even added as a PR step to ensure all contributors write similarly formatted code.

This might be a bit invasive so I'm holding out on a PR for this since you might want to do it yourself so it's formatted to your standards instead of mine ๐Ÿ˜‰

[bug] Numeric app IDs cause communication issues.

If you use a numeric app_id, the server will accept it; however, the front-end application will not receive broadcasts.
A minimal reproducible setup looks like this:

test('check server can handle a numeric app id', done => {
        Utils.newServer({
            'appManager.array.apps.0.id': 40000
        }, (server: Server) => {
            let client = Utils.newClient();
            let backend = Utils.newBackend("40000");
            let channelName = Utils.randomChannelName();

            client.connection.bind('connected', () => {
                let channel = client.subscribe(channelName);

                channel.bind('greeting', e => {
                    expect(e.message).toBe('hello');
                    expect(e.weirdVariable).toBe('abc/d');
                    client.disconnect();
                    done();
                });

                channel.bind('pusher:subscription_succeeded', () => {
                    Utils.sendEventToChannel(backend, channelName, 'greeting', { message: 'hello', weirdVariable: 'abc/d' })
                        .catch(error => {
                            throw new Error(error);
                        });
                });
            });
        });
    });

We stumbled upon this while using soketi with Laravel Echo and Docker: if you configure the app ID in the Docker container as an integer, it causes this issue. We found it because we used the same app_id we had on pusher.com.

I've got a fix; I'll create a PR soon!
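For reference, the kind of coercion that usually resolves such a mismatch, assuming the lookup compares the app ID from the request (always a string) against the stored value (a number when configured as one); the names here are illustrative:

// Compare IDs as strings so numeric and string app IDs match the same app.
type AppConfig = { id: string | number; key: string };

function findAppById(apps: AppConfig[], requestAppId: string): AppConfig | undefined {
    return apps.find((app) => String(app.id) === String(requestAppId));
}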

Webhooks not working

Hi,
I don't understand how webhooks work.
I set these environment variables:

            DEBUG: '1'
            DEFAULT_APP_ENABLE_CLIENT_MESSAGES: 1
            DB_MYSQL_USERNAME: '${DB_USERNAME}'
            DB_MYSQL_PASSWORD: '${DB_PASSWORD}'
            DB_MYSQL_DATABASE: '${DB_DATABASE}'
            QUEUE_DRIVER: 'redis'
            ADAPTER_DRIVER: 'redis'
            RATE_LIMITER_DRIVER: 'redis'
            DB_POOLING_ENABLED: 'true'
            METRICS_ENABLED: 'true'
            DB_REDIS_HOST: 'redis'
            DB_REDIS_PORT: '6379'
            DEFAULT_APP_WEBHOOKS: '[{"url": "http://example-app.test/webhooks", "event_types": ["client_event"]}]'

Can you help me with webhooks on Laravel?

[feature] Webhooks debugging

I used Laravel Sail with this config:

pws:
        image: "quay.io/soketi/pws:latest-16-alpine"
        environment:
            DEBUG: "1"
            DEFAULT_APP_WEBHOOKS: '[{"url": "http://laravel.test/webhooks", "event_types": ["client_event"]}]'
        ports:
            - "${PWS_PORT:-6001}:6001"
        networks:
            - sail

But unfortunately it doesn't work - the request is not sent (or not received). I don't know how to debug this. Maybe we need a DEBUG_WEBHOOKS env variable?

pws_1           | {
pws_1           |   adapter: { driver: 'local', redis: { prefix: '' } },
pws_1           |   appManager: {
pws_1           |     driver: 'array',
pws_1           |     array: {
pws_1           |       apps: [
pws_1           |         {
pws_1           |           id: 'app-id',
pws_1           |           key: 'app-key',
pws_1           |           secret: 'app-secret',
pws_1           |           maxConnections: -1,
pws_1           |           enableClientMessages: false,
pws_1           |           enabled: true,
pws_1           |           maxBackendEventsPerSecond: -1,
pws_1           |           maxClientEventsPerSecond: -1,
pws_1           |           maxReadRequestsPerSecond: -1,
pws_1           |           webhooks: [
pws_1           |             {
pws_1           |               url: 'http://laravel.test/webhooks',
pws_1           |               event_types: [ 'client_event' ]
pws_1           |             }
pws_1           |           ]
pws_1           |         }
pws_1           |       ]
pws_1           |     },
pws_1           |     dynamodb: { table: 'apps', region: 'us-east-1', endpoint: '' },
pws_1           |     mysql: { table: 'apps', version: '8.0' },
pws_1           |     postgres: { table: 'apps', version: '13.3' }
pws_1           |   },
pws_1           |   channelLimits: { maxNameLength: 200 },
pws_1           |   cors: {
pws_1           |     credentials: true,
pws_1           |     origin: [ '*' ],
pws_1           |     methods: [ 'GET', 'POST', 'PUT', 'DELETE', 'OPTIONS' ],
pws_1           |     allowedHeaders: [
pws_1           |       'Origin',
pws_1           |       'Content-Type',
pws_1           |       'X-Auth-Token',
pws_1           |       'X-Requested-With',
pws_1           |       'Accept',
pws_1           |       'Authorization',
pws_1           |       'X-CSRF-TOKEN',
pws_1           |       'XSRF-TOKEN',
pws_1           |       'X-Socket-Id'
pws_1           |     ]
pws_1           |   },
pws_1           |   database: {
pws_1           |     mysql: {
pws_1           |       host: '127.0.0.1',
pws_1           |       port: 3306,
pws_1           |       user: 'root',
pws_1           |       password: 'password',
pws_1           |       database: 'main'
pws_1           |     },
pws_1           |     postgres: {
pws_1           |       host: '127.0.0.1',
pws_1           |       port: 5432,
pws_1           |       user: 'postgres',
pws_1           |       password: 'password',
pws_1           |       database: 'main'
pws_1           |     },
pws_1           |     redis: {
pws_1           |       host: '127.0.0.1',
pws_1           |       port: 6379,
pws_1           |       db: 0,
pws_1           |       username: null,
pws_1           |       password: null,
pws_1           |       keyPrefix: '',
pws_1           |       sentinels: null,
pws_1           |       sentinelPassword: null,
pws_1           |       name: 'mymaster'
pws_1           |     }
pws_1           |   },
pws_1           |   databasePooling: { enabled: false, min: 0, max: 7 },
pws_1           |   debug: 1,
pws_1           |   eventLimits: { maxChannelsAtOnce: 100, maxNameLength: 200, maxPayloadInKb: 100 },
pws_1           |   httpApi: { requestLimitInMb: 100 },
pws_1           |   instance: { process_id: 1 },
pws_1           |   metrics: {
pws_1           |     enabled: false,
pws_1           |     driver: 'prometheus',
pws_1           |     prometheus: { prefix: 'pws_' }
pws_1           |   },
pws_1           |   port: 6001,
pws_1           |   pathPrefix: '',
pws_1           |   presence: { maxMembersPerChannel: 100, maxMemberSizeInKb: 2 },
pws_1           |   queue: { driver: 'sync', redis: { concurrency: 1 } },
pws_1           |   rateLimiter: { driver: 'local' },
pws_1           |   ssl: { certPath: '', keyPath: '', passphrase: '' }
pws_1           | }
pws_1           | 
pws_1           | ๐Ÿ“ก pWS Server initialization started.
pws_1           | 
pws_1           | โšก Initializing the HTTP API & Websockets Server...
pws_1           | 
pws_1           | โšก Initializing the Websocket listeners and channels...
pws_1           | 
pws_1           | โšก Initializing the HTTP webserver...
pws_1           | 
pws_1           | ๐ŸŽ‰ Server is up and running!
pws_1           | 
pws_1           | ๐Ÿ“ก The Websockets server is available at 127.0.0.1:6001
pws_1           | 
pws_1           | ๐Ÿ”— The HTTP API server is available at http://127.0.0.1:6001
pws_1           | 

Host Config & Reverse Proxy

I've tried this package, but it seems I can't choose which interface to listen on when starting the service - for example, what if I want to listen on a private IP only?
I'd also like to know whether this package is better run with or without a reverse proxy like Nginx.

WebSocket connection failed - Laravel 8 + pusher-php-server 7.0

I installed soketi and changed my config to:

broadcasting.php >

 'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY'),
            'secret' => env('PUSHER_APP_SECRET'),
            'app_id' => env('PUSHER_APP_ID'),
            'options' => [
                'host' => env('PUSHER_HOST', '127.0.0.1'),
                'port' => env('PUSHER_PORT', 6001),
                'scheme' => env('PUSHER_SCHEME', 'http'),
                'encrypted' => true,
                'useTLS' => env('PUSHER_SCHEME') === 'https',
            ],

FrontEnd :

window.Echo = new Echo({
        broadcaster: 'pusher',
        key: process.env.MIX_PUSHER_APP_KEY,
        cluster: process.env.MIX_PUSHER_APP_CLUSTER,
        forceTLS: process.env.MIX_PUSHER_FORCE_TLS,
        encrypted: true,
        wsHost: process.env.MIX_PUSHER_HOST,
        wsPort: process.env.MIX_PUSHER_PORT,
        wssPort: process.env.MIX_PUSHER_PORT,
        disableStats: true,
        enabledTransports: ["ws", "wss"],
    });

.env =


PUSHER_APP_ID=app-id
PUSHER_APP_KEY=app-key
PUSHER_APP_SECRET=app-secret
PUSHER_APP_CLUSTER=ap2

PUSHER_HOST=127.0.0.1
PUSHER_PORT=6001
PUSHER_SCHEME=http

MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_HOST="${PUSHER_HOST}"
MIX_PUSHER_PORT="${PUSHER_PORT}"
MIX_PUSHER_FORCE_TLS=false
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
MIX_PUSHER_SERVICE="${PUSHER_SERVICE}"

and all of this runs successfully:

npm run watch ==== success

php artisan serve === success http://127.0.0.1:8000

soketi start ==== success http://127.0.0.1:6001/ => response "OK"

but in the browser console my app can't connect to the WebSocket

[screenshot attached: Screenshot 2022-01-11 004137]
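One thing worth checking (an assumption, not a confirmed diagnosis): Laravel Mix injects MIX_* values as strings, so forceTLS receives the string "false", which is truthy, and the client then attempts a TLS connection to a plain-HTTP soketi port. A sketch of the coercions that avoid this, mirroring the Echo snippet above:

// Coerce the string values Mix injects at build time into the types Echo expects.
window.Echo = new Echo({
    broadcaster: 'pusher',
    key: process.env.MIX_PUSHER_APP_KEY,
    wsHost: process.env.MIX_PUSHER_HOST,
    wsPort: Number(process.env.MIX_PUSHER_PORT || 6001),
    wssPort: Number(process.env.MIX_PUSHER_PORT || 6001),
    forceTLS: process.env.MIX_PUSHER_FORCE_TLS === 'true', // the string "false" would be truthy
    disableStats: true,
    enabledTransports: ['ws', 'wss'],
});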

Subscribe and unsubscribing events

Hey there,

Is there a way for the server to send subscribe and unsubscribe events for the same user only once?
So if the same user subscribes to a private channel from 3 devices, it sends the subscribe event only for the first device connected, and then sends the unsubscribe event only once, when the last client disconnects?

Thanks

Joe

Is there a way to include the CA Bundle with the SSL certs?

When setting up SSL on our servers, we usually have to use the .crt file, the ca-bundle, and the .key. Following the documentation at https://rennokki.gitbook.io/soketi-docs/getting-started/ssl-configuration, I'm able to set the certificate and key and the server is running. We're using Laravel Echo and that can connect to the soketi server, but when using Laravel to broadcast an event through soketi, we get an error of: curl: (60) SSL certificate problem: unable to get local issuer certificate

I'm thinking we need to also provide the ca-bundle somehow. We also get the error when we curl the web socket server. For example, curl https://my.site.com:6001.

The Laravel broadcasting.php file is configured following the information here: https://rennokki.gitbook.io/soketi-docs/getting-started/backend-configuration/laravel-broadcasting. For the pusher/pusher-php-server package, we're using version 7.0.2.

I was wondering if there was a way to include the ca-bundle or if there was another suggestion on what we could do. Thanks in advance!

How to keep Soketi alive?

Thanks for creating this package; I'm going to try it as a replacement for Laravel WebSockets, which seems to be unmaintained. :)

I was browsing the documentation (which needs a fix for the GitHub repo link, by the way) and I was wondering what you think is the best solution to keep the (socket) process alive. I'm using Supervisor, but since this is an npm package, it may require something different?

Deploy on a CentOS server

When I run soketi start:

/usr/lib/node_modules/@soketi/soketi/node_modules/uWebSockets.js/uws.js:22
                throw new Error('This version of ยตWS is not compatible with your Node.js build:\n\n' + e.toString());
                ^

Error: This version of ยตWS is not compatible with your Node.js build:

Error: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /usr/lib/node_modules/@soketi/soketi/node_modules/uWebSockets.js/uws_linux_x64_102.node)
    at /usr/lib/node_modules/@soketi/soketi/node_modules/uWebSockets.js/uws.js:22:9
    at Object.<anonymous> (/usr/lib/node_modules/@soketi/soketi/node_modules/uWebSockets.js/uws.js:24:3)
    at Module._compile (node:internal/modules/cjs/loader:1097:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1149:10)
    at Module.load (node:internal/modules/cjs/loader:975:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Module.require (node:internal/modules/cjs/loader:999:19)
    at require (node:internal/modules/cjs/helpers:102:18)
    at Object.<anonymous> (/usr/lib/node_modules/@soketi/soketi/dist/server.js:17:13)
    at Module._compile (node:internal/modules/cjs/loader:1097:14)

Node.js v17.3.0

How to implement support for Redis Sentinel?

As I've written on Reddit already, it would be nice to have support for Redis Sentinel. But before we dig into details, a bit of background about what Redis Sentinel is:

Redis Sentinel provides high availability for Redis. In practical terms this means that using Sentinel you can create a Redis deployment that resists without human intervention certain kinds of failures.

Redis Sentinel also provides other collateral tasks such as monitoring, notifications and acts as a configuration provider for clients.

This is the full list of Sentinel capabilities at a macroscopical level (i.e. the big picture):

  • Monitoring. Sentinel constantly checks if your master and replica instances are working as expected.
  • Notification. Sentinel can notify the system administrator, or other computer programs, via an API, that something is wrong with one of the monitored Redis instances.
  • Automatic failover. If a master is not working as expected, Sentinel can start a failover process where a replica is promoted to master, the other additional replicas are reconfigured to use the new master, and the applications using the Redis server are informed about the new address to use when connecting.
  • Configuration provider. Sentinel acts as a source of authority for clients service discovery: clients connect to Sentinels in order to ask for the address of the current Redis master responsible for a given service. If a failover occurs, Sentinels will report the new address.

In other words, Redis Sentinel ensures there is always a master available. In a failover scenario, this is ensured by promoting a replica to be the new master.

In terms of "connecting to Redis", we are essentially talking about a proxy which needs to be queried for its current master before we can connect to the actual Redis instance. Luckily, ioredis has this built-in already.
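For reference, a minimal sketch of that built-in support (hosts, ports, and the master name are placeholders):

import Redis from 'ioredis';

// ioredis asks the Sentinels for the current master of the named group and
// transparently reconnects after a failover.
const redis = new Redis({
    sentinels: [
        { host: 'sentinel-1', port: 26379 },
        { host: 'sentinel-2', port: 26379 },
    ],
    name: 'mymaster',
});

redis.on('ready', () => console.log('Connected to the current Redis master.'));

Notably, the database.redis config dump in the webhooks debugging issue further up this page already exposes sentinels, sentinelPassword, and name keys, which matches this shape.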


So my questions regarding a potential implementation are:

  • Currently, the environment variables are basically the same options which are passed to ioredis. Do we want to use the same configuration style for Redis Sentinel as well? It would for sure be nice, but I see two problems here:

    • ioredis allows configuring multiple Sentinels, which seems hard/cumbersome to implement with statically mapped environment variables (DB_REDIS_SENTINEL_0_HOST, DB_REDIS_SENTINEL_1_HOST, ...).

    • There would be room for interpretation if both options (non-sentinel and sentinel) are configured. And if both were made optional, it would be a breaking change in my opinion.

  • Would it be ok to have a DB_REDIS_SENTINEL=true/false setting to allow switching to Sentinel mode? We could then simply use a different connection configuration internally, based on the already existing environment variables DB_REDIS_HOST and DB_REDIS_PORT. This would not allow defining multiple Sentinels though. In my case, this is not an issue, because we are connecting to the Sentinels through a Kubernetes Service and a short outage of a few seconds is acceptable (and better for us than a complex configuration).

  • Would it be ok to have a DB_REDIS_URLS option which allows to pass multiple host:port combinations in a comma separated fashion (DB_REDIS_URLS=localhost:6379,localhost:6380)? This solves some of the problems and introduces new ones (e.g. precedence of configuration options).
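To make that last option concrete, a sketch of parsing such a comma-separated value into the sentinel entries ioredis expects (DB_REDIS_URLS is the name proposed above, not an existing option):

// Parse e.g. DB_REDIS_URLS=localhost:26379,localhost:26380 into ioredis sentinel entries.
function parseSentinelUrls(value: string): { host: string; port: number }[] {
    return value
        .split(',')
        .map((entry) => entry.trim())
        .filter(Boolean)
        .map((entry) => {
            const [host, port] = entry.split(':');
            return { host, port: Number(port ?? 26379) };
        });
}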

I think it is better to discuss this before starting an implementation. Eager to hear your feedback on these ideas!

How to use the REST API?

How to authenticate on these endpoints?

  • /apps/:appId/channels
  • /apps/:appId/channels/:channelName
  • /apps/:appId/channels/:channelName/users
  • /apps/:appId/events
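Assuming soketi follows the standard Pusher HTTP API authentication scheme (it implements the Pusher protocol), requests to these endpoints are signed with the app key and secret. A minimal sketch with Node's crypto, where the host, port, and credentials are placeholders; in practice, an official Pusher server SDK performs this signing for you:

import { createHash, createHmac } from 'crypto';

// Build a signed URL for POST /apps/:appId/events per the Pusher HTTP API scheme.
function signedEventUrl(appId: string, key: string, secret: string, body: string): string {
    const path = `/apps/${appId}/events`;
    const params: Record<string, string> = {
        auth_key: key,
        auth_timestamp: Math.floor(Date.now() / 1000).toString(),
        auth_version: '1.0',
        body_md5: createHash('md5').update(body).digest('hex'),
    };
    // Query parameters are sorted by key and joined before signing.
    const query = Object.keys(params)
        .sort()
        .map((k) => `${k}=${params[k]}`)
        .join('&');
    const stringToSign = `POST\n${path}\n${query}`;
    const signature = createHmac('sha256', secret).update(stringToSign).digest('hex');
    return `http://127.0.0.1:6001${path}?${query}&auth_signature=${signature}`;
}

// Usage: POST the JSON body to the returned URL with Content-Type: application/json.
const body = JSON.stringify({ name: 'my-event', channel: 'my-channel', data: '{"msg":"hi"}' });
console.log(signedEventUrl('app-id', 'app-key', 'app-secret', body));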

Docker images not working (incompatible node.js version)

I tried to run two versions of the Docker image, soketi/pws:0.1.0-14-alpine and soketi/pws:latest-14-alpine; both produced the following error with the default environment and DEBUG=true:

Error: This version of ยตWS is not compatible with your Node.js build:

Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/uWebSockets.js/uws_linux_x64_83.node)
at /app/node_modules/uWebSockets.js/uws.js:22:9
at Object.<anonymous> (/app/node_modules/uWebSockets.js/uws.js:24:3)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
at Module.load (internal/modules/cjs/loader.js:950:32)
at Function.Module._load (internal/modules/cjs/loader.js:790:14)
at Module.require (internal/modules/cjs/loader.js:974:19)
at require (internal/modules/cjs/helpers.js:92:18)
at Object.<anonymous> (/app/dist/server.js:14:13)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
/app/node_modules/uWebSockets.js/uws.js:22
throw new Error('This version of ยตWS is not compatible with your Node.js build:\n\n' + e.toString());


I also could not find a node.js 16 image which is advertised on the Docker Hub page:

The following versions are available:

  • [git_version]-16-alpine
  • [git_version]-14-alpine

So I went ahead and tried building with the following images:

  • 16-alpine
  • 16-alpine3.11
  • 16-alpine3.12
  • 16-buster (failed because of apk which is to be expected)

All unfortunately without luck. Installing the apk package libc6-compat, which contains ld-linux-x86-64.so.2, also didn't help.

[bug] Python 3 needed to install

Hello
I am running the command npm install -g @soketi/pws and I get these errors, but I don't know why:

npm WARN deprecated [email protected]: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
npm WARN deprecated [email protected]: Please upgrade  to version 7 or higher.  Older versions may use Math.random() in certain circumstances, which is known to be problematic.  See https://v8.dev/blog/math-random for details.
npm ERR! code 1
npm ERR! path C:\......\AppData\Roaming\npm\node_modules\@soketi\pws\node_modules\msgpack
npm ERR! command failed
npm ERR! command C:\WINDOWS\system32\cmd.exe /d /s /c node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using [email protected]
npm ERR! gyp info using [email protected] | win32 | x64
npm ERR! gyp ERR! find Python
npm ERR! gyp ERR! find Python Python is not set from command line or npm configuration
npm ERR! gyp ERR! find Python Python is not set from environment variable PYTHON
npm ERR! gyp ERR! find Python checking if "python3" can be used
npm ERR! gyp ERR! find Python - "python3" is not in PATH or produced an error
npm ERR! gyp ERR! find Python checking if "python" can be used
npm ERR! gyp ERR! find Python - "python" is not in PATH or produced an error
npm ERR! gyp ERR! find Python checking if "python2" can be used
npm ERR! gyp ERR! find Python - "python2" is not in PATH or produced an error
npm ERR! gyp ERR! find Python checking if Python is C:\Python37\python.exe
npm ERR! gyp ERR! find Python - "C:\Python37\python.exe" could not be run
npm ERR! gyp ERR! find Python checking if Python is C:\Python27\python.exe
npm ERR! gyp ERR! find Python - "C:\Python27\python.exe" could not be run
npm ERR! gyp ERR! find Python checking if the py launcher can be used to find Python
npm ERR! gyp ERR! find Python - "py.exe" is not in PATH or produced an error
npm ERR! gyp ERR! find Python
npm ERR! gyp ERR! find Python **********************************************************
npm ERR! gyp ERR! find Python You need to install the latest version of Python.
npm ERR! gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
npm ERR! gyp ERR! find Python you can try one of the following options:
npm ERR! gyp ERR! find Python - Use the switch --python="C:\Path\To\python.exe"
npm ERR! gyp ERR! find Python   (accepted by both node-gyp and npm)
npm ERR! gyp ERR! find Python - Set the environment variable PYTHON
npm ERR! gyp ERR! find Python - Set the npm configuration variable python:
npm ERR! gyp ERR! find Python   npm config set python "C:\Path\To\python.exe"
npm ERR! gyp ERR! find Python For more information consult the documentation at:
npm ERR! gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
npm ERR! gyp ERR! find Python **********************************************************
npm ERR! gyp ERR! find Python
npm ERR! gyp ERR! configure error
npm ERR! gyp ERR! stack Error: Could not find any Python installation to use
npm ERR! gyp ERR! stack     at PythonFinder.fail (C:\.......\AppData\Roaming\npm\node_modules\npm\node_modules\node-gyp\lib\find-python.js:302:47)
npm ERR! gyp ERR! stack     at PythonFinder.runChecks (C:\.......\AppData\Roaming\npm\node_modules\npm\node_modules\node-gyp\lib\find-python.js:136:21)
npm ERR! gyp ERR! stack     at PythonFinder.<anonymous> (C:\......\AppData\Roaming\npm\node_modules\npm\node_modules\node-gyp\lib\find-python.js:200:18)
npm ERR! gyp ERR! stack     at PythonFinder.execFileCallback (C:\.....\AppData\Roaming\npm\node_modules\npm\node_modules\node-gyp\lib\find-python.js:266:16)
npm ERR! gyp ERR! stack     at exithandler (child_process.js:315:5)
npm ERR! gyp ERR! stack     at ChildProcess.errorhandler (child_process.js:327:5)
npm ERR! gyp ERR! stack     at ChildProcess.emit (events.js:315:20)
npm ERR! gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
npm ERR! gyp ERR! stack     at onErrorNT (internal/child_process.js:465:16)
npm ERR! gyp ERR! stack     at processTicksAndRejections (internal/process/task_queues.js:80:21)
npm ERR! gyp ERR! System Windows_NT 10.0.19043
npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\......\\AppData\\Roaming\\npm\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
npm ERR! gyp ERR! cwd C:\.....\AppData\Roaming\npm\node_modules\@soketi\pws\node_modules\msgpack
npm ERR! gyp ERR! node -v v14.16.1
npm ERR! gyp ERR! node-gyp -v v7.1.2
npm ERR! gyp ERR! not ok

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\......s\2021-08-02T22_05_11_926Z-debug.log

I am a newbie, so I have no idea what the problem could be. I see something Python-related, but I don't know whether I need to install it and, if so, which version I need.

Thank you in advance for the help.
