
Comments (16)

heapwolf commented on June 7, 2024

I'm curious whether you're looking at the result of the GC not being able to collect because you're recursing into oblivion.

from eventemitter2.

tpetry commented on June 7, 2024

It's kind of a recursive solution, correct. But Node.js's process.nextTick function "transforms" it into an iterative one: one RPC request is done, and at the next tick (analogous to the next loop iteration) a new request is made. It does not leak memory because of its recursive shape. It works perfectly with EventEmitter and with EventEmitter2 without wildcards; only wildcards are leaking.

Maybe the same could be tested by adding and removing unique listener names in a loop; I will test it if I find the time.
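The nextTick mechanism described above can be sketched in a few lines (a minimal illustration, not EventEmitter2 code): each step is scheduled on a fresh stack, so the "recursion" never deepens no matter how many steps run.

```javascript
// Minimal sketch: process.nextTick defers each step to the next
// event-loop tick, so the call stack unwinds between steps and the
// "recursion" never grows deeper, regardless of the step count.
let count = 0;

function step() {
  count++;
  if (count < 1000) {
    process.nextTick(step); // schedule the next step; never call step() directly
  }
}

step();
```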


heapwolf commented on June 7, 2024

I changed your test code to be a little simpler, to demonstrate what is happening. After several seconds you reach almost 265,877 iterations. Recursing at this depth and creating this many objects will produce significant memory usage, but it is not a realistic use case.

var EventEmitter2 = require('../../lib/eventemitter2').EventEmitter2;

var cnt = 0;

var emitter = new EventEmitter2({
    wildcard: true,
    delimiter: '.',
    maxListeners: 100
});

emitter.on('rpc-adder', function(rpcResponseId) {
    emitter.emit(rpcResponseId);
});

function recurse() {

    var rpcResponseId = 'rpc.' + (++cnt);

    emitter.once(rpcResponseId, function() {
        process.nextTick(recurse);
    });

    emitter.emit('rpc-adder', rpcResponseId);
}

process.on('SIGINT', function() {
    console.log(cnt);
    process.exit(0);
});

recurse();


heapwolf commented on June 7, 2024

Wildcards aren't really leaking; they cost more in terms of memory allocation because they create a lot more objects. You will see the process's memory usage spike in real use cases and then drop again; it's pretty normal.


tpetry commented on June 7, 2024

It's NOT a problem of the recursion. Look at this test with a loop:

var EventEmitter2 = require('eventemitter2').EventEmitter2;
var server = new EventEmitter2({
    wildcard: true
});

function dummyListener() {}

for(var i = 0; i <= 1000000; i++) {
    var key = '' + Math.random();
    server.on(key, dummyListener);
    server.off(key, dummyListener);
}

console.log('finished');


setTimeout(function() {
  // sleep some long time to look at memory usage
}, 900 * 1000);

Without wildcards it works like a charm: constant memory usage the whole time. But with wildcards the code leaks memory like hell! And the usage does not drop when the for loop finishes; the node process uses about 500 MB, compared to 25 MB for the run without wildcards.


heapwolf commented on June 7, 2024

Yeah, it is about the loop. Think about it like this: with 1,000,000 iterations not using wildcards, you create about 1,000,000 × (however many objects ee2 creates). If you look at the bench, you can see that ee2 with wildcards produces way more objects and is about half as fast (node 0.8.12).

EventEmitter x 2,822,904 ops/sec; 
EventEmitter2 x 7,251,227 ops/sec; 
EventEmitter2 (wild) x 3,220,268 ops/sec;

When you invoke wildcards, you create roughly 1,000,000 × (way more objects; I'm not exactly sure how many), but at the rate you are creating them (which isn't really very realistic), V8 can't collect them.

If you are convinced that it is a memory leak (and I'm not saying that's impossible), we need to come up with a more realistic way of measuring it, possibly by dumping the V8 heap and analyzing it.


heapwolf commented on June 7, 2024

Another way to look at this issue:

  • 1 million iterations with wildcards is roughly 20 million objects, which could be ~500 MB
  • 1 million iterations without wildcards is roughly 8 million objects, which could be ~25 MB


tpetry commented on June 7, 2024

But if you remove the listener for the unique wildcard key, the entry in the hidden lookup table is not needed anymore and can be deleted. If it were deleted (which I am guessing isn't done), the V8 garbage collector would eventually collect all these unused objects. But if you run the sample file with the loop, you see that the memory usage stays constant after the loop, even after a few minutes, although the GC has run many times by then.

I know what you are saying: wildcards create more objects, and the peak memory usage will be higher.
And I say: that's correct! But after some time t these objects should have been garbage collected, because the listener has been removed, and this doesn't happen. You can wait hours and the memory usage stays the same. With every new unique wildcard key the memory usage rises, permanently. The memory usage never drops.


heapwolf commented on June 7, 2024

So you suspect that off is leaking/not releasing all objects used when a listener is added?


tpetry commented on June 7, 2024

Correct. Or do you have any other explanation? Memory usage steadily increases in the for-loop example and never decreases once the test is over, even though the V8 GC would be able to collect all unused objects on the heap.


heapwolf commented on June 7, 2024

OK. So, I can see that some crazy memory usage is incurred by doing some very crazy thing. But we need a better way to test this. I'm not convinced that creating 5 million or more objects and then adding a large delay is a good use case or test case. It might be time to break out the heap profiler.


cxreg commented on June 7, 2024

This really is a bug, and it's quite simple. "off" has a couple of lines that "delete leaf._listeners", but it never removes the resulting empty {} from the listenerTree, which therefore grows unbounded. The part that deletes this listener structure should also be removing the leaf from the tree.

I'm working on a patch, but the indirection of searchListenerTree and its return value makes it difficult to go back to the point in the tree where the leaf was found. Maybe modifying the return value of this function would help.
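The shape of the fix can be sketched independently of searchListenerTree (the names here are illustrative, not EventEmitter2's real internals): descend to the leaf while recording the path, delete the leaf's listeners, then walk back up and drop every branch object that has become empty.

```javascript
// Hypothetical sketch of the fix described above: after deleting a
// leaf's listeners, prune every branch object that the deletion left
// empty, so the tree cannot grow without bound.
function removeAndPrune(tree, parts) {
  const path = [];
  let node = tree;
  for (const part of parts) {      // descend, remembering each parent
    if (!node[part]) return;
    path.push([node, part]);
    node = node[part];
  }
  delete node._listeners;          // what "off" already does today
  for (let i = path.length - 1; i >= 0; i--) {
    const parent = path[i][0];
    const key = path[i][1];
    if (Object.keys(parent[key]).length > 0) break; // branch still in use
    delete parent[key];            // the missing step: drop the empty {}
  }
}

const tree = { rpc: { 42: { _listeners: [function () {}] } } };
removeAndPrune(tree, ['rpc', '42']);
console.log(Object.keys(tree).length); // 0: nothing left to grow unbounded
```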


cxreg commented on June 7, 2024

By the way, this is not a fringe case; it can occur in ordinary code. For example, https://github.com/englercj/node-esl uses the once() method to track API command responses from FreeSWITCH and has long-ish event names, so after a few hundred thousand of those, memory has grown to the extent that garbage collection slows down the node process and it needs to be restarted.


heapwolf commented on June 7, 2024

patch welcome 👍


grantila commented on June 7, 2024

Pull request #132 fixes this, and the test loop above doing a million on()'s and off()'s went from taking 12 seconds to less than 8 seconds. Not garbage collecting eventually results not only in memory leaks but also in slower code, due to traversing larger objects.


AVVS commented on June 7, 2024

There is another issue with https://github.com/asyncly/EventEmitter2/blob/master/lib/eventemitter2.js#L530-L550

It does what it is intended to do: it removes listeners, and it correctly uses null to clear references rather than invoking the hash-map mode of objects. However, when multiple listeners are added and removed that way (in my case around half a billion of them), the memory heap keeps growing until an OOM error.

What it should do is keep track of how many event listeners have been nullified, and once that number grows beyond what we can ignore, make a copy of the object with all falsy values removed (rather than using delete on the properties). Furthermore, I'd suggest doing the same for the garbage-collection function implemented in PR #132: when doing GC, it invokes hash-map mode on the listenerTree, which lowers performance.
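The compaction idea can be sketched like this (illustrative names, not EventEmitter2 code; the threshold is a hypothetical tuning value): null out removed listeners to keep the object's shape, and once enough slots are nulled, rebuild the object from its live entries instead of ever calling delete.

```javascript
// Sketch of threshold-based compaction: removal nulls the slot (keeping
// the object's shape, so V8 never falls back to dictionary mode via
// `delete`); once enough slots are nulled, rebuild from the live entries.
const COMPACT_THRESHOLD = 2; // hypothetical tuning value

function removeListener(store, key) {
  if (store.listeners[key]) {
    store.listeners[key] = null; // clear the reference, keep the shape
    store.nulled++;
  }
  if (store.nulled > COMPACT_THRESHOLD) {
    const compacted = {};
    for (const k in store.listeners) {
      if (store.listeners[k]) compacted[k] = store.listeners[k];
    }
    store.listeners = compacted;  // fresh object, no `delete` ever used
    store.nulled = 0;
  }
}

function fn() {}
const store = { listeners: { a: fn, b: fn, c: fn, d: fn }, nulled: 0 };
removeListener(store, 'a');
removeListener(store, 'b');
removeListener(store, 'c'); // third removal crosses the threshold
console.log(Object.keys(store.listeners)); // [ 'd' ]
```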

