uwiger / gproc
Extended process registry for Erlang
License: Apache License 2.0
Hi,
I noticed that gproc_dist:unreg/1 does not work well with a property in the global scope. It seems to me that the leader (master) deletes the entry from its ETS table, but the slaves do not: they receive the remove_globals message, yet do not fully delete their entries.
In gproc_dist.erl:
delete_globals(Globals) ->
    lists:foreach(
      fun({{_,g,_},T} = K) when is_atom(T) ->
              ets:delete(?TAB, K);
         ({Key, Pid}) when is_pid(Pid); Pid==shared ->
              ets:delete(?TAB, {Pid, Key});  %% here!
         ({Pid, Key}) when is_pid(Pid); Pid==shared ->
              ets:delete(?TAB, {Pid, Key})
      end, Globals).
I think the marked line (https://github.com/uwiger/gproc/blob/master/src/gproc_dist.erl#L800) should be
ets:delete(?TAB, {Key, Pid});
since {Key, Pid} is inverted as written.
I modified it and got the expected result in my branch.
Thank you in advance.
I've had a reproducible crash in gproc:reg_or_locate due to a pattern-match failure on the result of the ETS lookup. The failure situation seems to be:
The lookup returns an entry that gproc:reg_or_locate doesn't recognize. From inspecting gproc:where1, it looks like the ETS entry is {Key, Waiters}, where Waiters is info about the other process awaiting creation of the gproc key. The first element is actually {Key, T} rather than Key.
I tried making a local patch to handle this situation the same way as a missing key, and that stops the crash, though my program still doesn't work. I'm not sure whether my gproc patch was wrong or my application has some other error.
> application:start(gproc).
> gproc_pool:setup_test_pool(p2,round_robin,[]).
add_worker(p2, a) -> 1; Ws = [{a,1}]
add_worker(p2, b) -> 2; Ws = [{a,1},{b,2}]
add_worker(p2, c) -> 3; Ws = [{a,1},{b,2},{c,3}]
add_worker(p2, d) -> 4; Ws = [{a,1},{b,2},{c,3},{d,4}]
add_worker(p2, e) -> 5; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5}]
add_worker(p2, f) -> 6; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5},{f,6}]
[true,true,true,true,true,true]
worker_pool/1 shows these workers:
>gproc_pool:worker_pool(p2).
[{a,1},{b,2},{c,3},{d,4},{e,5},{f,6}]
but defined_workers/1 does not:
> gproc_pool:defined_workers(p2).
** exception error: bad argument
in function ets:lookup_element/3
called as ets:lookup_element(gproc,
{{c,l,{gproc_pool,p2,w,a}},<0.71.0>},
3)
in call from gproc:get_value1/2 (src/gproc.erl, line 1392)
in call from gproc:get_value/1 (src/gproc.erl, line 1365)
in call from gproc_pool:'-defined_workers/1-lc$^0/1-0-'/2 (src/gproc_pool.erl, line
I also noticed, though, that after the workers pass a few messages around, it no longer crashes. Is some application or operation a prerequisite for running defined_workers/1?
The docs say that defined_workers/1 also gives stats about how often a worker is picked. Running pick more times than there are workers and then querying defined_workers/1 still gave me the error.
In my code I very often call gproc like this:
handle_cast({msg, To, Message}, State) ->
    spawn(fun() ->
                  %% Try to look up the Pid from gproc
                  case gproc:lookup_local_name(To) of
                      undefined ->
                          %% player is not loaded; load the player
                          player:start(To, Message);
                      Pid ->
                          %% player is here; pass the message to it
                          gen_server:cast(Pid, Message)
                  end
          end),
    {noreply, State};
handle_cast(_Msg, State) ->
    {noreply, State}.
In the player module I work with gproc like this:
init([Name]) ->
    %% Register in gproc
    ?LOG("Creating new process", Name),
    gproc:add_local_name(Name),
    {ok, #state{name = Name, ready = 0}}.
From time to time I see something like this in my logs:
=CRASH REPORT==== 8-Jan-2012::01:50:52 ===
crasher:
initial call: player:init/1
pid: <0.19872.4>
registered_name: []
exception exit: {badarg,[{gproc,chk_reply,
{reg,{n,l,"player560"},undefined}},
{player,init,1},
{gen_server,init_it,6},
{proc_lib,init_p_do_apply,3}]}
in function gen_server:init_it/6
ancestors: [<0.19870.4>]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 377
stack_size: 24
reductions: 164
neighbours:
What could be the reason for this? Is it caused by gproc or by my code?
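If it's the race I suspect (two casts for the same To both see undefined and both try to start the player, so the second registration fails in init), a retry around the start call may sidestep it. A hedged, untested sketch: send_to_player/2 is a hypothetical helper, and it assumes player:start/2 returns {error, _} when init fails.

```erlang
%% Hypothetical helper: deliver Message to the player named To,
%% starting it on demand. If two callers race and the second start
%% fails because To got registered in the meantime, fall back to a
%% fresh lookup. Assumes player:start/2 returns {error, _} on a
%% failed init.
send_to_player(To, Message) ->
    case gproc:lookup_local_name(To) of
        undefined ->
            case player:start(To, Message) of
                {ok, _Pid} ->
                    ok;
                {error, _} ->
                    %% lost the race; the name should exist by now
                    case gproc:lookup_local_name(To) of
                        undefined -> {error, no_player};
                        Pid       -> gen_server:cast(Pid, Message)
                    end
            end;
        Pid ->
            gen_server:cast(Pid, Message)
    end.
```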
There exist get_value/1 and get_value/2.
Should there be a set_value/3 which accepts Key, Value, and Pid?
It might be related to #17, e.g. for setting the value of a shared counter:
gproc:set_value({c,l,wsCounter}, 100, shared).
I came across this in my server using gproc_pool
server backtrace: [{gproc,reg_shared,
[{p,l,
{gproc_pool,
mondemand_backend_stats_influxdb_worker_pool}},
{0,round_robin}],
[{file,"src/gproc.erl"},{line,1026}]},
{gproc_pool,new_,3,
[{file,"src/gproc_pool.erl"},{line,552}]},
{gproc_pool,handle_call_,3,
[{file,"src/gproc_pool.erl"},{line,501}]},
{gproc_pool,handle_call,3,
[{file,"src/gproc_pool.erl"},{line,493}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,585}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]
and it ended up in a log I wasn't expecting it in, because it seems to use io:fwrite, which bypasses all my lager hooks. Is there any reason not to use error_logger?
Also, any idea why I might see these failures?
Is there any good example of aggregated counters / counters / shared counters usage? I really tried to find one, but found nothing I could work with. Is there any project using them that I could learn from?
I saw the missing wiki page (Counters in Gproc) 😒
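Since I couldn't find one, here is the minimal sketch I pieced together from gproc.erl; treat it as an assumption-laden example rather than authoritative documentation (the keys hits and shared_hits are made up):

```erlang
%% Minimal counter sketch (local scope). An {a,l,Key} aggregated
%% counter sums all {c,l,Key} counters in the same scope; a shared
%% counter is owned by no process and updated by anyone.
-module(counter_demo).
-export([run/0]).

run() ->
    {ok, _} = application:ensure_all_started(gproc),
    gproc:reg({a, l, hits}),                  %% aggregated counter
    gproc:reg({c, l, hits}, 0),               %% this process's own counter
    gproc:update_counter({c, l, hits}, 3),    %% bump by 3
    3 = gproc:get_value({c, l, hits}),        %% per-process value
    3 = gproc:get_value({a, l, hits}),        %% aggregated sum over all {c,l,hits}
    gproc:reg_shared({c, l, shared_hits}),    %% shared counter
    gproc:update_shared_counter({c, l, shared_hits}, 10),
    10 = gproc:get_value({c, l, shared_hits}, shared).
```

Each process that wants to contribute to the aggregate registers its own {c,l,hits} counter; the aggregated value tracks their sum automatically.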
Hi, I am experiencing the following issue with 0.2.17 when running rebar eunit.
One of the tests fails as 'canceled'.
The full eunit output is available at this link: http://susepaste.org/21509142
Hi,
You have merged my pull request for select_count, but it is not included in the esl/gproc version.
I saw that you wrote that esl/gproc is the official version.
Is there a chance you can include it there?
5d07b9d
From now on I'll use the esl/gproc version.
Thanks
Just checking the status of global ("g") registration: is it broken? I might wrench on it if so.
Currently when I do gproc:reg({n, g, MyUniqueId}),
I get
{{{badmatch,
{error,
{local_only,
[{gproc,reg,[{n,g,<<"SymbolA_1333422920106662">>}]},
which I assume means the functionality is disabled. If so, where do I dig in? By assessing the current status of the various gen_leader forks that are floating around?
Hi,
I see strange behavior when I add new nodes to a cluster, and it's easy to reproduce:
%% ~/.hosts.erlang
'127.0.0.1'.
Open two terminals (term1, term2).
# term1
git clone https://github.com/uwiger/gproc gproc1
cd gproc1
GPROC_DIST=true make
alias start='erl -pa ebin deps/*/ebin -name test1@127.0.0.1 -eval "application:start(gproc), net_adm:world(), gproc_dist:start_link()."'
start
%% term1 (erl)
Erlang/OTP 17 [erts-6.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]
Eshell V6.1 (abort with ^G)
(test1@127.0.0.1)1> nodes().
[]
(test1@127.0.0.1)2> gproc_dist:get_leader().
'test1@127.0.0.1'
(test1@127.0.0.1)3>
# term2
git clone https://github.com/uwiger/gproc gproc2
cd gproc2
GPROC_DIST=true make
alias start='erl -pa ebin deps/*/ebin -name test2@127.0.0.1 -eval "application:start(gproc), net_adm:world(), gproc_dist:start_link()."'
start
%% term2 (erl)
Erlang/OTP 17 [erts-6.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]
Eshell V6.1 (abort with ^G)
(test2@127.0.0.1)1> nodes().
['test1@127.0.0.1']
(test2@127.0.0.1)2> gproc_dist:get_leader().
** exception exit: {timeout,{gen_leader,local_call,[gproc_dist,get_leader]}}
in function gen_leader:call/2 (src/gen_leader.erl, line 326)
(test2@127.0.0.1)3>
Shut down the test1 node.
%% term2 (erl)
(test2@127.0.0.1)3> gproc_dist:get_leader().
'test2@127.0.0.1'
(test2@127.0.0.1)4>
Start the test1 node again.
%% term1 (erl)
Erlang/OTP 17 [erts-6.1] [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]
Eshell V6.1 (abort with ^G)
(test1@127.0.0.1)1> nodes().
['test2@127.0.0.1']
(test1@127.0.0.1)2> gproc_dist:get_leader().
'test2@127.0.0.1'
(test1@127.0.0.1)3>
%% term2 (erl)
(test2@127.0.0.1)4> gproc_dist:get_leader().
'test2@127.0.0.1'
(test2@127.0.0.1)5>
I also tried this with Erlang R16B03.
Hi,
gproc_pool:force_delete has an io:fwrite call. Is that by design? If not, maybe we can remove it, since a call to that function may otherwise clog the output with a bunch of db entries.
Let me know what you think. I can submit a pull request if it's deemed a good idea (which I hope it is :-)).
Cheers,
Klas
([email protected])2> K = {n,g,myproc}.
{n,g,myproc}
([email protected])3> gproc:where(K).
undefined
([email protected])4> spawn(fun() -> io:format("awaited: ~p~n", [gproc:await(K)]), exit(normal) end).
<0.190.0>
([email protected])5> ets:tab2list(gproc).
[{{<0.190.0>,g}},
{{<0.190.0>,{n,g,myproc}},[]},
{{{n,g,myproc},n},[{<0.190.0>,#Ref<0.0.0.412>}]}]
([email protected])6> spawn(fun() -> gproc:reg(K), receive stop -> ok end end).
<0.193.0>
awaited: {<0.193.0>,undefined}
([email protected])7> gproc:where(K).
undefined
([email protected])8> ets:tab2list(gproc).
[{{<0.193.0>,g}},{{<0.193.0>,{n,g,myproc}},[]}]
As you can see in the example above, the entry {{{n,g,myproc},n},<0.193.0>,undefined} has been deleted, while the g and notify-list entries are still sitting around by themselves; they are of no use to anyone now that the {Key,n} entry is gone.
Here is the same example with some helpful debugging printouts:
([email protected])2> K = {n,g,myproc}.
{n,g,myproc}
([email protected])3> gproc:where(K).
undefined
([email protected])4> spawn(fun() -> io:format("awaited: ~p~n", [gproc:await(K)]), exit(normal) end).
gproc_lib:ensure_monitor(<0.190.0>,g)
<0.190.0>
([email protected])5> ets:tab2list(gproc).
[{{<0.190.0>,g}},
{{<0.190.0>,{n,g,myproc}},[]},
{{{n,g,myproc},n},[{<0.190.0>,#Ref<0.0.0.412>}]}]
([email protected])6> spawn(fun() -> gproc:reg(K), receive stop -> ok end end).
<0.193.0>
gproc_lib:insert_reg({n,g,myproc}, undefined, <0.193.0>, g, registered)
awaited: {<0.193.0>,undefined}
([email protected])7> gproc_lib:ensure_monitor(<0.193.0>,g)
handle_info 'DOWN' <0.190.0>
handle_leader_cast pid_is_DOWN <0.190.0>
Globals [{{n,g,myproc},<0.190.0>}]
handle_leader_cast tab2list [{{<0.190.0>,{n,g,myproc}},[]},
{{<0.193.0>,g}},
{{<0.193.0>,{n,g,myproc}},[]},
{{{n,g,myproc},n},<0.193.0>,undefined}]
Opts []
maybe_failover remove_entry {n,g,myproc} <0.190.0>
handle_leader_cast after tab2list [{{<0.193.0>,g}},
{{<0.193.0>,{n,g,myproc}},[]}]
([email protected])7> gproc:where(K).
undefined
([email protected])8> ets:tab2list(gproc).
[{{<0.193.0>,g}},{{<0.193.0>,{n,g,myproc}},[]}]
I recently started using dialyzer's unmatched_returns feature, and a number of warnings are reported for gproc. I expect this is not an urgent issue, but I wanted to raise it for tracking purposes.
$ dialyzer --plt $(PLT) -Wunmatched_returns -r ./lib
gproc.erl:125: Expression produces a value of type atom() | tid(), but this value is unmatched
gproc.erl:320: Expression produces a value of type 'false' | 'ignore' | integer(), but this value is unmatched
gproc.erl:820: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc.erl:896: Expression produces a value of type 'ok' | non_neg_integer(), but this value is unmatched
gproc.erl:1025: Expression produces a value of type [reference()], but this value is unmatched
gproc.erl:1331: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1332: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1333: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1334: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1339: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1341: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1342: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1343: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1357: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1358: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1369: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1378: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1381: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1388: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1389: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1394: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1401: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1410: The variable _ can never match since previous clauses completely covered the type 'false'
gproc.erl:1416: The variable _ can never match since previous clauses completely covered the type 'false'
gproc_dist.erl:186: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc_init.erl:41: Expression produces a value of type ['true'], but this value is unmatched
gproc_init.erl:45: Expression produces a value of type ['true'], but this value is unmatched
gproc_lib.erl:85: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc_lib.erl:101: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc_lib.erl:132: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc_lib.erl:137: Expression produces a value of type 'ok' | reference(), but this value is unmatched
gproc_lib.erl:157: Expression produces a value of type ['true'], but this value is unmatched
I wonder if gproc could have a select_dist that could also "collect" data from other nodes. Is this something desirable? I know it's hard to integrate with the ETS select/continuation mechanism that gproc:select/1 supports.
If it's not something useful, could you point out the simplest way to do this with the current implementation? Use the gproc_dist process on each node to handle the call, doing a multi_call from the requesting node?
My use case is that I want to know how many processes are registered with a certain property.
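For the local part, the counting can at least be done server-side with gproc:select_count/2, and the cross-node sum via rpc. A hedged sketch: count_prop/1 and count_prop_all_nodes/1 are made-up names, and the match-spec object shape {Key, Pid, Value} is an assumption based on what gproc:select/2 returns.

```erlang
%% Count local processes holding property Prop. Objects in the match
%% spec are assumed to have the shape {Key, Pid, Value}, matching the
%% triples that gproc:select/2 returns.
count_prop(Prop) ->
    gproc:select_count({l, p}, [{{{p, l, Prop}, '_', '_'}, [], [true]}]).

%% Sum the local counts across all connected nodes; non-integer
%% replies (e.g. badrpc) are simply skipped.
count_prop_all_nodes(Prop) ->
    {Replies, _BadNodes} =
        rpc:multicall([node() | nodes()], ?MODULE, count_prop, [Prop]),
    lists:sum([N || N <- Replies, is_integer(N)]).
```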
I'm trying to adapt the start_slaves/1 API for another purpose. While doing so, I accidentally triggered the following warning:
Warning: net:ping/1: module 'net' obsolete; use 'net_adm'
Possibly the rpc:call uses in the test should be updated?
Hi Ulf,
I have been using previous releases of gproc successfully with Erlang R14* releases. Today I upgraded gproc to HEAD and found the included rebar is compiled with an R15 release, as indicated by the error below:
gproc $ erl1404 make
./rebar get-deps
=ERROR REPORT==== 1-Nov-2012::14:29:23 ===
Loading of /Users/abhinavsingh/Dev/gproc/rebar/rebar.beam failed: badfile
escript: exception error: undefined function rebar:main/1
in function escript:run/2
in call from escript:start/1
in call from init:start_it/1
in call from init:start_em/1
=ERROR REPORT==== 1-Nov-2012::14:29:23 ===
beam/beam_load.c(1365): Error loading module rebar:
use of opcode 153; this emulator supports only up to 152
However, this succeeds:
gproc $ erl1501 make
Is there any specific reason for an R15-based rebar?
Note: erl1404 and erl1501 are just aliases for different Erlang releases I have on my dev machine.
When get_env is called with a key that does not exist in the gproc cache, the function exits the process with badarg; this is the expected behaviour if there is no {default, Value} strategy.
However, even with a {default, Value} strategy, the function still exits the process without returning the default value.
This happens because lookup_env is called first (before try_alternatives) and ets:lookup throws a badarg at line 529 of gproc.erl:
Line 529 in 46238de
Maybe I am reading the documentation wrongly, but it surely seems that get_env should return the default value if that strategy is specified.
The current version of reg_or_locate/3 spawns the fun parameter if the name can't be located.
I'd like to use something like reg_or_locate/3 with, e.g., a simple_one_for_one supervisor:start_child/2, or to start a gen_server and its ilk.
Is there a deeper reason why something like this is not in the API? Maybe I missed an easy way to handle this use case, or something in gproc prevents implementing it?
If it's only omitted because nobody needed it, would you add the functionality to reg_or_locate, maybe as a variant taking an {M, F, Args} tuple or a fun plus argument list? Or do you think another function, e.g. start_or_locate, would be better?
I would implement it and send a pull request if you think it's feasible.
Hi,
In my server each user has a session pid with a unique gproc name: the userid.
Each userid should have a single session.
When a user tries to create a new session, the old pid should be terminated and the new one registered in its place.
I couldn't find an easy way to do this with the gproc API.
Is it possible to add something like:
gproc:add_local_name_when_available(Name)
(maybe with a nicer function name :))
I tried to think of a way to do it, and it's hard to get right.
First we need to check that the name is not taken.
If it's taken, monitor the pid and wait until the name is free.
We must make sure there isn't a race condition where the old pid terminates before we start monitoring it.
When we get a message that the name is available, register it and return.
Again, there might be a race condition if another pid registers the name first.
Another complication is when several pids try to use the same name at the same time. Maybe a timeout can solve this.
Do you think it'll be useful?
Thanks
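The waiting scheme described above could perhaps be built on the existing API. A hedged, untested sketch: the function name mirrors the proposal, and it assumes gproc:monitor/1 delivers a {gproc, unreg, Ref, Key} message when the name is freed, as the shell transcript elsewhere in this thread suggests.

```erlang
%% Hypothetical add_local_name_when_available/1: try to register; if
%% the name is taken, monitor it and retry once it is unregistered
%% (or once the holder dies). Races with other waiters are handled by
%% simply looping; the 5-second timeout guards against a lost message.
add_local_name_when_available(Name) ->
    Key = {n, l, Name},
    try gproc:reg(Key)
    catch
        error:badarg ->
            Ref = gproc:monitor(Key),
            receive
                {gproc, unreg, Ref, Key} ->
                    add_local_name_when_available(Name)
            after 5000 ->
                add_local_name_when_available(Name)
            end
    end.
```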
For example, there is a gen_server which handles some data stream, modifies its own state and then publishes some events; and several consumers that must sync with this server at the init step and then subscribe to its events.
Is there a way to ask the gen_server process (via a call, for example) to subscribe the caller process to some property?
It seems that "unreg" messages are not sent when an explicit gproc:unreg() is issued for global names, while it works fine for local names:
$ erl -pa gproc/ebin -pa gproc/deps/gen_leader/ebin/ -boot start_sasl -name n1@localhost -setcookie test -gproc gproc_dist "[{[n1@localhost], []}, {bcast_type, all}]"
(n1@localhost)1> gproc:start_link().
{ok,<0.49.0>}
(n1@localhost)2> gproc_dist:start_link().
{ok,<0.51.0>}
(n1@localhost)3>
=PROGRESS REPORT==== 9-Dec-2013::18:47:08 ===
supervisor: {local,kernel_safe_sup}
started: [{pid,<0.54.0>},
{name,timer_server},
{mfargs,{timer,start_link,[]}},
{restart_type,permanent},
{shutdown,1000},
{child_type,worker}]
(n1@localhost)3> gproc:reg({n, l, test}).
true
(n1@localhost)4> gproc:monitor({n, l, test}).
#Ref<0.0.0.75>
(n1@localhost)5> gproc:unreg({n, l, test}).
true
(n1@localhost)6> flush().
Shell got {gproc,unreg,#Ref<0.0.0.75>,{n,l,test}}
ok
while for global names:
(n1@localhost)7> gproc:reg({n, g, test}).
true
(n1@localhost)8> gproc:monitor({n, g, test}).
#Ref<0.0.0.93>
(n1@localhost)9> gproc:unreg({n, g, test}).
true
(n1@localhost)10> flush().
ok
Result: gproc crashes and you see an error:
** exception error: bad argument
in function gproc:reg/1
called as gproc:reg({p,l,{gproc_ps_event,test}})
The supervisor does start the process again, and if you issue the subscribe again, you'll get 'true' returned.
Expected result: a tuple like {error, already_subscribed} should be returned, and the gproc process should not die.
gproc:where/1 checks that the process is alive before returning a value, while gproc:table/0 does not seem to perform such a check. This introduces a race if a process is dead but not yet unregistered: gproc:where(...) can return 'undefined' while the process data still appears in gproc:table(). This is a rare condition, but I have already hit it a couple of times.
Say, I use a simple_one_for_one supervisor that controls a set of processes. Each of those processes is registered by some non-atom name.
Let's write the following in the init/1 function:
init([Name, Options]) ->
    gproc:reg({n, l, Name}, self()),
    ....
But this code will fail upon restart, since gproc won't allow me to register under an already existing name. Okay, let's unregister this name first:
init([Name, Options]) ->
    gproc:unreg({n, l, Name}),
    gproc:reg({n, l, Name}, self()),
    ...
But this will fail too, because gproc:unreg/1 fails if nothing is registered under the given name.
So far I see two ways of working around this, and both seem quite ugly to me:
catch gproc:unreg({n, l, Name}),
....
and
case gproc:where({n, l, Name}) of
    undefined -> void;
    _ -> gproc:unreg({n, l, Name})
end,
....
The current behavior of gproc:reg/2 and gproc:unreg/1 actually seems right, since we gain more control over what we did and did not register. But I still feel I'm missing something and that there is a better way of handling this situation with the existing gproc API.
I noticed that I keep running into problems with distributed gproc from the moment I fire up two nodes that don't see each other when gproc starts, but are then supposed to join together with discovery set to all. I have the feeling it's a known issue, but I figured it wouldn't hurt to document it.
Example timeline:
n1 - boot
n1 - start gproc
n2 - boot
n2 - start gproc
n1 - net_adm:ping(n2)
-> they do not join together properly.
This is in effect a netsplit issue, and it probably can't be resolved entirely, since it's not guaranteed that conflicts in the two registries can be merged automatically. What really would be cool is some kind of callback saying: "hey, we have a (re)join from a split with sides 1 and 2", so it would be possible to work things out where possible.
Hi, I am suffering from an issue similar to #44 on the latest master.
See the eunit log: http://susepaste.org/27740609
The difference is that the failure reproduces only about 3 times per 50 runs (on my PC). I've just been running rebar eunit in a loop. Increasing the timeout to 240 seems to reduce the failure probability to no failures per 50 runs.
Not sure if that test is deterministic...
When a worker is added to an empty pool, deleted from the pool, and then added again, there is an issue with gproc:get_value/2 in https://github.com/uwiger/gproc/blob/master/src/gproc_pool.erl#L613, as a tuple is expected. Instead of {0, round_robin}, just 0 is returned.
1> application:start(gproc).
ok
2> {ok, P1} = srv:start_link().
{ok,<0.43.0>}
3> gproc:reg({n,l,"x"},P1).
true
4> gproc_pool:new("pool").
ok
5> gproc:get_value({p, l, {gproc_pool, "pool"}}, shared).
{0,round_robin}
6> gproc_pool:add_worker("pool","x").
1
7> gproc_pool:connect_worker("pool", "x").
true
8> gproc:get_value({p, l, {gproc_pool, "pool"}}, shared).
{1,round_robin}
9> gproc_pool:disconnect_worker("pool", "x").
true
10> gproc_pool:remove_worker("pool","x").
true
11> gproc_pool:add_worker("pool","x").
server backtrace: [{gproc_pool,add_worker_,2,
[{file,"src/gproc_pool.erl"},{line,613}]},
{gproc_pool,handle_call_,3,
[{file,"src/gproc_pool.erl"},{line,510}]},
{gproc_pool,handle_call,3,
[{file,"src/gproc_pool.erl"},{line,493}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,585}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]
** exception error: no match of right hand side value 0
in function gproc_pool:call/1 (src/gproc_pool.erl, line 483)
12> gproc:get_value({p, l, {gproc_pool, "pool"}}, shared).
0
The Modules section (https://github.com/uwiger/gproc#modules) has links to docs, but they point at the @esl repository, not @uwiger.
Is that correct? I keep switching repositories when I follow these links...
Spec for reg_or_locate/1:
reg_or_locate(Key::key()) -> true
Correct would be:
reg_or_locate(Key::key()) -> {pid(), NewValue}
More documentation errors:
where and whereis_name return pid() | undefined, but the doc says "Otherwise this function will exit." That is not the current behavior.
Meanwhile, lookup_pid returns pid() or exits with badarg; this is not reflected in the doc so far, as exiting is not mentioned.
The idea is to allow writing statements like this:
<<"Process1">> ! Message
The Erlang compiler does allow such constructs, but of course a badarg exception will be raised at runtime.
I've written a sample parse transform that turns such constructions into
gproc:lookup_local_name(<<"Process1">>) ! Message
https://github.com/doubleyou/ptrans/tree/gproc
Of course, this needs to be improved (e.g. handling the global process registry as well), but at least it gives the basic idea. I'm also not sure where to place it; maybe it's better as a part of gproc itself rather than a separate module.
Quick question: have you considered making the dependencies optional? Ideally both, but at least edown, which is definitely not needed by projects that use gproc. Also, gen_leader might be made optional, since not everybody uses gproc_dist.
Thoughts?
Hi,
first example:
spawn(fun() -> gproc:reg({n, g, key_2}), gproc:unreg({n, g, key_2}) end).
This call behaves correctly: afterwards, the ETS tables on both of my nodes are empty.
second example:
spawn(fun() -> gproc:reg({n, g, key_1}) end).
In the second example I didn't call gproc:unreg({n,g,_}), and the slave node's ETS table still has the two records below:
{{n,g,key_1},n}          | <0.1089>
{<0.1089>,{n,g,key_1}}   | []
In the first example, in the slave node's delete_globals(Globals), Globals is [{{n,g,key_2},n},{<0.1123.0>,{n,g,key_2}}].
In the second example, however, Globals is [{{n,g,key_1},<0.1089.0>}], which is why the two records are not deleted.
I simply fixed this by editing the delete_globals function:
delete_globals(Globals) ->
    lists:foreach(
      fun({{_,g,_},T} = K) when is_atom(T) ->
              ets:delete(?TAB, K);
         ({{n,g,_} = Key, Pid}) when is_pid(Pid) ->  %% added clause
              ets:delete(?TAB, {Key, n}),            %% added clause
              ets:delete(?TAB, {Pid, Key});          %% added clause
         ({Key, Pid}) when is_pid(Pid); Pid==shared ->
              ets:delete(?TAB, {Key, Pid});          %% issue #87; I found this bug too
         ({Pid, Key}) when is_pid(Pid); Pid==shared ->
              ets:delete(?TAB, {Pid, Key})
      end, Globals).
I think the real bug is in the leader sending Globals, not here, but I was not able to find it, sorry.
Hello,
I ran into an issue where a globally registered name would not be automatically cleaned up by gproc if the name owner process crashed while the leader was unresponsive. My guess is that this is happening due to a lost DOWN notification. Steps to reproduce:
Start two nodes, dev1 and dev2 (dev1 is the leader):
([email protected])1> application:start(gproc).
ok
([email protected])2> nodes().
['[email protected]']
([email protected])3>
([email protected])1> application:start(gproc).
ok
([email protected])2> nodes().
['[email protected]']
([email protected])3>
([email protected])4> gproc_dist:get_leader().
'[email protected]'
Start a process on dev2 which registers a global name:
([email protected])5> spawn(fun() -> gproc:add_global_name(foobar), timer:sleep(10000) end).
<0.66.0>
Within the 10-second sleep, send dev1 to the background by hitting CTRL+Z.
registration is there:
([email protected])6> gproc:select({g,n}, [{'_', [], ['$$']}]).
[[{n,g,foobar},<0.66.0>,undefined]]
wait for dev1 to disappear from nodes:
([email protected])10> nodes().
[]
registration is still there:
([email protected])11> gproc:select({g,n}, [{'_', [], ['$$']}]).
[[{n,g,foobar},<0.66.0>,undefined]]
gproc refuses to register it:
([email protected])12> gproc:add_global_name(foobar).
** exception error: bad argument
in function gproc:add_global_name/1
called as gproc:add_global_name(foobar)
where/1 filters it out, since it's a local pid that is no longer alive:
([email protected])16> gproc:where({n,g,foobar}).
undefined
Is this behavior a bug or a feature? Is there a good way to cope with it?
Thank you!
I've added a package for gproc 0.3.0 to https://hex.pm/ (the Elixir package repository) as a community-maintained package (https://github.com/hexpm/community).
I'm using gproc master right now, and it's hex.pm policy to only publish upstream versions, so I'm still stuck with using gproc from GitHub in my project.
Would you consider releasing gproc 0.3.1 with the bugfixes currently in master?
FYI, hex.pm doesn't plan on being Elixir-only; work on an Erlang client is in progress, and in case you want to take over maintenance of the gproc package on hex.pm, that's an easy process.
When trying to use gproc_pool, I find myself having to jump through some hoops, and I'm not sure whether I'm doing something wrong, so I'm opening an issue so it can be tracked if there is one.
First off, the documentation for gproc_pool:new/1,3 says they return 'true', but they actually return 'ok'.
Next, the pick/1,2 functions seem to return an internal representation. For example:
Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V5.10.4 (abort with ^G)
1> application:start(gproc).
ok
2> gproc_pool:new (pool).
ok
3> gproc_pool:add_worker (pool, a).
1
4> gproc_pool:connect_worker (pool, a).
true
5> gproc_pool:add_worker (pool, b).
2
6> gproc_pool:connect_worker (pool, b).
true
7> gproc_pool:pick (pool).
{n,l,[gproc_pool,pool,1,a]}
8>
This means I can't do the natural sort of thing like
gen_server:cast (gproc_pool:whereis_worker (gproc_pool:pick (pool)), Msg)
but instead need to do something convoluted like
{n,l,[_,_,_,Id]} = gproc_pool:pick (pool),
gen_server:cast (gproc_pool:whereis_worker (Id), Msg)
which, while not bad, seems like a poor API; but I might be missing something completely, so please correct me if there is a better way to do this.
Oh, actually, I might have just found it; it looks like you can do
gen_server:cast (gproc:where (gproc_pool:pick (pool)), Msg)
Is that the expected way to use gproc_pool, or would it be better to have pick/1,2 (or maybe a wrapper pick_worker/1,2) do the unboxing and return a worker id? It would make the gproc_pool API a bit cleaner if you didn't have to call gproc functions in general use.
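Until such a pick_worker variant exists, a small wrapper seems to capture the pattern; a hedged sketch (cast_to_worker/2 is a made-up name):

```erlang
%% Hypothetical wrapper: pick a worker name from Pool, resolve it to
%% a pid via gproc:where/1, and cast Msg to it. gproc_pool:pick/1
%% returns false when the pool has no connected workers.
cast_to_worker(Pool, Msg) ->
    case gproc_pool:pick(Pool) of
        false      -> {error, no_workers};
        WorkerName -> gen_server:cast(gproc:where(WorkerName), Msg)
    end.
```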
Hi,
I discovered that gproc:reg/1 fails with a timeout once I start loading my system a bit. Would infinity be a better timeout? If so, I could submit a patch. The functions I have in mind are gproc:call and gproc_pool:call.
/Klas
Hi,
I use mreg to register some properties for a process, but when the process dies, the properties are still in the gproc table.
I dug into the code and found that gproc_lib:insert_many calls gproc_lib:ensure_monitor, which inserts a {Pid, Scope} record into the table.
That makes the ets:insert_new in gproc:monitor_me fail, so no monitor is set up.
Right now the only option for updating a property value is to unregister it and register it again, which is not atomic. ets:update_element/3 could help here.
I am not clear on pub/sub from the existing docs. I have a gen_server named ex_serv which has a function dothis/[0,1,2]. I tried to subscribe with gproc_ps:subscribe(l, ex_serv) / gproc_ps:subscribe(l, {ex_serv, dothis}) and to publish with gproc_ps:publish(l, ex_serv, dothis) / gproc_ps:publish(l, {ex_serv, dothis}, MSG), but it did not work. I tried various options, and also tried gproc:reg and gproc:send; nothing works. I was expecting that when a message gets published, it would call/deliver that message to ex_serv:dothis. Am I doing something wrong?
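For what it's worth, my reading of gproc_ps.erl is that publish never calls a function on the subscriber: it sends a {gproc_ps_event, Event, Msg} message to each subscribed process, which a gen_server must handle in handle_info/2. A hedged sketch (my_event and do_this/1 are placeholders):

```erlang
%% In the subscribing gen_server (sketch):
init([]) ->
    gproc_ps:subscribe(l, my_event),   %% subscribes the *calling* process
    {ok, #state{}}.

%% Published events arrive as plain messages in the mailbox:
handle_info({gproc_ps_event, my_event, Msg}, State) ->
    do_this(Msg),                      %% placeholder for your handler
    {noreply, State}.

%% Elsewhere, any process can publish:
%%   gproc_ps:publish(l, my_event, <<"payload">>).
```

So the event name is an arbitrary term you choose, not the name of a module or function to be invoked.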
Not sure if there's something I'm missing, but here's the output with all the commands and info (hopefully).
anthony.molinaro@nymwork:5> git clone git@github.com:uwiger/gproc.git
Cloning into 'gproc'...
remote: Counting objects: 1658, done.
remote: Total 1658 (delta 0), reused 0 (delta 0), pack-reused 1658
Receiving objects: 100% (1658/1658), 1.90 MiB | 81.00 KiB/s, done.
Resolving deltas: 100% (1038/1038), done.
Checking connectivity... done.
anthony.molinaro@nymwork:6> cd gproc
anthony.molinaro@nymwork:7> git checkout 0.3.1
Note: checking out '0.3.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b new_branch_name
HEAD is now at 54a3b20... Merge pull request #67 from sebmaynard/master
anthony.molinaro@nymwork:9> make
/usr/local/bin/rebar get-deps
==> gproc (get-deps)
/usr/local/bin/rebar compile
==> gproc (compile)
Compiled src/gproc_pt.erl
Compiled src/gproc_sup.erl
Compiled src/gproc_ps.erl
Compiled src/gproc_monitor.erl
Compiled src/gproc_init.erl
Compiled src/gproc_info.erl
Compiled src/gproc_lib.erl
Compiled src/gproc_bcast.erl
Compiled src/gproc_pool.erl
Compiled src/gproc_app.erl
src/gproc_dist.erl:23: Warning: behaviour gen_leader undefined
Compiled src/gproc_dist.erl
Compiled src/gproc.erl
anthony.molinaro@nymwork:11> erl -pa ebin
Erlang/OTP 17 [erts-6.2] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
Eshell V6.2 (abort with ^G)
1> application:start(gproc).
ok
2> gproc_pool:ptest(10, 100, claim, []).
add_worker(897, a) -> 1; Ws = [{a,1}]
add_worker(897, b) -> 2; Ws = [{a,1},{b,2}]
add_worker(897, c) -> 3; Ws = [{a,1},{b,2},{c,3}]
add_worker(897, d) -> 4; Ws = [{a,1},{b,2},{c,3},{d,4}]
add_worker(897, e) -> 5; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5}]
add_worker(897, f) -> 6; Ws = [{a,1},{b,2},{c,3},{d,4},{e,5},{f,6}]
worker stats (897):
[{a,1},{b,1},{c,1},{d,1},{e,1},{f,1}]
=ERROR REPORT==== 12-Mar-2015::15:37:43 ===
Error in process <0.49.0> with exit value: {{badmatch,{14,false}},[{gproc_pool,test_run2,5,[{file,"src/gproc_pool.erl"},{line,984}]},{timer,tc,3,[{file,"timer.erl"},{line,194}]},{gproc_pool,'-ptest/4-fun-0-',2,[{file,"src/gproc_pool.erl"},{line,901}]}]}
=ERROR REPORT==== 12-Mar-2015::15:37:43 ===
Error in process <0.50.0> with exit value: {{badmatch,{11,false}},[{gproc_pool,test_run2,5,[{file,"src/gproc_pool.erl"},{line,984}]},{timer,tc,3,[{file,"timer.erl"},{line,194}]},{gproc_pool,'-ptest/4-fun-0-',2,[{file,"src/gproc_pool.erl"},{line,901}]}]}
...
=ERROR REPORT==== 12-Mar-2015::15:37:43 ===
Error in process <0.48.0> with exit value: {badarg,[{gproc,set_value,[{n,l,[gproc_pool,897,5,e]},0],[{file,"src/gproc.erl"},{line,1313}]},{gproc_pool,try_claim,3,[{file,"src/gproc_pool.erl"},{line,461}]},{gproc_pool,claim_,2,[{file,"src/gproc_pool.erl"},{line,434}]},{timer,tc,3,[{file...
** exception error: no function clause matching
gproc_pool:'-collect/1-fun-0-'({{badmatch,{11,false}},
[{gproc_pool,test_run2,5,
[{file,
"src/gproc_pool.erl"},
{line,984}]},
{timer,tc,3,
[{file,"timer.erl"},
{line,194}]},
{gproc_pool,
'-ptest/4-fun-0-',2,
[{file,
"src/gproc_pool.erl"},
{line,901}]}]},
{[],[]}) (src/gproc_pool.erl,
line 913)
in function lists:foldr/3 (lists.erl, line 1274)
in call from lists:foldr/3 (lists.erl, line 1274)
in call from gproc_pool:collect/1 (src/gproc_pool.erl, line 913)
in call from gproc_pool:ptest/4 (src/gproc_pool.erl, line 903)
3>
Hi!
I have a supervisor that starts another supervisor using the simple_one_for_one strategy, and I have the pid of a running process. How do I register this pid in gproc from the first supervisor?
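One thing worth noting: gproc:reg/1,2 always registers the *calling* process, so a supervisor cannot directly register a child's pid. The usual pattern is for the child to register itself; a minimal sketch (the name term `{my_worker, Name}` is made up for illustration), e.g. in a gen_server's init/1:

```erlang
%% The child registers itself on startup; {n,l,...} is a unique local name.
%% 'Name' is whatever term identifies this worker (hypothetical here).
init(Name) ->
    true = gproc:reg({n, l, {my_worker, Name}}),
    {ok, Name}.
```

If registering from outside the process really is required, newer gproc versions provide gproc:reg_other/2,3 for registering a key on behalf of another pid, though availability depends on the gproc version in use.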
I noticed gproc's ets table is of type ordered_set and that it sets {write_concurrency, true}, yet the Erlang docs say:
In the current implementation, table type ordered_set is not affected by this option.
Are the Erlang docs mistaken, or is gproc's use of {write_concurrency, true} for naught?
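A quick check in a plain Erlang shell (no gproc needed) shows that ets accepts the option for ordered_set without error, which is why gproc can pass it harmlessly:

```erlang
%% ets:new/2 accepts {write_concurrency, true} for ordered_set tables
%% regardless of whether the release actually applies it.
T = ets:new(t, [ordered_set, public, {write_concurrency, true}]),
ets:info(T, write_concurrency).   %% reports the table's setting
```

On the OTP releases covered by the quoted doc sentence, the option was indeed a no-op for ordered_set; OTP 22 later added real write_concurrency support for ordered_set tables.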
Hello.
I think GProc is used in hundreds of projects, but I cannot find any information about its license. Can you comment on this situation? Are you planning to attach a LICENSE file to the project?
Here is one of the opinions I've found on the net:
"Because I did not explicitly indicate a license, I declared an implicit copyright without explaining how others could use my code. Since the code is unlicensed, I could theoretically assert copyright at any time and demand that people stop using my code. Experienced developers won't touch unlicensed code because they have no legal right to use it."
I may be asking something stupid, but I need to know :)
I think I'm hitting race conditions using a shared local counter. Is there any way to register *or* update a shared local counter atomically? I imagine that's possible, since the shared counter is unique and must be linked to gproc itself, right?
If it's not possible, how could I work this out or may be add it to gproc? Any thoughts?
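One pattern that may help, sketched under the assumption that gproc's documented shared-counter API is in use: create the shared counter once, treating the badarg raised for an already-registered counter as benign, then update it.

```erlang
%% Create-if-missing, then update. add_shared_local_counter/2 registers
%% {c,l,Name} under the 'shared' pseudo-owner, so the counter is not
%% tied to the lifetime of whichever process created it. If the counter
%% already exists, the add raises badarg, which we deliberately swallow
%% with catch; the subsequent update is then safe either way.
incr_shared(Name, Incr) ->
    catch gproc:add_shared_local_counter(Name, 0),
    gproc:update_shared_counter({c, l, Name}, Incr).
```

Even if two processes race on the add, one succeeds and the other's badarg is swallowed, so both updates land on the same counter.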
The module hyperlinks in README.md are absolute links to esl/gproc. If you are browsing a particular tag of uwiger/gproc and then click such a link, you end up on master of esl/gproc. Rather confusing.
The other day I noticed something similar for plain_fsm.
hello,
I am trying to keep track of the number of users connected to a process inside Cowboy (for a websocket connection).
My code is as follows:
Inside of cowboy websocket_init:
gproc:reg({c,l,wsCounter}),
gproc:update_shared_counter({c,l,wsCounter},1)
Error Stacktrace:
** Stacktrace: [{ets,update_counter,[gproc,{{c,l,wsCounter},shared},{3,1}],[]},
{gproc_lib,update_counter,3,
[{file,"src/gproc_lib.erl"},{line,370}]},
{gproc,update_shared_counter,2,
[{file,"src/gproc.erl"},{line,1299}]},
{erltest1_http_handler,websocket_init,3,
[{file,"src/erltest1_http_handler.erl"},{line,46}]},
Any thoughts?
Thanks
-A
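For context, the badarg looks consistent with the shared instance never having been created: gproc:reg({c,l,wsCounter}) creates a per-process counter owned by the caller, while update_shared_counter/2 looks up {{c,l,wsCounter}, shared}, which only exists after a call such as gproc:add_shared_local_counter/2. An alternative sketch for connection counting (assuming the goal is a live total) uses an aggregate counter, which sums all per-process instances and self-corrects when a connection process dies:

```erlang
%% Once, in a long-lived process (e.g. at application start):
true = gproc:reg({a, l, wsCounter}),    %% aggregate counter

%% In each websocket_init (runs in the connection process):
true = gproc:reg({c, l, wsCounter}, 1), %% per-connection counter

%% Anywhere, to read the current connection count:
Count = gproc:lookup_value({a, l, wsCounter}).
```

Because each {c,l,wsCounter} is owned by its connection process, gproc removes it automatically on exit and the aggregate value drops accordingly, with no explicit decrement needed.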
It looks like there is a call in gproc_lib to an unavailable function:
gproc_lib.erl:228: Call to missing or unexported function lists:keyreplace/3
keyreplace has arity 4:
keyreplace(Key, N, TupleList1, NewTuple)
Regards, Roman