erlmongo's People

Contributors

gotthardp, mkrogemann, sergejjurecko, spoowy, tobaloidee, zephyrean


erlmongo's Issues

Big slowdown when finding documents far from start of collection

I have been measuring the performance of fetching documents as a function of the skip parameter, using a well-known timing function.

Performance degrades sharply as skip grows:

([email protected])269> perf(M, find, [<<"test">>, [], undefined, 0, 10], 10).
Range: 0 - 1000 mics
Median: 0 mics
Average: 100 mics
0
([email protected])270> perf(M, find, [<<"test">>, [], undefined, 10, 10], 10).
Range: 0 - 1000 mics
Median: 0 mics
Average: 200 mics
0
([email protected])271> perf(M, find, [<<"test">>, [], undefined, 100, 10], 10).
Range: 0 - 1000 mics
Median: 0 mics
Average: 200 mics
0
([email protected])272> perf(M, find, [<<"test">>, [], undefined, 1000, 10], 10).
Range: 0 - 1000 mics
Median: 0 mics
Average: 300 mics
0
([email protected])273> perf(M, find, [<<"test">>, [], undefined, 10000, 10], 10).
Range: 1000 - 2000 mics
Median: 1000 mics
Average: 1200 mics
1000
([email protected])274> perf(M, find, [<<"test">>, [], undefined, 100000, 10], 10).
Range: 9000 - 10000 mics
Median: 9000 mics
Average: 9400 mics
9000
([email protected])275> perf(M, find, [<<"test">>, [], undefined, 1000000, 10], 10).
Range: 90000 - 92000 mics
Median: 91000 mics
Average: 91300 mics
91000
([email protected])276> 

Similar tests with the mongo command-line client showed very little degradation compared to the above.

db.test.find().skip(SKIP).limit(10);

What could be the reason?
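For what it's worth, large skips are linear-cost on the MongoDB server too, since it must walk past every skipped document; a common workaround is range-based paging on an indexed field instead of skip. A rough sketch follows; the nested {"$gt", ...} operator form is an assumption, not verified against erlmongo:

```erlang
%% Range-based paging sketch: instead of skip, query past the last _id of
%% the previous page so the server can seek via the index.
%% The nested {"$gt", ...} operator form is an assumption.
{ok, Page} = M:find(<<"test">>, [{"_id", [{"$gt", {oid, LastId}}]}],
                    undefined, 0, 10).
```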

multiple mongo servers (not known at compile time)

We have multiple mongo servers whose IP addresses and host names we do not know at 'compile' time. I wanted to know whether it is possible to use this driver to connect to them at run time.

So far it appears that mongodb is expected to be a process and has to be restarted if I have to switch to a different mongo host. That would mean I cannot simultaneously connect to and query multiple mongo databases running on different hosts.

I would like to ask whether I am misinterpreting the functionality available in this driver, and if so, how do I manage connections to multiple mongodb hosts?

thank you

How is this installed?

I would appreciate it if you could let me know how to install this module on Ubuntu (I use 19.04). I tried running make, but there was no makefile.

I got as far as this:

rebar co

and it did some compilation, but I'm not sure what else to do (I am a newbie in Erlang).

Find limit & how to get all documents from collection

Hi,

How do I set the maximum find limit? I found that when the limit is set to 0, only 101 documents are returned. Say I have a total of 122481 documents according to Mong:count("collection"); this is what I did to get all of them: {ok, A} = Mong:find("collection", [], undefined, 0, 0), but length(A) is 101. I also tried setting the limit to 150000, but length(A) (13186) still does not match the count. Testing from the mongo console, var x = db.collection.find().limit(0) gives x.length() equal to db.collection.count(), which is 122481.

Thanks,
Chak
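For context, 101 documents is MongoDB's default first batch for a query; the remainder has to be fetched with getMore. A rough sketch using the cursor API referenced in the cursor-fix issue elsewhere on this page (the exact return-tuple shapes are taken from that issue, but the overall loop is an assumption, not verified against the driver):

```erlang
%% Sketch: drain a cursor until the server reports it is exhausted.
%% {ok, Docs} / {done, Docs} shapes follow the getMore description in the
%% "Fix for mongodb cursor" issue on this page.
fetch_all(Mong, Col, Cursor, Acc) ->
    case Mong:getMore(Col, Cursor) of
        {done, Docs} -> Acc ++ Docs;                           % cursor exhausted
        {ok, Docs}   -> fetch_all(Mong, Col, Cursor, Acc ++ Docs)
    end.
```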

not_connected

test() ->
    application:start(erlmongo),
    mongodb:singleServer(testmongo),
    ok = mongodb:connect(testmongo),
    M = mongoapi:new(testmongo, <<"test">>),
    Data = M:find("mydoc", [{'or', [{"a", 1}, {"i", 11}]}], undefined, 0, 100),
    Data.

This gives me the output:
not_connected

Please help.
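One likely cause, assuming mongodb:connect/1 establishes the socket asynchronously (an assumption about the driver, not confirmed here), is that find runs before the connection is actually up. A crude retry sketch:

```erlang
%% Retry until the driver stops answering not_connected (hypothetical
%% workaround; assumes connect/1 returns before the socket is ready).
wait_find(_M, 0) ->
    {error, not_connected};
wait_find(M, Retries) ->
    case M:find("mydoc", [{'or', [{"a", 1}, {"i", 11}]}], undefined, 0, 100) of
        not_connected -> timer:sleep(100), wait_find(M, Retries - 1);
        Result        -> Result
    end.
```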

How can I update multiple fields?

I have 4 docs in a collection like the following:
{ "_id" : 1, "name" : "A", "age" : 10 }
{ "_id" : 2, "name" : "B", "age" : 20 }
{ "_id" : 3, "name" : "C", "age" : 20 }
{ "_id" : 4, "name" : "B", "age" : 40 }

And I want to update the age of all documents whose name is "B" to 50, via a multi update. Please help.
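In the mongo shell this would be db.coll.update({name: "B"}, {$set: {age: 50}}, false, true). A hedged erlmongo sketch follows; both the update/4 arity and the flag representation are assumptions, not verified against the driver's API:

```erlang
%% Hypothetical sketch: selector, $set modifier, and a "multi" flag.
%% The arity and the [multi] flag form are assumptions.
Mong:update("people",
            [{"name", "B"}],               % selector: every doc named "B"
            [{"$set", [{"age", 50}]}],     % set age to 50
            [multi]).                      % hypothetical multi-update flag
```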

aggregate

Hi.
Please tell me how to use the "aggregate" function.
Thanks.
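A runCmd-based sketch, following the proplist shape shown in the "About aggregate queries question" issue elsewhere on this page (the collection and field names here are placeholders):

```erlang
%% aggregate via runCmd: match, then group-and-sum (placeholder names).
Mong:runCmd([
    {"aggregate", "orders"},
    {"cursor", #{}},
    {"pipeline", {array, [
        [{"$match", #{status => 0}}],
        [{"$group", [{"_id", "null"},
                     {"total", [{"$sum", "$amount"}]}]}]
    ]}}
]).
```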

parameterized modules not supported

Eshell V5.10.1 (abort with ^G)

==> erlmongo (compile)
src/mongoapi.erl:1: parameterized modules are no longer supported
src/mongoapi.erl:7: variable 'DB' is unbound
src/mongoapi.erl:7: variable 'Pool' is unbound
src/mongoapi.erl:9: variable 'DB' is unbound
src/mongoapi.erl:9: variable 'Pool' is unbound
src/mongoapi.erl:12: variable 'DB' is unbound
src/mongoapi.erl:12: variable 'Pool' is unbound
src/mongoapi.erl:31: variable 'DB' is unbound
src/mongoapi.erl:37: variable 'Pool' is unbound
src/mongoapi.erl:39: variable 'Pool' is unbound
src/mongoapi.erl:43: variable 'DB' is unbound
src/mongoapi.erl:43: variable 'Pool' is unbound
src/mongoapi.erl:69: variable 'Pool' is unbound
src/mongoapi.erl:76: variable 'Pool' is unbound
src/mongoapi.erl:101: variable 'Pool' is unbound
src/mongoapi.erl:104: variable 'Pool' is unbound
src/mongoapi.erl:113: variable 'Pool' is unbound
src/mongoapi.erl:117: variable 'Pool' is unbound
src/mongoapi.erl:134: variable 'Pool' is unbound
src/mongoapi.erl:136: variable 'Pool' is unbound
src/mongoapi.erl:140: variable 'Pool' is unbound
src/mongoapi.erl:144: variable 'Pool' is unbound
src/mongoapi.erl:206: variable 'Pool' is unbound
src/mongoapi.erl:216: variable 'Pool' is unbound
src/mongoapi.erl:251: variable 'Pool' is unbound
src/mongoapi.erl:277: variable 'Pool' is unbound
src/mongoapi.erl:292: variable 'Pool' is unbound
src/mongoapi.erl:303: variable 'Pool' is unbound
src/mongoapi.erl:314: variable 'Pool' is unbound
src/mongoapi.erl:361: variable 'DB' is unbound
src/mongoapi.erl:361: variable 'Pool' is unbound
src/mongoapi.erl:367: variable 'DB' is unbound
src/mongoapi.erl:367: variable 'Pool' is unbound
src/mongoapi.erl:373: variable 'DB' is unbound
src/mongoapi.erl:373: variable 'Pool' is unbound
src/mongoapi.erl:377: variable 'DB' is unbound
src/mongoapi.erl:377: variable 'Pool' is unbound
src/mongoapi.erl:396: variable 'DB' is unbound
src/mongoapi.erl:398: variable 'DB' is unbound
src/mongoapi.erl:400: variable 'DB' is unbound
src/mongoapi.erl:402: variable 'DB' is unbound
src/mongoapi.erl:404: variable 'Pool' is unbound
src/mongoapi.erl:442: variable 'DB' is unbound
src/mongoapi.erl:442: variable 'Pool' is unbound
src/mongoapi.erl:514: variable 'Pool' is unbound
src/mongoapi.erl:516: variable 'DB' is unbound
src/mongoapi.erl:552: variable 'Pool' is unbound
src/mongoapi.erl:562: variable 'DB' is unbound
src/mongoapi.erl:562: variable 'Pool' is unbound
src/mongoapi.erl:582: variable 'Pool' is unbound
src/mongoapi.erl:244: Warning: variable 'Q' exported from 'case' (line 238)
src/mongoapi.erl:396: Warning: variable 'Col' exported from 'case' (line 386)
src/mongoapi.erl:398: Warning: variable 'Col' exported from 'case' (line 386)
src/mongoapi.erl:400: Warning: variable 'Col' exported from 'case' (line 386)
src/mongoapi.erl:402: Warning: variable 'Col' exported from 'case' (line 386)
src/mongoapi.erl:404: Warning: variable 'Cmd' exported from 'case' (line 394)
src/mongoapi.erl:404: Warning: variable 'DB' exported from 'case' (line 394)

Any chance to get it fixed?

TIA,
--Vladimir

Example save code does not work

-record(mydoc, {name, i}).
Mong:save(#mydoc{name = "MyDocument", i = 10}).

** exception error: bad argument
     in function  element/2
        called as element("MyDocument",
                          {[recindex,docid,name,i,{address,2},tags],
                           [recindex,docid,street,city,country],
                           [recindex,docid,filename,contentType,length,chunkSize,
                            uploadDate,aliases,metadata,md5],
                           [recindex,docid,files_id,n,data]})
     in call from mongodb:recfields/1 (src/mongodb.erl, line 874)
     in call from mongodb:encoderec/1 (src/mongodb.erl, line 899)
     in call from mongoapi:save/3 (src/mongoapi.erl, line 79)

Fix for mongodb cursor

There's a minor bug that prevents working with cursors, and I have a fix for it.
The getMore function clause in mongoapi.erl for the case when Rec is just a collection name,

getMore(Rec, Cursor) when is_list(Rec); is_binary(Rec) ->

is identical to another clause and doesn't work.
So lines 297-300:

{done, Result} ->
    {done, mongodb:decoderec(Rec, Result)};
{ok, Result} ->
    {ok, mongodb:decoderec(Rec, Result)}

should be replaced with:

{done, Result} ->
    {done, mongodb:decode(Result)};
{ok, Result} ->
    {ok, mongodb:decode(Result)}

Also, in erlmongo.hrl there is a QUER_OPT_CURSOR constant set to 2, which is not correct. This option should only be set for tailable cursors (and, by the way, in the current mongodb version this constant has changed to 1). But tailable cursors only work on capped collections, so to use cursors with normal collections you need to either set it to 0 or not add it in the mongoapi:cursor functions at all.

Mongo custom _id

Hi, me again...

About the custom _id: I found that only an object id is allowed for _id, while other custom ids should be supported too. For example, a user could pass Mong:save("col", [{"_id", "1234567890"}, {"a", 1}]). At the moment this results in a duplicated _id. As far as I know, MongoDB allows custom ids, right?

Overloading the driver

Hi,

The process that actually writes to the database can easily be overloaded. If MongoDB is busy or slow for whatever reason, messages can back up in the process's queue to the point where it cannot recover. Even a short burst can cause this problem, and although MongoDB recovers almost immediately, the driver process (due to the selective receives) never recovers to the point where it can clear out the queue, resulting in the app ultimately crashing.

When this occurs, the throughput goes down from around 6,000 messages per second (and being able to handle spikes of 10,000 with no problem) to about 30 per second - far less than the number coming in, which of course ultimately makes things even slower.

If I kill the producer, the process will recover eventually but it can take hours for that to happen.

Not sure what additional information to provide but I think it's caused by a selective receive and the message queue growing too fast.

Cheers,

Pete

Save to an Mongo array

Hi,
I can't find how to format my data in Erlang, or which encoding style to choose, to be able to save an array in a Mongo document.

Example of a Mongo document containing an array:
{
"_id" : MyId,
datas : [0,1,2,3]
}

This type of document is correct in Mongo format, but how do I create such a document with the erlmongo API?
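Judging by the aggregate example elsewhere on this page, erlmongo marks BSON arrays with an {array, List} wrapper; a sketch along those lines (whether a plain list is also accepted may depend on the driver version):

```erlang
%% Save a document whose "datas" field is a BSON array.
Mong:save("mycol", [{"_id", MyId},
                    {"datas", {array, [0, 1, 2, 3]}}]).
```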

Incorrect behaviour of mongodb:datetime_to_now function

I came across some strange behaviour while working with dates. The following example illustrates a 2-hour difference between the input and the mongo representation:

1> erlang:now().
{1282,737069,361369}
2> calendar:now_to_local_time({1282,742215,0}).
{{2010,8,25},{15,16,55}}
3> 
3> N=mongodb:datetime_to_now(erlang:localtime()).
{1282,744532,0}
4> calendar:now_to_local_time(N).                
{{2010,8,25},{15,55,32}}
5> erlang:localtime().
{{2010,8,25},{13,56,18}}

I want to check whether I am missing something or whether this is a bug.

Kind Regards,
Martin
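The 2-hour offset looks like a local/UTC mix-up: BSON datetimes are UTC, so if datetime_to_now treats its input as UTC, passing erlang:localtime() shifts the stored value by the zone offset. A sketch of feeding it universal time instead (assuming a UTC datetime is indeed the expected input):

```erlang
%% Convert local time to UTC before encoding (sketch; assumes
%% datetime_to_now expects a UTC datetime).
Local = erlang:localtime(),
%% local_time_to_universal_time_dst/1 returns a list (it can be ambiguous
%% around DST transitions); take the first candidate here.
[Utc | _] = calendar:local_time_to_universal_time_dst(Local),
N = mongodb:datetime_to_now(Utc).
```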

Incorrect behaviour on insert with unique index

When trying to insert duplicate data into a collection with a unique index, defined as db.users.ensureIndex({login:1},{unique:true}), mongoapi:save returns a new oid for the duplicate data, as if it were a correct insert.

The MongoDB log prints an exception: 11000 E11000 duplicate key error index.

Authentication failed handling?

Hello. Maybe this is just a question: how should an authentication failure be handled? Right now I get a badkey error.

00:38:46.147 [error] Error in process <0.4023.0> on node lorawan@debian with exit value:
{{badkey,<<"payload">>},[{maps,get,[<<"payload">>,#{<<"code">> => 18,
    <<"errmsg">> => <<"Authentication failed.">>,<<"ok">> => 0.0}],[]},
    {mongodb,scram_step,2,[{file,"/home/.../erlmongo/src/mongodb.erl"},{line,876}]},
    {mongodb,connection,3,[{file,"/home/.../erlmongo/src/mongodb.erl"},{line,770}]}]}

Is it the intended behaviour?

logo design proposal

Hi @SergejJurecko, good day! I am a graphics artist and would like to contribute a logo design for the project to make it more visible. I just need your permission first before I begin. Hoping for your positive feedback. Best regards! - Tobaloidee

Complex queries with proplists

Hi,

Not an issue per se, but would it be possible to add some examples of advanced queries using proplists? I am trying to do an or comparison (basically, whether time is between X and Y), but no matter what I try, I can't get it to work.

Basically, I'm not sure whether I should even be able to express those queries in the proplist format :)

Thanks in advance!

Cheers,

Pete
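The not_connected issue above happens to show erlmongo's proplist form of $or ({'or', [...]}). A time-range query ("time between X and Y") might look like the sketch below; the nested {"$gte", ...}/{"$lt", ...} operator form is an assumption, not verified against the driver:

```erlang
%% "time between X and Y" as a range query (operator nesting is assumed).
M:find("events",
       [{"time", [{"$gte", X}, {"$lt", Y}]}],
       undefined, 0, 100).
```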

Unable to call mongoapi:recinfo/2

Hello, first, thank you for this driver; support for grid-fs is great :). I have a problem while calling mongoapi:recinfo/2 before saving my record to the db. The Erlang runtime is unable to find a function with this arity; it gives me only mongoapi:recinfo/3. Please, what am I missing?

Regards,
Martin

central module for interfaces

Hi

erlmongo uses a parameterized module, so I have to keep a reference to it.
Is it possible to name an interface with an Erlang term (not just atoms) and call something like:
erlmongo:findOne(<<"my-interface">>, #mydoc{i = 10}, [#mydoc.name]).

Does erlmongo use a connection pool?
Is it possible to configure the size of the pool?

Thanks

Docs

Hi,

Where can I find more documentation and examples of use?
I'm new to Erlang and I have some doubts about how to use this module in my programs.

Regards,
Alexandre

Property list + binaries save/find issue

When saving, finding (by oid), saving again (updating), and then finding (by oid) once more, I get an error:

(jurassic_park@snark)44> {oid, Id} = Mong:save("bucket",P).
{oid,<<"00548f3c8f9ca762d8000021">>}
(jurassic_park@snark)45> {ok, Stored} = Mong:findOne("bucket",[{"_id",{oid, Id}}]).
{ok,[{<<"_id">>,{oid,<<"00548f3c8f9ca762d8000021">>}},
 {<<"key1">>,<<"val1">>},
 {<<"key2">>,<<"val2">>}]}
(jurassic_park@snark)46> Mong:save("bucket",Stored).                             
{oid,{oid,<<"00548f3c8f9ca762d8000021">>}}
(jurassic_park@snark)47> {ok, Stored2} = Mong:findOne("bucket",[{"_id",{oid, Id}}]).
** exception error: no match of right hand side value 
                <<118,97,108,49,2,107,101,121,50,0,4,0,0,0,118,97,108,50,0>>
 in function  mongodb:decode_value/2
 in call from mongodb:decode_next/2
 in call from mongodb:decode/2
 in call from mongoapi:find/6
 in call from mongoapi:findOne/3
(jurassic_park@snark)48> P.
[{"key1","val1"},{"key2","val2"}]

Version info
Tue Oct 27 08:45:04 db version v1.1.3-, pdfile version 4.5
Tue Oct 27 08:45:04 git version: 65a1590594ca5b3efd147179e902966982e33eed
Tue Oct 27 08:45:04 sys info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686

find not returning all fields

Hi,

I added some fields to the gfs_file record:

-record(gfs_file, {recindex = 3, docid, filename, contentType, length, chunkSize, uploadDate, aliases, metadata, md5, delId}).

When I find some file using:

Mongo:find(?MONGO_FILES_COLLECTION,#search{criteria = #gfs_file{docid = Docid}})

The filename and metadata fields are never returned.

Any hint about that?

Reading a file from gridfs which was written with an other client fails

Let there be a file on disk:
/usr/local/.../094509-ac0c558a-6008-11e4-9a5d-e7ef2b7188c7.mp3

This file is uploaded using pymongo or using mongofiles.
When I try to open it using erlmongo/gridfs using the following sequence:

Db1 = mongoapi:new("ouc3", "collection")
Pid = mongoapi:gfsOpen(<<"fs">>, #gfs_file{filename = <<"/usr/local/.../094509-ac0c558a-6008-11e4-9a5d-e7ef2b7188c7.mp3">>}, Db1).
Bin = mongoapi:gfsRead(Pid, 67248, Db1).
=> false

From what I see in the code the 5000 ms timeout is reached and false is returned.

Everything is fine when I try to do the same for a file written using the following sequence:
Pid = Db:gfsNew(?REC_GFS_BUCKET, FilePath, [{}]),
Db:gfsWrite(Pid, BinFile),
Db:gfsClose(Pid),

Also, if I download the file using mongofiles and listen to it (it is a WAV in my case), it all plays fine.
What I do not understand is why erlmongo fails to read those files.

Strange error with new version of mongodb

Hello, I came across a strange error while using erlmongo. After updating to the latest mongodb (db version v1.6.0, running on Ubuntu Lucid amd64), I noticed that there are problems with storing files into GridFS, and my log file contains the following errors (when trying to read the docid of a file after storing it to GridFS):

=ERROR REPORT==== 10-Aug-2010::13:59:33 ===
Error in process <0.2686.0> with exit value: {{badmatch,[{<<6 bytes>>,<<179 bytes>>},{<<4 bytes>>,13284},{<<2 bytes>>,0.000000e+00}]},[{mongodb,gfsflush,3}]}

I tried to pull the latest version of erlmongo and use it, but without any luck. Do you see the same error? I assume it is related to the upgrade to the latest mongodb, but I am not sure where to start.

Kind Regards
Martin

About aggregate queries question

mongoapi:runCmd([
    {"aggregate", ?ORDER_TABLE},
    {"cursor", #{}},
    {"pipeline", {array, [
        [{"$match", #{status => 0}}],
        [{"$group", [
            {"_id", "null"},
            {"total", [{"$sum", "$amount"}]}
        ]}]
    ]}}
], ?MONGO_MASTER),

the query result is:
[{<<"cursor">>,
[{<<"firstBatch">>,
{array,[[{<<"_id">>,<<"null">>},{<<"total">>,1900}]]}},
{<<"id">>,0},
{<<"ns">>,<<"paycenter_db.pay_order">>}]},
{<<"ok">>,1.0}]

Can you give me a correct demonstration?
Cursor is a mandatory parameter, but I am not sure how to use it. Do I need to close the cursor?

SSL support?

Hello,
It seems I am unable to establish a connection, and part of the reason could be that I am using MongoDB Atlas and need to establish an SSL connection.

Although I changed the TCP connections to SSL connections, I cannot get my replica set connected. The constant reconnect-reattempt loop has stopped, though.

Grateful for any advice!

Cheers,
Erwin

Duplicate results calling findOpt

Hello,
I receive duplicate results while calling findOpt. Using the mongo console, I found no duplicate records with the same query.

Db:findOpt(#log_event{}, undefined, [{sort, {#log_event.date, -1}}], 0, 0) 

Is it a bug, or am I not using findOpt correctly?

Kind Regards,
Martin

Error when querying oplog.rs

Not sure what's unique about oplog.rs, but querying it is causing errors in erlmongo.

(mynode@127.0.0.1)19> L:findOne(<<"oplog.rs">>, []).              
** exception error: no function clause matching 
                    mongodb:decode_value(17,
                                         <<1,0,0,0,23,64,74,79,18,104,0,0,0,0,
                                           0,0,0,0,0,2,111,112,0,2,0,0,0,...>>) 

Metadata of GridFS binary

Is it possible to retrieve the docid/metadata of a binary stored in GridFS? I want to use it as a reference between a document and a binary file.

Regards,
Martin

findandmodify

I'm trying to do

db.runCommand( { findandmodify: 'RandomStrings', query: {name: {'$ne': 'position'}}, sort: {}, remove: 1 });

using erlmong with:

runCmd("{ findandmodify: 'RandomStrings', query: {name: {'$ne': 'position'}}, sort: {}, remove: true}"),

I'm getting the following error:

[[{<<"errmsg">>,<<"no such cmd">>}, {<<"bad cmd">>,
[{<<"findandmodify: 'RandomStrings', query: {name: {'$ne': 'position'}}, sort: {}, remove">>,1}]}

Would you suggest anything better?

Thank you
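The "bad cmd" reply suggests the whole string was encoded as one key. runCmd also accepts a proplist (as in the aggregate example elsewhere on this page), so a hedged sketch of the same command would be:

```erlang
%% findandmodify as a proplist command document (sketch; the nested
%% {"$ne", ...} operator form is an assumption). The empty sort is
%% omitted here, since empty proplists reportedly fail to encode.
Mong:runCmd([
    {"findandmodify", "RandomStrings"},
    {"query", [{"name", [{"$ne", "position"}]}]},
    {"remove", true}
]).
```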

Pool Tuning

Hi,
I would like to know how to parameterize the number of connections.
Or, if I understood correctly, the number will increase each time a connection is needed but none is available?
Anyway, is it possible to prestart a number of connections?
Thanks.

Question about supported data types

Hello, I am looking for information about supported data types. I have used only strings and it worked just fine, but my question relates to the date/time data type. How can I filter/sort based on a column with date/time information? I am assuming that using a string is not the proper way.

Thank you,
Martin

Issue with null values in proplist

A value of null seems to work well, unless there is a tuple after the null value:

(jurassic_park@snark)6> P.
[{<<"prop1">>,<<"val1">>},{<<"prop2">>,null}]
(jurassic_park@snark)7> M:save("bucket", P).
{oid,<<"0007e2528f9ca73060000005">>}
(jurassic_park@snark)8> P1 = [{<<"prop1">>,<<"val1">>},{<<"prop2">>,null},{<<"prop3">>,<<"val3">>}].
[{<<"prop1">>,<<"val1">>},
 {<<"prop2">>,null},
 {<<"prop3">>,<<"val3">>}]
(jurassic_park@snark)9> 
(jurassic_park@snark)9> M:save("bucket", P1).                                   
{oid,<<"000954ed8f9ca73060000006">>}
(jurassic_park@snark)10> P2 = [{<<"prop1">>,<<"val1">>},{<<"prop2">>,true},{<<"prop3">>,<<"val3">>}].
[{<<"prop1">>,<<"val1">>},
 {<<"prop2">>,true},
 {<<"prop3">>,<<"val3">>}]
(jurassic_park@snark)11> M:save("bucket", P2).                                   
{oid,<<"000b1f348f9ca73060000007">>}
(jurassic_park@snark)12> 

So I can save the 3 proplists fine. However, in the mongo shell I get this:

> db.bucket.find()
{"_id" :  ObjectId( "0007e2528f9ca73060000005")  , "prop1" : "val1" , "prop2" : null}
{"_id" :  ObjectId( "000954ed8f9ca73060000006")  , "prop1" : "val1" , "prop2�prop3" : null , "" : BinData type: 108 len: 1635123200}
{"_id" :  ObjectId( "000b1f348f9ca73060000007")  , "prop1" : "val1" , "prop2" : true , "prop3" : "val3"}
> 

I am running this with 1.0.1, I was trying to see if the null was a problem with mongo itself.

Encode empty proplist

I'm trying to store a proplist, one of whose values is an empty proplist itself, and getting an error:
(jurassic_park@snark)33> P = [{"key1","value1"}, {"key2", []}].
[{"key1","value1"},{"key2",[]}]
(jurassic_park@snark)34> Mong:save("job", P).
** exception error: no function clause matching mongodb:encode_element({<<"key2">>,[]})
in function mongodb:'-encode/1-fun-0-'/2
in call from lists:foldl/3
in call from mongodb:encode/1
in call from mongoapi:save/3

Trying it as a list with 1 empty tuple did not work either:

(jurassic_park@snark)35> P1 = [{"key1","value1"}, {"key2", [{}]}].
[{"key1","value1"},{"key2",[{}]}]
(jurassic_park@snark)36> Mong:save("job", P1).                    
** exception error: bad argument
 in function  unicode:characters_to_binary/1
    called as unicode:characters_to_binary([{}])
 in call from mongodb:encode_cstring/1
 in call from mongodb:encode_element/1
 in call from mongodb:'-encode/1-fun-0-'/2
 in call from lists:foldl/3
 in call from mongodb:encode/1
 in call from mongoapi:save/3

If I go into the mongo shell, I can save an object with an empty value:

> db.job.save({'key1':'value1','key2':{}})  
> db.job.find()                           
{"_id" :  ObjectId( "4af0743b6198301749d63745")  , "key1" : "value1" , "key2" : {}}
> 

Problem with reading file from GridFS

Hello, me again :) While trying to read a file from GridFS (storing works like a charm), I get an arity error.

Code

FilePid = Db:gfsOpen(#gfs_file{docid = DocId}).
Bin = Db:gfsRead(FilePid, 1000).

SASL error message

=ERROR REPORT==== 15-Apr-2010::20:29:01 ===
Error in process <0.238.0> with exit value: {badarith,[{mongodb,gfs_proc,2}]}

I quickly looked at the source code, but with no luck. What am I missing? I followed the code from the readme file.

Best Regards,
tgrk

Eventual Consistency & getLastError

Hi,
in the API described in the readme I can't find a getLastError function.
Is it available anyway?
And what is the driver's default behavior regarding the "w" parameter, which allows ensuring consistency between replicas?
Thanks.
