abstractvector / node-dbf
An efficient dBase DBF file parser written in pure JavaScript
License: MIT License
What license is this released under?
I've tried the following code:
myFunction: function (req, res) {
    var uploadedFile = req.files['fileId'];
    var parser = new Parser(uploadedFile.path);
    parser.parse();
}
And I've got a mysterious 15 appearing in my console. If I comment out the parser.parse() line, the 15 no longer appears, so it is really coming from the parser.
I have a DBF file whose column names are partly alphabetic characters and partly unknown characters.
How can I get the value of a specific field in this case?
I have attached the dbf file as well as the screenshot of output that I get when I am reading dbf file.
Three years have passed since the last update. As a result, I was forced to add the updated source of node-dbf into my product repository. It would be better to just add a Node dependency and get 0.1.1 from npm.
I am using node-dbf version 0.2.1
I have float values (including negative values) in my DBF file. They're getting parsed as integers.
Logical fields are parsed to null because there is an error in parser.js on line 166:
switch (value) {
case ['Y', 'y', 'T', 't'].includes(value):
value = true;
break;
case ['N', 'n', 'F', 'f'].includes(value):
value = false;
break;
default:
value = null;
}
It should be:
switch(true){
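The full corrected block, as a sketch (extracted into a standalone function here for illustration; the actual code in parser.js assigns to `value` in place):

```javascript
// Corrected logical-field parsing: switch (true) runs the first case
// whose expression evaluates to true, unlike switch (value), which
// compares value against the boolean result of each case expression
// and therefore never matches.
function parseLogical(value) {
  switch (true) {
    case ['Y', 'y', 'T', 't'].includes(value):
      return true;
    case ['N', 'n', 'F', 'f'].includes(value):
      return false;
    default:
      return null;
  }
}
```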
In the module "node-dbf" there are the following problems:
1. The encoding breaks with large files. Replace
buffer = overflow + buffer //returns a utf-8 string
with
buffer = Buffer.concat([overflow, buffer], buffer.length + overflow.length); //returns a buffer
2. Fields/records of type "N" may not be integers (e.g. "N 15 3").
3. When a field of type "F" is equal to 0, processing returns NaN.
value = +value; //for field.type 'F','N' (the standard allows no leading zeros)
I would also like to ask you to add the ability to specify a custom string-decoding function. Replace
value = (buffer.toString @encoding).trim()
with
if (this.encodingFunction) {
value = (this.encodingFunction(buffer)).trim();
} else {
value = (buffer.toString(this.encoding)).trim();
}
and add support for the date type ("D"):
if (field.type === 'D') {
value = new Date(+value.slice(0,4), +value.slice(4,6), +value.slice(6,8));
}
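One caveat with the snippet above (this is a sketch, not the library's actual code): JavaScript Date months are zero-based, so the month component needs a -1 or every parsed date lands one month late.

```javascript
// Parse a DBF 'D' field, stored as the 8-character string 'YYYYMMDD'.
function parseDateField(value) {
  return new Date(
    +value.slice(0, 4),     // year
    +value.slice(4, 6) - 1, // month: DBF stores 01-12, JS Date expects 0-11
    +value.slice(6, 8)      // day of month
  );
}
```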
Do you plan to make a dbf-writer? In my task I need to export data into DBF... it would be great if you added such functionality.
Thanks
Node v4.3.1
This code doesn't work:
if (overflow !== null) {
    buffer = overflow + buffer;
}
It actually decreases the buffer length.
This is my read log for 2 chunks:
read 65536
read + overflow 65536
record 93
overflow 86
read 65536
read + overflow 64298
The first record in the 2nd chunk was parsed wrong.
Buffer.concat does the trick:
if (overflow !== null) {
buffer = Buffer.concat([overflow, buffer]);
}
Read log for 3 chunks:
read 65536
read + overflow 65536
record 93
overflow 86
read 65536
read + overflow 65622
record 93
overflow 57
read 65536
read + overflow 65593
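A minimal sketch of why the string version shrinks the buffer (the byte values here are illustrative, not taken from the parser):

```javascript
// '+' coerces each Buffer to a string via toString('utf8'); a multi-byte
// character split across two chunks becomes two invalid byte sequences,
// each decoded as U+FFFD, corrupting the data and changing its length.
const whole = Buffer.from('é', 'utf8');  // two bytes: 0xC3 0xA9
const overflow = whole.subarray(0, 1);   // first byte, left over from chunk 1
const chunk = whole.subarray(1);         // second byte, start of chunk 2
const wrong = overflow + chunk;          // string of two replacement characters
const right = Buffer.concat([overflow, chunk]).toString('utf8'); // 'é'
```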
I noticed that the '@deleted' property in the record objects was not always accurate (possibly related to this issue).
I found in the .DBF specification (and confirmed) that the deleted flag holds one of two byte values: 2Ah ('*') for deleted and 20h (' ', blank) for active.
The assignment for record['@deleted'] in lib/parser.js was only accounting for one. I fixed this by (locally) changing the assignment to:
'@deleted': (buffer.slice(0, 1))[0] === 42, // 42 === '*'.charCodeAt(0)
I hope this helps
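For reference, the same check as a small standalone predicate (a sketch; note that bytes read from a Buffer are numbers, so comparing against the character '*' can never match — only the numeric comparison does any work):

```javascript
// DBF deletion flag: the first byte of each record is 0x2A ('*') when the
// record is deleted and 0x20 (' ') when it is active.
function isDeleted(recordBuffer) {
  return recordBuffer[0] === 0x2A; // 0x2A === '*'.charCodeAt(0) === 42
}
```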
Hi, I have this problem reading a DBF file with this field:
PRECIO1,N,14,2
That is a float (I opened the file in LibreOffice Calc).
Any ideas on how I can open it?
As you may have noticed from the lack of activity, this library is no longer being maintained. I no longer have any use cases for DBF files, and many of the issues, bugs and feature requests refer to capabilities that the test files I was using don't cover.
If you're interested in taking over this library and updating it, please let me know by leaving a comment on this issue along with a preferred way for me to contact you.
This code is really helpful and performs well.
But I had problems with DBF columns having lengths >128 characters, which are parsed wrong (i.e. column lengths appear as <0). IMHO, there is a bug in the function convertBinaryToInteger() in header.js. The code uses buffer.readIntLE(0, buffer.length) but should use the unsigned version instead: buffer.readUIntLE(0, buffer.length).
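The signed/unsigned difference is easy to demonstrate (the byte value below is illustrative, not from a real header):

```javascript
// A single-byte column length of 200: readIntLE treats the high bit as a
// sign bit and yields a negative number; readUIntLE does not.
const len = Buffer.from([200]);
const signed = len.readIntLE(0, 1);    // -56
const unsigned = len.readUIntLE(0, 1); // 200
```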
Thanks,
Ken
I tried to use the latest commit, which uses stream parsing (@sedenardi).
But all of my boolean fields are reversed after this pull.
Missing encoding in parse.js:
_fs2.default.createReadStream(_this2.filename);
should be
_fs2.default.createReadStream(_this2.filename, 'latin1');
or
var stream = _fs2.default.createReadStream(_this2.filename, _this2.header.encoding);
Love your library, very easy to use and works pretty well.
I'm having an issue where if I call another function from the 'record' event that performs some sort of operation, like a mysql query, it only performs the operation for the first record. It won't perform the function for subsequent records until after the 'end' event.
For example:
parser.on('start', function() {
console.log('start parsing');
});
parser.on('record', function(record) {
var sql = 'insert into table(field1,field2) values(?,?);';
var inserts = [];
inserts.push(record.field1);
inserts.push(record.field2);
console.log('inserting');
var cmd = connection.format(sql, inserts);
connection.query(cmd, function(err, res) {
if (err) {
console.log('MYSQL: ' + sql + ' \n' + err);
return;
}
console.log('inserted successfully');
});
});
parser.on('end', function() {
console.log('end parsing');
});
If I run the parser on a dataset of 100 records, I'll see 'start parsing', 'inserting' x 100, 'end parsing', then 'inserted successfully' x 100. In my real-world application I'm reading datasets with several hundred thousand rows, so this issue is much more apparent.
I'm not sure what could be causing the issue, perhaps reading the file as a stream would allow the db operation to flow through. Let me know if you would like any more specific details or troubleshooting, I'd be more than happy to help. Again, thanks for your work on this project, it's proven very useful to me.
Hello.
npm install node-dbf --global
node-dbf convert 123.dbf > 123.csv
Error:
node-dbf-convert(1) does not exist, try --help
Hi, how can I convert a DBF file to JSON?
Thank you
Hi.
Is it possible to read only one record, instead of the whole DBF file? I want to look up a specific value, a CARD ID, from a person ID. I have the ID, and I want to get the CARD ID from it without reading the whole file (SQL style).
Thanks in advance.
I'm parsing some GPS coordinates and it appears that all numbers are parsed as integers. I tracked it down to parseInt() in parseField() in parser.js.
I changed that to parseFloat() and now I'm getting the correct decimal numbers.
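A quick illustration of the difference (the sample value is made up):

```javascript
// parseInt stops at the decimal point; parseFloat keeps the fraction.
const raw = '  -12.345'; // e.g. an 'N 14 3' field with leading padding
const asInt = parseInt(raw, 10); // -12
const asFloat = parseFloat(raw); // -12.345
```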
I think emitting an 'error' event would be much easier to deal with, and more idiomatic JavaScript. I don't see any good way to handle the exception thrown from the parser.
Thank you for creating and sharing this module.
Can I use this directly in the browser?
I honestly don't understand why the file path passed to the CLI as
$ node-dbf convert file.dbf
gets joined to the node-dbf folder inside node_modules.
I ended up replacing this line:
node-dbf/bin/node-dbf-convert.js
Line 16 in 12fb719
with:
.action(function(f) { file = f; })
This solved the issue and allowed me to convert to CSV, but it feels like I'm missing something.
I think this line should be the event 'close' from the stream:
Line 57 in c5f30cf
Because I have some nested promises creating multiple parsers, it seems that when I call parser.parse(), the stream gets "mixed" somehow and the parsing is totally wrong. For example:
const parseDbfFile = (file) => new Promise((resolve) => {
    const results = [];
    // instantiate parser etc...
    stream.on('record', record => results.push(record));
    stream.on('end', () => resolve(results));
});
parseDbfFile(fileA).then(resA => {
// resA is fine
return parseDbfFile(fileB).then(resB => {
// resB is completely wrong <======
});
});
I don't understand much about streams in Node.js, but changing the event from 'end' to 'close' in the source code fixes my issue. If someone knows what is happening behind the scenes, I would love to know more about it.
According to what I read on the internet, DBF doesn't use UTF-8 encoding (for characters like è, é, ü, and so on...), but the Windows CP1252 encoding.
Could you fix that so it supports CP1252 decoding?
Thx!
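Until then, one workaround sketch (an approximation: Node has no built-in 'cp1252' decoder, but 'latin1' matches CP1252 everywhere except the 0x80–0x9F range, which covers accented characters like these):

```javascript
// Bytes 0xE8 0xE9 0xFC are è é ü in CP1252 (and in latin1), but form an
// invalid sequence in UTF-8, so decoding them as utf8 yields U+FFFD chars.
const bytes = Buffer.from([0xe8, 0xe9, 0xfc]);
const asLatin1 = bytes.toString('latin1'); // 'èéü'
const asUtf8 = bytes.toString('utf8');     // replacement characters
```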