
bayes's People

Contributors

bin-huang, cecchi, dawsbot, dlebech, ttezel


bayes's Issues

Not taking custom tokenizer?

I'm rusty on my JS so I'm probably doing something dumb here, but I can't get your classifier to take a custom tokenizer.

const classifier = bayes({'tokenizer': tokenizer});

var tokenizer = function (text) {
  var rgxPunctuation = /[^(a-zA-Z)+\s]/g

  var sanitized = text.replace(rgxPunctuation, ' ').toLowerCase();

  return sanitized.split(/\s+/)
}

If I put a console.log in there, it's clear it's not getting executed.
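A likely cause (a sketch, not the library's actual code): `var` declarations are hoisted but their assignments are not, so at the moment the classifier is constructed, `tokenizer` is still `undefined` and the library silently falls back to its default. Defining the tokenizer before constructing the classifier avoids this. `makeClassifier` below is a hypothetical stand-in for `bayes()`:

```javascript
// Hypothetical stand-in for bayes(): uses the supplied tokenizer
// only if it is actually a function, otherwise falls back.
function makeClassifier (options) {
  var defaultTokenizer = function (text) { return text.split(/\s+/) }
  var tok = (typeof options.tokenizer === 'function')
    ? options.tokenizer
    : defaultTokenizer
  return { tokenize: tok }
}

// `tokenizer` is hoisted but unassigned here, so this silently
// falls back to the default tokenizer — reproducing the symptom.
var broken = makeClassifier({ tokenizer: tokenizer })

var tokenizer = function (text) {
  return text.replace(/[^a-zA-Z\s]/g, ' ').toLowerCase().split(/\s+/)
}

// Constructed after the assignment, so the custom tokenizer is used.
var fixed = makeClassifier({ tokenizer: tokenizer })

console.log(broken.tokenize === tokenizer) // false — fell back to default
console.log(fixed.tokenize === tokenizer)  // true
```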

Use of plain objects prevents tokens or categories named "constructor"

The "vocabulary", "docCount", "wordCount", "wordFrequencyCount" and "categories" data structures in the classifier are defined as {} which means that "constructor" is a field. This causes problems for documents containing the word "constructor" as well as categories with that name. The solution is to use Object.create(null) as is already used elsewhere in the existing code.
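A minimal sketch of why `{}` is unsafe here and why `Object.create(null)` helps (illustration only, not the library code):

```javascript
// A plain object literal inherits from Object.prototype, so a
// "constructor" key is never seen as missing.
var table = {}
console.log(!table['constructor'])  // false — the inherited function is truthy

// Object.create(null) has no prototype chain, so lookups are clean.
var safe = Object.create(null)
console.log(safe['constructor'])    // undefined
safe['constructor'] = 1
console.log(safe['constructor'])    // 1
```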

Classifier does not work when text contains "constructor" as a token.

The problem is this line: https://github.com/ttezel/bayes/blob/master/lib/naive_bayes.js#L248

Naivebayes.prototype.frequencyTable = function (tokens) {
  var frequencyTable = {}

  tokens.forEach(function (token) {
    if (!frequencyTable[token])
      frequencyTable[token] = 1
    else
      frequencyTable[token]++
  })

  return frequencyTable
}

When the token is "constructor", frequencyTable[token] is always truthy, because every plain JavaScript object inherits the constructor property from Object.prototype. Therefore the else branch runs, and frequencyTable[token]++ applied to the inherited function yields NaN.

To fix this, check if (!frequencyTable.hasOwnProperty(token)) instead. This shadows the inherited constructor property, but the object does not need it anyway.
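A sketch of the proposed fix, applying the hasOwnProperty guard described above (illustrative, not a patch to the actual file):

```javascript
// Guard with hasOwnProperty so inherited properties such as
// `constructor` cannot shadow real token counts.
function frequencyTable (tokens) {
  var table = {}

  tokens.forEach(function (token) {
    if (!table.hasOwnProperty(token))
      table[token] = 1   // first occurrence shadows any inherited property
    else
      table[token]++
  })

  return table
}

var counts = frequencyTable(['constructor', 'constructor', 'a'])
console.log(counts['constructor']) // 2
console.log(counts['a'])           // 1
```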

Possible to return multiple categories?

In one of the examples you ask: "is a news article about technology, politics, or sports?"

What if it's an article about robots playing football?

In this case I would think the categories should be technology & sports.

Can the current code return multiple categories?

Thank you.
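The classifier returns a single label, but if per-category scores were exposed (hypothetical — this is not part of the current public API), a multi-label result could be a simple threshold over them. Both `topCategories` and the score values below are illustrative assumptions:

```javascript
// Hypothetical multi-label selection: keep every category whose
// (normalized) score clears a threshold, highest score first.
function topCategories (scores, threshold) {
  return Object.keys(scores)
    .filter(function (c) { return scores[c] >= threshold })
    .sort(function (a, b) { return scores[b] - scores[a] })
}

// Illustrative scores for the "robots playing football" article.
var scores = { technology: 0.45, sports: 0.40, politics: 0.15 }
console.log(topCategories(scores, 0.3)) // [ 'technology', 'sports' ]
```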

UTF-8 support

Sadly, the tokenizer does not support UTF-8. The problem lies here:

getWords : function(doc) {
    if (_(doc).isArray()) {
      return doc;
    }
    var words = doc.split(/\W+/);
    return _(words).uniq();
  }
doc.split(/\W+/) does not seem to work for UTF-8 text.

Here is an example with a Cyrillic-script language (Russian):

"Надежда за обич еп.36 Тест".split(/\W+/);

This returns:

[ "", "36", "" ]

Instead, it should return something like this:

[ "Надежда", "за", "обич", "еп", "36", "Тест"]

I was looking for a fix, but ended up here:
http://stackoverflow.com/questions/280712/javascript-unicode-regexes
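One possible fix sketch, using ES2018 Unicode property escapes (assumes a modern JavaScript engine): match runs of letters and digits instead of splitting on \W, which treats Cyrillic characters as punctuation.

```javascript
// \p{L} matches any Unicode letter and \p{N} any numeral; the `u`
// flag enables these property escapes (ES2018+).
var text = 'Надежда за обич еп.36 Тест'
var tokens = text.match(/[\p{L}\p{N}]+/gu)

console.log(tokens) // [ 'Надежда', 'за', 'обич', 'еп', '36', 'Тест' ]
```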

Support async tokenizer?

It is difficult to segment text into tokens in some languages (such as Chinese); a lot of hard work is needed to implement a good tokenizer. For this reason, the tokenizer is sometimes implemented in another programming language, or even in another service (in a microservices architecture). In this case, an async version of the tokenizer is required so that tokens can be requested from another service.

PR: #21
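A hypothetical sketch of what an awaited tokenizer could look like (this is not the library's current interface; `tokenize` and `learn` here are illustrative):

```javascript
// Stand-in for a tokenizer that resolves asynchronously, e.g. a
// network call to an external Chinese segmentation service.
async function tokenize (text) {
  return text.split(/\s+/)
}

// Training awaits token delivery before updating any counts.
async function learn (text, category) {
  var tokens = await tokenize(text)
  // ...update word counts for `category` here...
  return tokens
}

learn('the quick brown fox', 'animals').then(function (tokens) {
  console.log(tokens.length) // 4
})
```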

How well will this handle Chinese?

I know that Chinese does not have the same density of spaces as English and most other languages; a Chinese character is more analogous to an English word than to an English letter.

Would you expect your classifier to treat Chinese characters as letters, or as words?
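As a rough check, applying an ASCII-letter tokenizer like the one quoted in the first issue to Chinese text discards every character, since Chinese characters are neither a-z letters nor whitespace (illustrative sketch, not a claim about the library's current default):

```javascript
// Tokenizer of the shape quoted in the first issue: keeps only
// ASCII letters and whitespace, then splits on whitespace.
var tokenize = function (text) {
  return text.replace(/[^(a-zA-Z)+\s]/g, ' ').toLowerCase()
             .split(/\s+/).filter(Boolean)
}

console.log(tokenize('机器学习很有趣')) // [] — every character stripped
console.log(tokenize('Hello World'))   // [ 'hello', 'world' ]
```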

How to use word vectors?

It seems the classifier just works on the passed-in tokens (words, unless you write your own tokenizer). How could I best use this with multidimensional tokens such as word vectors?
