
fuzzywuzzy's Introduction

This project has been renamed and moved to https://github.com/seatgeek/thefuzz

TheFuzz version 0.19.0 corresponds to this project's 0.18.0 version, with thefuzz replacing all instances of this project's name.

PRs and issues here will need to be resubmitted to TheFuzz

fuzzywuzzy's People

Contributors

aburkov, acslater00, aheadricksg, bcombourieu, brandomr, david-desmaisons, davidcellis, fjsj, foxxyz, garetht, geekrypter, graingert, grigi, jamesnunn, jeffpaine, jonafato, josegonzalez, lerignoux, medecau, nathantypanski, nol13, ojomio, olethanh, patjackson52, rralf, shalecraig, tlaunay, xdrop, xraymemory, zackkitzmiller


fuzzywuzzy's Issues

Finding best matches in a list gives wrong results.

The process.extract function doesn't seem to handle capitalised queries well with some scorers:

>>>fuzzywuzzy.fuzz.partial_ratio('Santa Ana','Santa Ana')
100
>>>fuzzywuzzy.process.extract('Santa Ana', ['Santa Ana', 'Manta'], scorer=fuzzywuzzy.fuzz.partial_ratio)
[('Manta', 80), ('Santa Ana', 77)]

This is because here (in extract) the choice string is processed but the query string is not; lower-casing the query by hand gives the expected result:

>>>fuzzywuzzy.process.extract('santa ana', ['Santa Ana', 'Manta'], scorer=fuzzywuzzy.fuzz.partial_ratio)
[('Santa Ana', 100), ('Manta', 80)]

With some other scorers (e.g., WRatio) things work fine because those scorers process both strings internally anyway.

A workaround I use right now is:

>>>fuzzywuzzy.process.extract('Santa Ana', ['Santa Ana', 'Manta'], scorer=fuzzywuzzy.fuzz.partial_ratio, processor=lambda x:x)
[('Santa Ana', 100), ('Manta', 80)]

Broken partial_ratio functionality with python-Levenshtein

The partial_ratio calculation seems to yield incorrect results for certain combinations of strings when it uses the python-Levenshtein SequenceMatcher.

This works well:

> fuzz.partial_ratio('this is a test', 'is this is a not this is a test!')
> 100

But changing the longer string slightly, while not affecting the target:

> fuzz.partial_ratio('this is a test', 'is this is a not really thing this is a test!') 
> 92

Digging deeper, it appears that the get_matching_blocks() method returns substrings that do not actually match the string we are searching for, so the subsequent ratio calculations are performed on poorly matched substrings.

Removing python-Levenshtein and using the python-only SequenceMatcher makes that method perform its job correctly. I couldn't figure out what it was about certain strings that made it break, after trying a whole bunch.
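A quick way to see what the pure-Python matcher finds for the failing pair (a diagnostic sketch; with python-Levenshtein installed, fuzzywuzzy swaps in its StringMatcher class instead and the blocks come back different):

import difflib

# Inspect the matching blocks difflib finds for the failing pair.
m = difflib.SequenceMatcher(None, 'this is a test',
                            'is this is a not really thing this is a test!')
for block in m.get_matching_blocks():
    # Each block is an (a_start, b_start, length) triple; the final block is a
    # zero-length sentinel.
    print(block)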

To top it off, the python-Levenshtein library appears to have been left unsupported for a while now. Any ideas? Maybe for now, removing the recommendation to use python-Levenshtein would let code run correctly, if not as fast? Thanks!

token_set_ratio() incorrect match

We've been using fuzzywuzzy quite successfully for a while but ran into this strange issue:

from fuzzywuzzy import fuzz
print round(fuzz.token_set_ratio("Recruiting Partner", "Tooling Planner"))

Returns a match of 61.0. If someone could explain why this is (we might be missing something!), that'd be great. Starting to look at the code now.
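One plausible explanation (a sketch, not a confirmed reading of the code): with no tokens in common, the token-set comparison effectively falls back to comparing the sorted token strings, and the two phrases still share many characters in order, which the underlying ratio rewards:

from fuzzywuzzy import fuzz

# Comparing the sorted-token forms directly gives a similar mid-range score,
# driven purely by character-level overlap between the two phrases.
print(fuzz.ratio("partner recruiting", "planner tooling"))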

Tried with both 0.3.1 and 0.4

Thanks

Upload latest version to pypi

Could you kindly upload the latest version to PyPI (directions)?

I'd like to use this library in a Python 3 project, where being able to pip install it would be greatly appreciated 😄 Cheers!

Unicode & token_sort_ratio

Hi

What's wrong with this code snippet?

In [1]: from fuzzywuzzy import fuzz

In [2]: fuzz.token_sort_ratio(u"ааа ббб ввв", u"ббб ааа ввв")
Out[2]: 0

In [3]: fuzz.token_sort_ratio(u"aaa bbb ccc", u"bbb aaa ccc")
Out[3]: 100

$ python -V
Python 2.7.6
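A likely cause (an assumption, not verified against this exact version): the default processing strips non-ASCII characters, so the Cyrillic strings are reduced to nothing before comparison. Later releases expose a force_ascii flag on the token scorers:

from fuzzywuzzy import fuzz

# Assumes a release whose token_sort_ratio accepts force_ascii; with it off,
# the Cyrillic tokens survive processing and behave like the Latin example.
fuzz.token_sort_ratio(u"ааа ббб ввв", u"ббб ааа ввв", force_ascii=False)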

Classifying transactions

Hello,

I get bank account statements in csv with a date and description of the transaction per row.

I have a set of properly categorised transactions (previously imported transactions) and I would like to categorise new transactions based on the existing set of transactions. I wanted to do a process.extract(new_transaction.description, [List of existing_transaction.description]) and then assign to the new transaction the category of the best matched transaction.
How can I speed up the process of categorising N transactions based on a set of M transactions? I think that on each run for a new transaction, fuzzywuzzy must reprocess the M transactions. Can we preprocess/compile the M transactions to speed it up? Other ideas?
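One way to avoid reprocessing the M known descriptions on every run (a sketch with made-up sample data; the Tx record type and the 80 cutoff are illustrative, not part of fuzzywuzzy):

from collections import namedtuple
from fuzzywuzzy import fuzz, process, utils

Tx = namedtuple("Tx", "description category")   # illustrative record type

# The M previously categorised transactions (sample data).
existing = [Tx("AMAZON MKTPLACE PMTS", "shopping"),
            Tx("SHELL OIL 5741", "fuel")]

# Run full_process over the known descriptions once, instead of letting
# extract re-run it over all M strings for every new transaction.
processed = {utils.full_process(t.description): t.category for t in existing}
choices = list(processed)

def categorise(description, cutoff=80):          # cutoff is an arbitrary choice
    match, score = process.extractOne(utils.full_process(description),
                                      choices,
                                      processor=lambda x: x,   # already processed
                                      scorer=fuzz.ratio)       # ratio does no re-processing
    return processed[match] if score >= cutoff else None

print(categorise("amazon mktplace pmts 123"))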

Fuzzy find string in document

I need to fuzzy-find a string (one word) in a lightly garbled document (OCR'd, with maybe 1-3% of the characters wrong). What would be the best way to do this? I can go through each word in the document and match against those, but if extra spaces are inserted or removed, this will not work well. I could also run a window of len(word) letters over the document and match against that, which should work but seems terribly inefficient.

Is there a better way? I think the feature of finding a string in a document would be a useful addition to your fine library, in general.
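For what it's worth, the running-window idea can be cheap enough for single documents; a minimal sketch (the window width and padding are arbitrary choices):

from fuzzywuzzy import fuzz

def find_word(word, text, pad=2):
    # Score every window of roughly len(word) characters and keep the best
    # position; this tolerates OCR noise and shifted spaces at the cost of
    # O(len(text)) ratio calls.
    width = len(word) + pad
    best_pos, best_score = -1, 0
    for i in range(max(1, len(text) - width + 1)):
        score = fuzz.ratio(word.lower(), text[i:i + width].lower())
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos, best_score

print(find_word("garbled", "a light1y garb1ed docunent straight from OCR"))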

partial_ratio

Great library. I have a quick question about the partial_ratio function.

fuzz.partial_ratio("mariners vs angels", "los angeles angels at seattle mariners")

ends up calling

SequenceMatcher(None, "mariners vs angels", "mariners").ratio()

Reading your blog post, I would have expected something like

SequenceMatcher(None, "mariners vs angels", "t seattle mariners").ratio()

So is the current behavior intended?

License headers

Please, change license headers on source files to match LICENSE.txt

How can I tell fuzzywuzzy about installed dependencies?

// Excuse my beginner's English

Hi!

How can I explicitly resolve the dependencies (and make fuzzywuzzy see them)?
I have installed python-Levenshtein (via aptitude install python-Levenshtein),
and I don't understand whether I have difflib or not.

Either way, fuzzywuzzy prints a warning:

/tmp/fuzzywuzzy/fuzzywuzzy/fuzz.py:33: UserWarning: Using slow pure-python SequenceMatcher
  warnings.warn('Using slow pure-python SequenceMatcher')

I have been looking through the sources but haven't figured out what happens...

My code:

#! /usr/bin/env python3

import sys
import re
sys.path.append('/tmp/fuzzywuzzy')
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

str1 = sys.argv[1]
str2 = sys.argv[2]
print(fuzz.ratio(str1, str2))

I run this script like this: ./fuzzy-compare-two-strings.py 123456 213456

n.b. I use Debian 7

Sorry again and thanks!
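For what it's worth, the warning only means that the optional python-Levenshtein C extension could not be imported by the interpreter actually running the script; a package installed with aptitude typically targets the system Python 2, not python3. A quick check (a sketch):

# Run this with the same interpreter as the script above (python3 here).
try:
    import Levenshtein
    print("python-Levenshtein found:", Levenshtein.__file__)
except ImportError:
    print("python-Levenshtein not importable; fuzzywuzzy falls back to "
          "the pure-Python difflib.SequenceMatcher and prints the warning")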

extract fails if the query string contains only double-byte characters and the choices contain both double-byte and single-byte strings

sample:
ok --- process.extract('a草', ['草', 'a'], limit=10)
ok --- process.extract('草', ['草'], limit=10)
fail --- process.extract('草', ['草', 'a'], limit=10)

ok --- process.extract(u'a草', [u'草', 'a'], limit=10)
ok --- process.extract(u'草', [u'草'], limit=10)
fail --- process.extract(u'草', [u'草', 'a'], limit=10)

traceback:
"fuzzywuzzy/fuzz.py", line 181, in WRatio
len_ratio = float(max(len(p1),len(p2)))/min(len(p1),len(p2))
ZeroDivisionError: float division by zero

Symmetry and ratio calculation

Sorry to write you again,
but I noticed that when ratio() or partial_ratio() is called from higher-level functions, e.g. _token_sort or _token_set, it is called only as
ratio(a, b),
while
max(ratio(a, b), ratio(b, a))
might produce a better match. Consider the example:

>>> fuzzywuzzy.fuzz.ratio('gs dfs f sg', 'gs fsfsgfs gsrg')
62
>>> fuzzywuzzy.fuzz.ratio('gs fsfsgfs gsrg', 'gs dfs f sg')
54
>>>

Considering that in many functions you perform several comparisons in order to achieve the best score, should a reverse check also be made when calling ratio or partial_ratio?
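A small wrapper illustrating the suggestion (a hypothetical helper, not part of the library):

from fuzzywuzzy import fuzz

def symmetric_ratio(a, b):
    # Score both argument orders and keep the higher value.
    return max(fuzz.ratio(a, b), fuzz.ratio(b, a))

print(symmetric_ratio('gs dfs f sg', 'gs fsfsgfs gsrg'))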

Algorithms in Fuzzywuzzy

Can you please give some references to the literature that fuzzywuzzy's string matching follows?

validate_string() allowing invalid strings

Edit - The first version of the code that I grabbed was not up to date. This has apparently already been resolved.

A string composed only of spaces, hyphens, periods, commas, or colons will pass utils.validate_string, but will then be stripped to zero length by utils.full_process and cause a divide-by-zero in fuzz.WRatio at len_ratio = float(max(len(p1),len(p2)))/min(len(p1),len(p2)).

UserWarning: Using slow pure-python SequenceMatcher

C:\Python27\lib\site-packages\fuzzywuzzy\fuzz.py:33: UserWarning: Using slow pure-python SequenceMatcher
  warnings.warn('Using slow pure-python SequenceMatcher')

The line of code I have for importing is from fuzzywuzzy import fuzz.

I'm running Python 2.7.8 on Windows 8, pip 1.5.6, and fuzzywuzzy 0.4.0.

Not python3 compliant

I get the following error:

File "/usr/lib/python3.4/site-packages/fuzzywuzzy/fuzz.py", line 49, in ratio
    s1, s2 = utils.make_type_consistent(s1, s2)
  File "/usr/lib/python3.4/site-packages/fuzzywuzzy/utils.py", line 43, in make_type_consistent
    elif isinstance(s1, unicode) and isinstance(s2, unicode):
NameError: name 'unicode' is not defined
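A possible Python 2/3 compatible version of the check (a sketch mirroring the function named in the traceback; the fix that actually shipped may differ):

import sys

if sys.version_info[0] >= 3:
    unicode = str  # Python 3 has no separate unicode type

def make_type_consistent(s1, s2):
    # If both aren't str or both aren't unicode, force both to unicode.
    if isinstance(s1, str) and isinstance(s2, str):
        return s1, s2
    elif isinstance(s1, unicode) and isinstance(s2, unicode):
        return s1, s2
    else:
        return unicode(s1), unicode(s2)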

Issue with scores.

Hi,I do the following using fuzzy wuzzy,

choices = ["BestBuy", "ebay", "overstock", "rakuten","sears"]
match1 = process.extractOne("ebay - asdlfjlksj ", choices)
match2 = process.extractOne("thebay - asdlfjlksj ", choices)

if I print match1 and match2 I get the following.

match1: ('ebay', 90)
match2: ('ebay', 90).

As you can see, match1 should be a closer match, but both have a ratio of 90. Is there any way to use this library to give more weight to whole words, and thereby a higher match ratio to match1, or is this a bug?
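One workaround sketch: the default WRatio scorer leans heavily on partial substring matches ('ebay' literally appears inside 'thebay'), so a token-based scorer that works on whole words usually separates the two queries (exact scores not guaranteed):

from fuzzywuzzy import fuzz, process

choices = ["BestBuy", "ebay", "overstock", "rakuten", "sears"]

# token_set_ratio compares whole tokens, so the exact word "ebay" in the
# first query is rewarded much more than the substring hit inside "thebay".
print(process.extractOne("ebay - asdlfjlksj", choices, scorer=fuzz.token_set_ratio))
print(process.extractOne("thebay - asdlfjlksj", choices, scorer=fuzz.token_set_ratio))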

Thanks.

Published Package fails to install on windows. Character encoding issue

It appears to be a curly ('sixes and nines') quotation-mark issue in README.rst.

Traceback (most recent call last):
  File "setup.py", line 6, in <module>
    long_description=open('README.rst').read(),
  File "C:\Python34\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1983: character maps to <undefined>
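A common fix pattern for this class of failure (a sketch of what setup.py could do, not the project's actual change): read the README with an explicit encoding instead of the platform default.

import io

# cp1252 (the Windows default) cannot decode the curly quotes in README.rst,
# so decode the file explicitly as UTF-8.
long_description = io.open('README.rst', encoding='utf-8').read()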

Install problem Using Python 3

Trying to pip3 install on Python 3 generates an error, I think because of encoding issues in the README?

Downloading/unpacking fuzzywuzzy
  Downloading fuzzywuzzy-0.3.1.tar.gz
  Running setup.py (path:/tmp/pip_build_root/fuzzywuzzy/setup.py) egg_info for package fuzzywuzzy
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/tmp/pip_build_root/fuzzywuzzy/setup.py", line 6, in <module>
        long_description=open('README.rst').read(),
      File "/usr/lib/python3.4/encodings/ascii.py", line 26, in decode
        return codecs.ascii_decode(input, self.errors)[0]
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 1797: ordinal not in range(128)
    Complete output from command python setup.py egg_info:
    (same traceback as above)

This is a common problem resolved across several git repos, I think?

ImportError: cannot import name fuzz

Below is more information.

$> sudo pip install fuzzywuzzy   #works, no error

$> vi mytest.py
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

$>python mytest.py
...
ImportError: cannot import name fuzz
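A diagnostic sketch (an assumption about a common cause, not a confirmed diagnosis): this error often means Python is importing a different, shadowing fuzzywuzzy module or an incomplete install, so check which file actually gets imported.

# Run with the same interpreter you use for mytest.py.
import fuzzywuzzy
print(fuzzywuzzy.__file__)   # should point into site-packages, not at a local
                             # fuzzywuzzy.py or an unrelated directory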

'LC-40LE830U' and 'LC-60LE830U' returns 90% match

These two strings have a 90% match, regardless of the algorithm -- I tried WRatio, ratio, partial_ratio, etc.

Is there some way to detect that single digit difference - the "4" and the "6" - in the two strings? The thinking is that the mismatched numbers should lower the score.
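One way to get the behaviour you describe (a hypothetical helper, not a library feature): score the digit runs separately and fold that into the overall score, since a single-character edit in a long string barely moves fuzz.ratio.

import re
from fuzzywuzzy import fuzz

def model_number_ratio(a, b):
    # Weight mismatched digit runs explicitly; the 50/50 blend is arbitrary.
    base = fuzz.ratio(a, b)
    digits_a = "".join(re.findall(r"\d+", a))
    digits_b = "".join(re.findall(r"\d+", b))
    digit_score = fuzz.ratio(digits_a, digits_b)
    return (base + digit_score) // 2

print(model_number_ratio('LC-40LE830U', 'LC-60LE830U'))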

Thanks

Why is there a StringProcessor class?

Reading the source code, I was confused as to why there is a StringProcessor class defined in string_processing.py. You never initialize it, and it contains only static methods; in fact, even replace_non_letters_non_numbers_with_whitespace() can be made static with the @staticmethod decorator. It appears to be used entirely as a namespace.

Is there some extra reasoning behind this design that I missed? Or could the same thing be achieved by just making all static methods free functions in a module?

EDIT: There is an answer on Stackoverflow discussing functions which do not have a shared state.

single quotes processing

process.extract("Castel Sant’Angelo", "Castel Sant'Angelo");

it gives 97 score, because of different single quotes
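A workaround sketch until the processor handles this: normalise typographic apostrophes to ASCII ones before scoring.

from fuzzywuzzy import fuzz

def normalize_quotes(s):
    # Map the typographic apostrophe (U+2019) to a plain ASCII one.
    return s.replace(u"\u2019", u"'")

print(fuzz.ratio(normalize_quotes(u"Castel Sant\u2019Angelo"),
                 u"Castel Sant'Angelo"))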

More optimisations to fuzzy matching: happy to contribute but need community advice

Hi.

I was digging further into the code to speed it up for my particular task.
I need to match 75k strings to another set of 75k strings. I'm using the token_sort_ratio method with a cutoff at ratio=90. Generally only 5% of the strings from the second set find a pair in the first set, so it's basically 75k^2 comparisons.

Obviously, in my case the items from the first list should be processed (tokenized) only once.

Below is my idea:
https://gist.github.com/dchaplinsky/8307327

I totally understand that:

  1. I'm optimizing for my particular usage pattern.
  2. I implemented it in a way that doesn't look like a good pull request.
  3. This is basically not an issue at all; I just cannot find any other group where I could post the idea for discussion.

Yet another 4.5x speed-up is too yummy not to take advantage of.

Please (please!) let me know if you have an idea of how to integrate this with the rest of the code while keeping the flexibility/readability/maintainability, and I'll be happy to prepare a pull request.
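For reference, the core of the idea in plain form (a sketch, not the gist's actual code; first_list/second_list are placeholder data): do the token_sort preprocessing once per string and compare the pre-sorted forms with plain fuzz.ratio.

from fuzzywuzzy import fuzz, utils

def presort(s):
    # full_process + token sort, done once per string instead of per comparison.
    return " ".join(sorted(utils.full_process(s).split()))

first_list = ["acme trading ltd", "globex corporation"]   # placeholder data
second_list = ["ltd trading acme", "initech"]             # placeholder data

presorted_first = [(original, presort(original)) for original in first_list]

def best_match(query, cutoff=90):
    q = presort(query)
    best, best_score = None, 0
    for original, pre in presorted_first:
        score = fuzz.ratio(q, pre)        # plain ratio does no re-processing
        if score > best_score:
            best, best_score = original, score
    return (best, best_score) if best_score >= cutoff else None

for s in second_list:
    print(s, "->", best_match(s))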

Add LICENSE.txt file to official releases

Hi,
I will be packaging fuzzywuzzy for Fedora because I want to package lettuce, and fuzzywuzzy is one of its requirements. I'd like to ask whether it would be possible to add the LICENSE.txt file to the official PyPI releases - it is optimal for packaging if the licensing is contained in the upstream source.

Thank you.

partial_token_set_ratio returning significantly higher ratio than it should after upgrade to 0.4.0

After I upgraded to 0.4.0 I started getting the following oddity (please note that I get the "Using slow pure-python SequenceMatcher" warning; not sure if that has any bearing):

from fuzzywuzzy import fuzz
line = "Notice of l ntention to Obtain a Compulsory License for Making and Distributing Phonorecords"
fuzz.partial_token_set_ratio(line, "Name of principal recording")  # 100
fuzz.partial_token_sort_ratio(line, "Name of principal recording")  # 52

When I downgrade back to 0.2 I get the following:

from fuzzywuzzy import fuzz
line = "Notice of l ntention to Obtain a Compulsory License for Making and Distributing Phonorecords"
fuzz.partial_token_set_ratio(line, "Name of principal recording")  # 29
fuzz.partial_token_sort_ratio(line, "Name of principal recording")  # 52

It generates a UnicodeDecodeError; please handle it at the method level

Traceback (most recent call last):
  File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1413, in __call__
    return self.func(*args)
  File "/media/DELLUTILITY/DELL/PyCharm Projects/DuplicateDetection/UploadXLSFile.py", line 102, in uploadCallBack
    startChecking()
  File "/media/DELLUTILITY/DELL/PyCharm Projects/DuplicateDetection/UploadXLSFile.py", line 93, in startChecking
    print "First 5 Match Results :->", process.extract(pname, result_list, limit=5)
  File "/usr/local/lib/python2.7/dist-packages/fuzzywuzzy/process.py", line 60, in extract
    processed = processor(choice)
  File "/usr/local/lib/python2.7/dist-packages/fuzzywuzzy/process.py", line 51, in <lambda>
    processor = lambda x: utils.full_process(x)
  File "/usr/local/lib/python2.7/dist-packages/fuzzywuzzy/utils.py", line 50, in full_process
    string_out = StringProcessor.replace_non_lettters_non_numbers_with_whitespace(s)
  File "/usr/local/lib/python2.7/dist-packages/fuzzywuzzy/string_processing.py", line 18, in replace_non_lettters_non_numbers_with_whitespace
    return regex.sub(u" ", a_string)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 1: ordinal not in range(128)
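A workaround sketch at the caller level (Python 2; 'utf-8' is an assumption about the spreadsheet's encoding, and pname/result_list stand in for the names in the traceback): decode byte strings to unicode before handing them to process.extract, so the regex substitution in full_process does not have to guess.

from fuzzywuzzy import process

# Sample byte strings standing in for the spreadsheet values (UTF-8 encoded).
pname = "Caf\xc3\xa9 Noir"
result_list = ["Cafe Noir", "Caf\xc3\xa9 Bleu"]

def to_unicode(s, encoding='utf-8'):
    # Leave unicode objects alone; decode plain byte strings explicitly.
    return s.decode(encoding) if isinstance(s, str) else s

print(process.extract(to_unicode(pname),
                      [to_unicode(x) for x in result_list],
                      limit=5))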

Minor optimization for process.extract

diff --git a/fuzzywuzzy/process.py b/fuzzywuzzy/process.py
index f31dbe8..9fa4e97 100644
--- a/fuzzywuzzy/process.py
+++ b/fuzzywuzzy/process.py
@@ -96,6 +96,8 @@ def extract(query, choices, processor=None, scorer=None, limit=5):
         scorer = fuzz.WRatio

     sl = []
+    # localize variable access to minimize overhead
+    sl_append = sl.append

     try:
         # See if choices is a dictionary-like object.
@@ -108,7 +110,7 @@ def extract(query, choices, processor=None, scorer=None, limit=5):
         for choice in choices:
             processed = processor(choice)
             score = scorer(query, processed)
-            sl.append((choice, score))
+            sl_append((choice, score))

     sl.sort(key=lambda i: i[1], reverse=True)
     return sl[:limit]

This will save some CPU cycles when the choices iterable has many elements.

extract and extractOne fail if choices contains null strings

Extract and extractOne fail if choices contains either an empty string or a string of only whitespace. For example:

process.extract('string1', ['', 'string', ''])

and

process.extract('string1', [' ', 'string', ' '])

will traceback:

    len_ratio = float(max(len(p1),len(p2)))/min(len(p1),len(p2))

ZeroDivisionError: float division by zero
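A caller-side workaround sketch until the library guards against this: drop choices that full_process reduces to an empty string before calling extract.

from fuzzywuzzy import process, utils

choices = ['', 'string', ' ']
# Keep only choices that still contain something after processing; empty
# processed strings are what trigger the division by zero.
clean_choices = [c for c in choices if utils.full_process(c)]
print(process.extract('string1', clean_choices))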

A suggestion

Change:

from fuzzywuzzy.string_processing import StringProcessor

=>

from .string_processing import StringProcessor

in utils.py; a relative import is better here.

Lost the ability of processing unicode choices

Regarding process.extract: when we call the scorer, which is actually fuzz.WRatio (the exact locations are process.py#L104 and process.py#L110), we leave the third argument force_ascii at its default value of True (see fuzz.py#L236), and thus we cannot process Unicode choices.

Solution:

  1. provide a new parameter for process.extract at process.py#L33, i.e. change
    def extract(query, choices, processor=None, scorer=None, limit=5):
    to
    def extract(query, choices, processor=None, scorer=None, limit=5, force_ascii=True):
  2. pass the argument through to fuzz.WRatio, i.e. change process.py#L104 and process.py#L110 from
    score = scorer(query, processed)
    to
    score = scorer(query, processed, force_ascii)
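Until such a parameter exists, one caller-side workaround sketch (assuming the installed fuzz.WRatio already accepts force_ascii) is to bind the flag into the scorer before handing it to process.extract:

import functools
from fuzzywuzzy import fuzz, process

# Bind force_ascii=False into the scorer; extract only ever calls
# scorer(query, processed), so the flag has to be pre-applied.
unicode_wratio = functools.partial(fuzz.WRatio, force_ascii=False)
print(process.extract(u'草', [u'草', u'草原'], scorer=unicode_wratio, limit=10))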

Unnecessary warning should be removed.

This is the first project I've ever seen that produces command-line noise because an optional dependency is not there.

You likely know I am talking about this.
#67 was closed after a mere package suggestion. @medecau made a very solid point: this pertains to the documentation; if the dependency is optional, do not throw a warning at every terminal that runs the code.

I can throw in a PR removing it if you don't want to write it. It could maybe just be logged silently at a very high level.
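In the meantime, callers can silence that specific message themselves (a workaround sketch, not an endorsed API); the filter has to be installed before fuzz is first imported, since the warning fires at import time:

import warnings

warnings.filterwarnings("ignore",
                        message="Using slow pure-python SequenceMatcher")
from fuzzywuzzy import fuzz   # no warning printed now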

Partial match on Python3 doesn't work if Levenshtein module is present

Traceback (most recent call last):
  File "./test_fuzzywuzzy.py", line 199, in testWRatioUnicodeString
    score = fuzz.WRatio(s1, s2, force_ascii=False)
  File "/tmp/pyphrasy/fuzzywuzzy/fuzzywuzzy/fuzz.py", line 255, in WRatio
    partial = partial_ratio(p1, p2) * partial_scale
  File "/tmp/pyphrasy/fuzzywuzzy/fuzzywuzzy/fuzz.py", line 77, in partial_ratio
    blocks = m.get_matching_blocks()
  File "/tmp/pyphrasy/fuzzywuzzy/fuzzywuzzy/StringMatcher.py", line 57, in get_matching_blocks
    self._str1, self._str2)
TypeError: inverse expected a list of edit operations

Bug in token_set_ratio

token_set_ratio does not handle empty strings well.

fuzz.token_set_ratio('', 'abc abc abc') and fuzz.token_set_ratio('asdasdas', '')
will return a ratio of 100 (exact match) when clearly they should return either None or 0.
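A caller-side guard sketch while this behaviour stands (a hypothetical helper, not a library function):

from fuzzywuzzy import fuzz

def safe_token_set_ratio(a, b):
    # Treat a comparison involving an empty or whitespace-only string as a
    # zero score instead of the surprising 100 described above.
    if not a.strip() or not b.strip():
        return 0
    return fuzz.token_set_ratio(a, b)

print(safe_token_set_ratio('', 'abc abc abc'))   # 0
print(safe_token_set_ratio('asdasdas', ''))      # 0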

References for Metrics Used?

Could you direct me towards a reference that describes the metrics you guys use when evaluating string likeness?

Bug in partial_ratio

There is a small bug in the partial_ratio

fuzz.partial_ratio('HSINCHUANG', 'Sinjhuang District'.upper()) = 30
whereas
fuzz.partial_ratio('HSINCHUANG', 'SINJHUAN') = 88
and
fuzz.partial_ratio('HSINCHUANG', 'LSINJHUANG DISTRIC')

Thus the function is unable to match the partial strings 'HSINCHUANG' and 'SINJHUANG DISTRIC'.

The mismatch goes away if you substitute the line
long_start = block[1] - block[0]
in partial_ratio with
long_start = block[1] - block[0] if (block[1] - block[0]) > 0 else 0

Fuzzywuzzy 4x slower *with* python-Levenshtein installed

According to the intro page:
"python-Levenshtein (optional, provides a 4-10x speedup in String Matching)"

I am fuzzy matching paragraphs with their best match from a sizeable list of paragraphs. When I use fuzzywuzzy without python-Levenshtein on Ubuntu, I receive the warning that I should install it for a speed-up, and a test set of paragraphs took about 45 minutes:

real 46m56.813s
user 125m45.952s
sys 0m0.432s

When I then installed python-Levenshtein, the warning went away as expected, but running the same example set of paragraphs took over 200 minutes:

real 204m51.384s
user 522m49.436s
sys 0m21.376s

The results are identical except for one paragraph, which does match better with the slower, python-Levenshtein version. While I haven't timed an example, doing the same on the mac also seems significantly slower with python-Levenshtein installed.

The warning shown when python-Levenshtein is not installed says that the reason to install the package is speed, so to me something isn't right: either it shouldn't be slower, or the message should change to say that it should be installed for increased accuracy (if that is the case).

unintuitively high score for single letter choices

Let me state upfront that I have not yet looked into the fuzzy matching algorithms, but I came across the following example and it struck me as odd from a human standpoint. Any suggestions, workarounds, or explanations?

In [57]: process.extract("pheL", ["h", "phe-L", "trnaphe", "p", "e", "L"], limit=None)

Out[57]: [('h', 90), ('p', 90), ('e', 90), ('L', 90), ('phe-L', 89), ('trnaphe', 76)]
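A workaround sketch: WRatio (the default scorer) scales partial matches in a way that rewards very short choices contained in the query; a plain ratio scorer keeps relative length in play (exact scores not guaranteed):

from fuzzywuzzy import fuzz, process

print(process.extract("pheL", ["h", "phe-L", "trnaphe", "p", "e", "L"],
                      scorer=fuzz.ratio, limit=None))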

Submit to PyPI

It would be better if this package were listed on PyPI (the Python Package Index), so users could install it through easy_install or pip.
