
pytorch-struct's Introduction

Torch-Struct: Structured Prediction Library


A library of tested, GPU implementations of core structured prediction algorithms for deep learning applications.

  • HMM / LinearChain-CRF
  • HSMM / SemiMarkov-CRF
  • Dependency Tree-CRF
  • PCFG Binary Tree-CRF
  • ...

Designed to be used as efficient batched layers in other PyTorch code.

Tutorial paper describing methodology.

Getting Started

!pip install -qU git+https://github.com/harvardnlp/pytorch-struct
# Optional CUDA kernels for FastLogSemiring
!pip install -qU git+https://github.com/harvardnlp/genbmm
# For plotting.
!pip install -q matplotlib
import torch
from torch_struct import DependencyCRF, LinearChainCRF
import matplotlib.pyplot as plt
def show(x): plt.imshow(x.detach())
# Make some data.
vals = torch.zeros(2, 10, 10) + 1e-5
vals[:, :5, :5] = torch.rand(5)
vals[:, 5:, 5:] = torch.rand(5) 
dist = DependencyCRF(vals.log())
show(dist.log_potentials[0])


# Compute marginals
show(dist.marginals[0])


# Compute argmax
show(dist.argmax.detach()[0])


# Compute scoring and enumeration (forward / inside)
log_partition = dist.partition
max_score = dist.log_prob(dist.argmax)
# Compute samples 
show(dist.sample((1,)).detach()[0, 0])


# Padding/Masking built into library.
dist = DependencyCRF(vals, lengths=torch.tensor([10, 7]))
show(dist.marginals[0])
plt.show()
show(dist.marginals[1])


# Many other structured prediction approaches
chain = torch.zeros(2, 10, 10, 10) + 1e-5
chain[:, :, :, :] = vals.unsqueeze(-1).exp()
chain[:, :, :, :] += torch.eye(10, 10).view(1, 1, 10, 10) 
chain[:, 0, :, 0] = 1
chain[:, -1, 9, :] = 1
chain = chain.log()

dist = LinearChainCRF(chain)
show(dist.marginals.detach()[0].sum(-1))


Library

Full docs: http://nlp.seas.harvard.edu/pytorch-struct/

Current distributions implemented:

  • LinearChainCRF
  • SemiMarkovCRF
  • DependencyCRF
  • NonProjectiveDependencyCRF
  • TreeCRF
  • NeuralPCFG / NeuralHMM

Each distribution includes:

  • Argmax, sampling, entropy, partition, masking, log_probs, k-max

Extensions:

  • Integration with torchtext, pytorch-transformers, dgl
  • Adapters for generative structured models (CFG / HMM / HSMM)
  • Common tree structured parameterizations TreeLSTM / SpanLSTM

Low-level API:

Everything is implemented through semiring dynamic programming.

  • Log Marginals
  • Max and MAP computation
  • Sampling through specialized backprop
  • Entropy and first-order semirings.
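As a rough illustration of the semiring idea (a toy sketch, not the library's internal API): the same chain recursion computes either the log-partition or the Viterbi score depending only on which "sum" operator is plugged in, since "times" is ordinary addition in log space.

import torch

# Toy semiring-generic forward pass over a linear chain (illustrative only).
def chain_forward(edge, sum_op):
    # edge: (N-1, C, C) scores for adjacent tag pairs, edge[n, next_tag, prev_tag].
    alpha = edge.new_zeros(edge.shape[-1])            # semiring "one" per start tag
    for n in range(edge.shape[0]):
        alpha = sum_op(edge[n] + alpha.unsqueeze(0), dim=-1)
    return sum_op(alpha, dim=-1)

edge = torch.randn(9, 5, 5)
log_Z = chain_forward(edge, torch.logsumexp)   # log semiring -> partition
best  = chain_forward(edge, torch.amax)        # max semiring -> Viterbi score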

Examples

Citation

@misc{alex2020torchstruct,
    title={Torch-Struct: Deep Structured Prediction Library},
    author={Alexander M. Rush},
    year={2020},
    eprint={2002.00876},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

This work was partially supported by NSF grant IIS-1901030.

pytorch-struct's People

Contributors

chijames, da03, dpfried, erip, johnreid, justinchiu, kemal1056949, kmkurn, sanjayss34, srush, sustcsonglin, urchade, zhaoyanpeng


pytorch-struct's Issues

Question on the Complexity of CKY

  1. By using the GPU, we are able to reduce the complexity of the linear-chain CRF from O(NT^2) to O(log N), where N is the sentence length and T is the number of labels.

So, if I view the linear-chain CRF as a special case of a tree whose height H equals the sequence length N, the complexity can be rewritten as O(log H).

  2. Then, in the case of CKY, I can see that the complexity can be reduced to O(log N) / O(H) by parallel computing. I'm wondering if it can be further reduced to O(log H) as well, using the parallel scanning algorithm mentioned in the tutorial?

Multi-root NonProjectiveDependencyCRF

Hi,

Does NonProjectiveDependencyCRF support multi-root trees? If I understand correctly, [KGCPerezC07] did propose a method for the multi-root non-projective case.

Thanks!

DependencyCRF partition function broken

Getting the following in-place operation error when using the DependencyCRF:

import torch
from torch_struct import DependencyCRF

B, N = 3, 50
phi = torch.randn(B, N, N)
DependencyCRF(phi).partition
/usr/local/lib/python3.7/dist-packages/torch_struct/deptree.py in _check_potentials(self, arc_scores, lengths)
    121         arc_scores = semiring.convert(arc_scores)
    122         for b in range(batch):
--> 123             semiring.zero_(arc_scores[:, b, lengths[b] + 1 :, :])
    124             semiring.zero_(arc_scores[:, b, :, lengths[b] + 1 :])
    125 

/usr/local/lib/python3.7/dist-packages/torch_struct/semirings/semirings.py in zero_(xs)
    124     @staticmethod
    125     def zero_(xs):
--> 126         return xs.fill_(-1e5)
    127 
    128     @staticmethod

RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Alignment CRF error

Looks like there is an error now on install for AlignmentCRF.

https://github.com/harvardnlp/pytorch-struct/blob/master/notebooks/CTC_with_padding.ipynb

# Find marginals (see uncertainty from randomness)
show(dist.marginals, 1)

Error:
~/opt/anaconda3/lib/python3.7/site-packages/torch_struct/alignment.py in dp_scan(self, log_potentials, lengths, force_grad)
     98         point = (l + M) // 2
     99
--> 100         charta[1][:, b, point:, 1, ind, :, :, Mid] = semiring.one_(
    101             charta[1][:, b, point:, 1, ind, :, :, Mid]
    102         )

AttributeError: type object 'LogSemiring' has no attribute 'one_'

Get the score of dist.topk()

The topk() function returns the top k predictions from the distribution; how can I easily get the corresponding score of each prediction?

By the way, when sentence lengths are short and the k value of topk is large, how can I tell how many of the returned predictions are valid? For example, with DependencyCRF, when the sentence length is 2 and k is 5, I think only the top 3 predictions are valid.
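One possible way to score the returned structures (a sketch, not an official recipe): pass the topk output back through log_prob, assuming log_prob broadcasts over the extra leading k dimension.

import torch
from torch_struct import DependencyCRF

vals = torch.randn(2, 6, 6)
dist = DependencyCRF(vals)

k = 5
best_k = dist.topk(k)            # k structures stacked along a leading dimension
scores = dist.log_prob(best_k)   # assumed to broadcast -> one score per structure

# Heuristic for short sentences: if fewer than k distinct trees exist, the
# trailing entries may repeat or score far below the rest, so inspecting
# `scores` is one way to decide how many predictions to keep.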

[Question] How to compute a marginal probability over a (contiguous) set of nodes?

Hi.

Thank you for the great library. I have one question that I hope you could help with.

How can I compute a marginal probability over a (contiguous) set of nodes? Right now, I am using your LinearChain-CRF to do NER. In addition to the best sequence itself, I also need to compute the model's confidence in its predicted labeling over a segment of the input. For example, what is the probability that a span of tokens constitutes a person name?

I read your example and see how to get the marginal probability for each individual node, but I was not quite sure how to compute the marginal probability over a subset of nodes. If you could give any hint, it would be great.

Thank you.
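One common recipe for this (a sketch under assumptions, not an official API) is to clamp the labels of the span and compare partition functions, so that p(y_start..y_end = span_labels | x) = exp(log Z_constrained - log Z). The sketch below assumes LinearChainCRF edge potentials of shape (batch, N-1, C, C) indexed as [step, next_tag, prev_tag].

import torch
from torch_struct import LinearChainCRF

# Sketch: marginal probability that positions start..start+len-1 carry
# `span_labels`, via a constrained partition function. The potential layout
# [batch, step, next_tag, prev_tag] is an assumption.
def span_log_prob(log_potentials, start, span_labels):
    log_Z = LinearChainCRF(log_potentials).partition
    constrained = log_potentials.clone()
    for offset, label in enumerate(span_labels):
        pos = start + offset                          # assumes start >= 1
        mask = torch.full_like(constrained[:, pos - 1], -1e9)
        mask[:, label, :] = 0.0                       # only allow `label` at pos
        constrained[:, pos - 1] = constrained[:, pos - 1] + mask
    return LinearChainCRF(constrained).partition - log_Z

Exponentiating the result gives the span probability; it costs one extra forward pass per queried span.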

Preterminal rule prob. in NeuralCFG: probably a bug?

def terms(words):
    return torch.einsum(
        "bnh,th->bnt", self.word_emb[words], self.mlp1(self.term_emb)
    ).log_softmax(-2)

This, for every preterminal, estimates the distribution over the words in a sentence rather than over the whole vocabulary.

Is it a bug or intended to be an approximation to the true distribution? The approximation indeed saves a lot of GPU memory but I have not tested if it would cause any performance loss.
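If the intent is a distribution over the full vocabulary, one possible variant (a closure-style sketch like the original, not the library's code; it assumes self.word_emb is the complete (V, H) embedding matrix) normalizes over the vocabulary dimension and then gathers the batch's words:

def terms(words):
    # (V, T): score of every vocabulary item under every preterminal
    scores = torch.einsum("vh,th->vt", self.word_emb, self.mlp1(self.term_emb))
    # Normalize over the vocabulary, so each preterminal is a distribution
    # over all V words rather than over the words in the sentence.
    log_probs = scores.log_softmax(dim=0)
    return log_probs[words]                 # (batch, N, T)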

Bug DependencyCRF function topk

The output is correct for the 1-best case; for the rest, the results are strange.
Bug test for DependencyCRF function topk()

import torch
from torch_struct import DependencyCRF

potential = torch.rand(1,2,2)
dist = DependencyCRF(potential)
print(dist.topk(3))

FastLogSemiring

Hi,

Thanks for making this library and it's amazing to have these different CRFs wrapped up in a common and easy to use framework.

I've been playing with the LinearChainCRF, and one thing I noticed is that memory usage can be very high during the loss backward pass on both CPU and GPU. I found that FastLogSemiring in fast_semirings.py uses genbmm.logbmm() and significantly reduces memory usage on GPU if I change the default LogSemiring used in the StructDistribution class to FastLogSemiring. However, I haven't seen this documented anywhere, so my questions are:

  1. Is FastLogSemiring ready to be used? It is not included in test_semirings.py.
  2. If so, what would be the best way to switch between LogSemiring and FastLogSemiring? Is there a plan to introduce a parameter to choose between the semirings in the StructDistribution class?

Mini-batch setting with Semi Markov CRF

I encounter learning instability when using a batch size > 1 with the semi-Markov CRF (the loss goes to a very large negative number), even when explicitly providing "lengths". I think the bug comes from the masking.
The model trains well with batch size 1.

pytorch-struct shows a warning for torch versions >= 1.8.0

When using pytorch-struct with torch>=1.8.0, the following warning appears:

In [1]: import torch
   ...: from torch_struct import DependencyCRF, LinearChainCRF

In [2]: vals = torch.zeros(2, 10, 10) + 1e-5
   ...: vals[:, :5, :5] = torch.rand(5)
   ...: vals[:, 5:, 5:] = torch.rand(5)
   ...: dist = DependencyCRF(vals.log())
/Users/goncalocorreia/.pyenv/versions/3.8.1/envs/struct/lib/python3.8/site-packages/torch/distributions/distribution.py:44: UserWarning: <class 'torch_struct.distributions.DependencyCRF'> does not define `arg_constraints`. Please set `arg_constraints = {}` or initialize the distribution with `validate_args=False` to turn off validation.
  warnings.warn(f'{self.__class__} does not define `arg_constraints`. ' +

notebooks are broken

The CTC, CTC_with_padding, and Unsupervised_CFG notebooks are not working: in sparse.py L591 there are too many values to unpack for the CTC notebooks, and there is no module named torch_struct.networks for the CFG notebook.

log_potentials argument shape of PCFG

In the API documentation, it says that class torch_struct.SentCFG has the parameter

log_potentials (tuple) – event tuple with event shapes: terms (N x T), rules (NT x (NT+T) x (NT+T)), root (NT)

I wonder why the rules are not of shape NT x (NT+T)?
Thanks!
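For concreteness, here is how I read those documented shapes with a batch dimension added (a sketch; the exact batching convention is my assumption):

import torch
from torch_struct import SentCFG

batch, N, NT, T = 2, 6, 10, 20
terms = torch.randn(batch, N, T)                  # preterminal scores per word
rules = torch.randn(batch, NT, NT + T, NT + T)    # binary rules A -> B C
roots = torch.randn(batch, NT)                    # root nonterminal scores
dist = SentCFG((terms, rules, roots), lengths=torch.tensor([6, 5]))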

Dependency CRF add labels

I just checked the documentation of DependencyCRF; it seems that dependency labels (i.e., relations) are not supported (yet). Am I right?

Increasing memory usage of DependencyCRF

Running the piece of code below multiple times (with CUDA_VISIBLE_DEVICES set to a single GPU id)

import torch
from torch_struct import DependencyCRF

_ = DependencyCRF(torch.zeros(5, 5, 5).cuda(), multiroot=False).marginals
print(torch.cuda.memory_allocated())

will result in increasing allocated CUDA memory, e.g. 43520, 44544, 45568, and so on. The same thing happens with .partition, but not with NonProjectiveDependencyCRF, where the memory usage is constant. Is this expected?

gumbel crf

Hi,

Is the gumbel_crf function ready to use? If so, can you point me to the relevant documentation? (I cannot find it anywhere in here)

Thanks!

Release on PyPI?

Is there any interest in releasing pytorch-struct (and genbmm) on the official Python Package Index?

I ran into this because I distribute my constituency parser on PyPI, and I just recently pushed a new version that depends on pytorch-struct: https://pypi.org/project/benepar/0.2.0a0/

It turns out that packages on PyPI aren't allowed to depend on packages hosted only on GitHub, so users of my parser can't just pip install benepar and have it work right away.

Single-root vs multi-root dependency trees

Hi, it's me again. Quick question: does DependencyCRF support single-root trees, i.e., trees where the root symbol has exactly one dependent? NonProjectiveDependencyCRF does support them, but DependencyCRF seems to support only the multi-root case.

[docs] Semantics of ops in log_potential of Alignment

http://nlp.seas.harvard.edu/pytorch-struct/model.html#torch_struct.AlignmentCRF:
Ops are 0 -> j-1, 1->i-1,j-1, and 2->i-1

Could you please clarify the semantics of the ops / third dimension? I guess they are somehow related to the weights of insertions/deletions/substitutions in a Levenshtein-style alignment, but it would be much clearer if this were explicitly explained.

Context:
I'd like to use Alignment as a way to do CTC alignment: I have a Tx(L+1)-sized character probabilities tensor and a (t<=T)-sized ground truth sequence. I'd like to do forced alignment and find the best monotonic mapping of ground truth characters onto input probabilities (for visualization purposes).

Unstable learning with SemiMarkov CRF

Hi,

First, thank you for fixing #110 (@da03); the SemiCRF works better now, and I was able to get good results on span-extraction tasks. However, I still encounter a learning instability where the loss (negative log-prob) becomes negative after several steps (and the accuracy starts to drop). The same problem occurs with batch_size = 1. Below I put the learning curves (f1_score and log loss).

(Maybe the bug comes from the masking of spans (length, length + span_width) where length + span_width > length, but I am not sure.)

Edit: I created a test and it seems that the masking is fine. Maybe it is the log_prob computation or the to_parts function?

[figures: train_loss and score curves]

Lengths on the same device as potentials

For LinearChainCRF and other similar APIs, it would be better if lengths were created on the same device as the potentials. lengths is optional, so if users simply pass potentials on CUDA, torch-struct will raise an error.
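A small workaround sketch until that change lands: build lengths on the potentials' device explicitly.

import torch
from torch_struct import LinearChainCRF

log_potentials = torch.randn(4, 9, 5, 5, device="cuda")   # (batch, N-1, C, C)
lengths = torch.full((4,), 10, dtype=torch.long, device=log_potentials.device)
dist = LinearChainCRF(log_potentials, lengths=lengths)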

Differentiating Through Marginals of Dependency CRF

Hi,

I tried using the DependencyCRF in a learning setting which required me to differentiate through the marginals. This turned out to be really difficult to achieve. I noticed that the gradients computed for the marginals tended to have high variance and to be larger than I would expect (even though I haven't deep-dived into the Eisner algorithm yet).

I wonder if this is a feature of the Eisner algorithm or might potentially hint at a bug?
Below is a minimal example which shows that the maximum gradient returned for the arc scores can be quite large, even if the scores are on a reasonable scale.

import torch
from torch_struct import DependencyCRF
torch.manual_seed(99)

maxlen = 50
vals = torch.randn((1, maxlen, maxlen), requires_grad=True)
grad_output = torch.rand(1, maxlen, maxlen)
dist = DependencyCRF(vals)
marginals = dist.marginals
marginals.backward(grad_output)
print(vals.max().item())
print(marginals.max().item())
print(grad_output.max().item())
print(vals.grad.max().item())

#3.5494842529296875
#0.8289076089859009
#0.9995625615119934
#19.625778198242188

Inference for the HMM model

Hello! I was playing with the HMM distribution and I obtained some results that I don't really understand. More precisely, I've set the following parameters

import numpy as np
import torch

t = torch.tensor([[0.99, 0.01], [0.01, 0.99]]).log()
e = torch.tensor([[0.50, 0.50], [0.50, 0.50]]).log()
i = torch.tensor(np.array([0.99, 0.01])).log()
x = torch.randint(0, 2, size=(1, 8))

and I was expecting the model to stay in the hidden state 0 regardless of the observed data x – it starts in state 0 and the transition matrix makes it very likely to maintain it. But when plotting the argmax, it appears that the model jumps from one state to the other:

def show_chain(chain):
    plt.imshow(chain.detach().sum(-1).transpose(0, 1))

dist = torch_struct.HMM(t, e, i, x)
show_chain(dist.argmax[0])


I must be missing something obvious; but shouldn't dist.argmax correspond to argmax_z p(z | x, Θ)? Thank you!

how about enabling gradient calculation in topk() by default

Pytorch enables grad inside @lazy_property methods by default. So in @lazy_property argmax we can use torch.autograd.grad without the risk of unrecorded computations.

In topk we are sure to use torch.autograd.grad. However, since it is not in lazy mode, there is a chance that computations are not recorded, e.g., when invoking topk inside a torch.no_grad() scope. It is normal to disable grad during inference, but it will implicitly affect the behavior of topk.

Would it be better to enable grad inside topk by default? The easiest way to achieve this is to modify

return self._struct(KMaxSemiring(k)).marginals(
    self.log_potentials, self.lengths, _raw=True
) 

to

with torch.enable_grad():
    return self._struct(KMaxSemiring(k)).marginals(
        self.log_potentials, self.lengths, _raw=True
    ) 

There could be different ways to achieve this, e.g., if we knew a function call came from topk, we could switch on grad explicitly.

[Bug?] Positive probabilities in Alignment CRF when length nontrivial

Hello!
First thanks for this awesome project. I think it will make a big difference in the NLP community and I am already excited to use it in my work.

Problem TL;DR: AlignmentCRF.log_prob returns values greater than 0 (i.e., probabilities greater than 1). I think this might be a bug unless I'm misusing the API.

More background

I'm attempting to use an Alignment CRF (Smith-Waterman) as the loss function for a generative model. The "true" labels are sequences of tokens of various lengths, and I like the alignment because it will still work if my model accidentally adds an extra token. Obviously my data is variable length, so I provide the lengths tensor argument, which holds the length of my true labels in each batch. When I do this and compute dist.log_prob(dist.argmax), I get some batches whose log-likelihood is very positive. I checked the partition function and it looks very negative, so my guess is that the error might have to do with masking in the partition but not the score, or vice versa.

To reproduce this error

I get the same error in the CTC.ipynb example in the notebooks folder when I initialize the distribution with dist = torch_struct.AlignmentCRF(log_potentials, lengths=torch.Tensor([t] * 2 + [t-1] * 3).long())
and then call dist.log_prob(dist.argmax).
An image of the error is below. Here is a Colab notebook which reproduces the error: https://colab.research.google.com/drive/1C1uDWNe8IcXB-Re6WcnwRsif-q-JQySe.


Do you agree this is a bug, or is this the intended behavior? Any suggestions for how I might debug this or correctly use your API would be much appreciated. I am not familiar with semirings, but if you explain what might fix the issue I can try to submit a pull request (I'm still fairly new to PyTorch, so I'm not sure how much help I can be).

Thanks for your attention and again for the great library!
Tim Y.

[Bug] Implementation of Eisner's algorithm does not restrict the root number to 1

Hey, I found that your implementation of Eisner's algorithm admits an arbitrary number of roots, which is a severe bug since dependency parsing usually allows only one root token.

In your DepTree.dp() method, you make a conversion that treats the root token as the first token in the sentence. Imagine that the root x_{0} attaches to word x_{i}: I_{0,0} + C_{1,i} = I_{0,i} and I_{0,i} + C_{i,j} = C_{0,j} for some j < L, where L is the length of the sentence. The complete span C_{0,j} still has the opportunity to attach a new word x_{k} for j < k <= L, making multiple root attachments possible.

Fortunately, I made some changes to your code to restrict the root number to 1.

def _dp(self, arc_scores_in, lengths=None, force_grad=False, cache=True):
    if arc_scores_in.dim() not in (3, 4):
        raise ValueError("potentials must have dim of 3 (unlabeled) or 4 (labeled)")

    labeled = arc_scores_in.dim() == 4
    semiring = self.semiring
    # arc_scores_in = _convert(arc_scores_in)
    arc_scores_in, batch, N, lengths = self._check_potentials(
        arc_scores_in, lengths
    )
    arc_scores_in.requires_grad_(True)
    arc_scores = semiring.sum(arc_scores_in) if labeled else arc_scores_in
    alpha = [
        [
            [
                Chart((batch, N, N), arc_scores, semiring, cache=cache)
                for _ in range(2)
            ]
            for _ in range(2)
        ]
        for _ in range(2)
    ]

    semiring.one_(alpha[A][C][L].data[:, :, :, 0].data)
    semiring.one_(alpha[A][C][R].data[:, :, :, 0].data)
    semiring.one_(alpha[B][C][L].data[:, :, :, -1].data)
    semiring.one_(alpha[B][C][R].data[:, :, :, -1].data)


    for k in range(1, N):
        f = torch.arange(N - k), torch.arange(k, N)
        ACL = alpha[A][C][L][: N - k, :k]
        ACR = alpha[A][C][R][: N - k, :k]
        BCL = alpha[B][C][L][k:, N - k :]
        BCR = alpha[B][C][R][k:, N - k :]
        x = semiring.dot(ACR, BCL)
        arcs_l = semiring.times(x, arc_scores[:, :, f[1], f[0]])
        alpha[A][I][L][: N - k, k] = arcs_l
        alpha[B][I][L][k:N, N - k - 1] = arcs_l
        arcs_r = semiring.times(x, arc_scores[:, :, f[0], f[1]])
        alpha[A][I][R][:N - k, k] = arcs_r
        alpha[B][I][R][k:N, N - k - 1] = arcs_r
        AIR = alpha[A][I][R][: N - k, 1 : k + 1]
        BIL = alpha[B][I][L][k:, N - k - 1 : N - 1]
        new = semiring.dot(ACL, BIL)
        alpha[A][C][L][: N - k, k] = new
        alpha[B][C][L][k:N, N - k - 1] = new
        new = semiring.dot(AIR, BCR)
        alpha[A][C][R][: N - k, k] = new
        alpha[B][C][R][k:N, N - k - 1] = new

    root_incomplete_span = semiring.times(alpha[A][C][L][0, :], arc_scores[:, :, torch.arange(N), torch.arange(N)])
    root =  [ Chart((batch,), arc_scores, semiring, cache=cache) for _ in range(N)]
    for k in range(N):
        AIR = root_incomplete_span[:, :, :k+1]
        BCR = alpha[B][C][R][k, N - (k+1):]
        root[k] = semiring.dot(AIR, BCR)
    v = torch.stack([root[l-1][:,i] for i, l in enumerate(lengths)], dim=1)
    return v, [arc_scores_in], alpha


Basically, I don't treat the first token as the root anymore; I handle the root token just after the for-loop, so you may need to adjust the length variable (length = length - 1, since the root is no longer treated as part of the sentence). I tested the modified code and found it to be bug-free.

Conditional and joint span probabilities in TreeCRF

I want to use the TreeCRF class to learn latent tree distributions for constituency trees for sentences. I noticed you can easily obtain the text span marginals with .marginals. However, I am interested in computing more probabilities in the tree distribution, like the conditional probability that one span occurs in the tree, given that another one occurs, or the joint probability of two spans. Is there an easy way to compute these probabilities from the marginals? Or using different torch-struct functionality?

A 'dirty' trick for the conditional probability could be to compute the marginals again, with the potential of the span you want to condition on set to a very high value; the new marginals would then effectively be conditional probabilities (a rough sketch of this is below). But that requires running the parsing algorithm once per condition, which I would ideally like to avoid.
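A minimal sketch of that clamping trick, assuming TreeCRF accepts span log-potentials of shape (batch, N, N, NT); the shapes and the clamping constant here are illustrative assumptions, not the library's documented API:

import torch
from torch_struct import TreeCRF

batch, N, NT = 1, 8, 1
log_potentials = torch.randn(batch, N, N, NT)

p_span = TreeCRF(log_potentials).marginals      # p(span (i, j) in tree)

# Condition on span (i, j): give it an overwhelmingly large potential so that
# (almost) all probability mass goes to trees containing it, then recompute.
i, j = 2, 5
clamped = log_potentials.clone()
clamped[:, i, j] = 1e4
p_cond = TreeCRF(clamped).marginals             # ~ p(span' in tree | (i, j) in tree)

# Joint of two spans via the chain rule: p(A, B) = p(B | A) * p(A).
p_joint = p_cond[:, 1, 4] * p_span[:, i, j]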

DependencyCRF marginals possible error

Hi, while working on #63, I noticed that DependencyCRF marginals may have numerical errors:

>>> crf = DependencyCRF(torch.zeros(1,2,2))
>>> print(crf.partition.exp().item())
3.0
>>> crf.marginals.exp()
tensor([[[1.9477, 1.3956],
         [1.3956, 1.9477]]], grad_fn=<ExpBackward>)

crf.partition is correct; there are 3 trees. Since all edges have weight 1, I'd expect the marginals to be (very close to) 2 on diagonals, and 1 on off-diagonals. But they're not. Is this an error or am I misunderstanding something?

Low memory implementations

A lot of the approaches could also have low-memory versions (like CKY does). Figure out how to enable that in a clean manner (v0.6)

[Bug] DependencyCRF.log_prob returns a positive value when the input arc score is large.

Hi, I found that if I input log_potentials with some large values into the DependencyCRF, the marginal distribution will be slightly over 1, and this also results in log_prob returning a very large positive value during training.
My code is something like this:

dist = DependencyCRF(arc_scores, lengths=mask.sum(-1))
labels = dist.struct.to_parts(arcs[:,1:], lengths=mask.sum(-1)).type_as(arc_scores)
log_prob = dist.log_prob(labels)

I found that my arc_scores contains large values like 807687.0625. This results in the marginals containing values like 1.002, and the final log_prob then returns a very large positive value of 750633.4375.

I think this problem is related to floating-point precision; can you suggest anything?

up sweep and down sweep

I'm interested in the parallel scan algorithm for the linear-chain CRF.

I read the related paper in the tutorial and found that there are two steps,
an up sweep and a down sweep, in order to obtain the all-prefix-sums.

I think in this case, we use that algorithm to obtain all Z(x) for the different lengths in a batch.
But it seems I couldn't find the down-sweep code in the repo. Can you point me to it?
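For readers unfamiliar with the reduction view, here is a toy sketch of the "up sweep" part (my own illustration, not this repository's code): pairs of adjacent edge-potential chunks are combined with a log-semiring matrix product until a single chunk remains, which takes only O(log N) sequential steps.

import torch

# Toy sketch of the up-sweep (binary-tree reduction) view of the chain
# partition function; illustrative only, not the code used in this repo.
def chain_partition(log_potentials):
    # log_potentials: (batch, N-1, C, C), entry [b, n, z_n, z_{n+1}].
    chart = log_potentials
    while chart.shape[1] > 1:
        extra = None
        if chart.shape[1] % 2 == 1:          # carry the odd trailing chunk forward
            chart, extra = chart[:, :-1], chart[:, -1:]
        left, right = chart[:, 0::2], chart[:, 1::2]
        # Matrix product in the log semiring: logsumexp over the shared tag.
        chart = torch.logsumexp(left.unsqueeze(-1) + right.unsqueeze(-3), dim=-2)
        if extra is not None:
            chart = torch.cat([chart, extra], dim=1)
    return chart[:, 0].logsumexp(dim=(-2, -1))

print(chain_partition(torch.randn(2, 9, 5, 5)))  # (batch,) log-partition values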
