
cosmostore's Introduction

CosmoStore

F# Event Store library for various storage providers (Cosmos DB, Table Storage, Marten, InMemory and LiteDB)

Features

  • Storage agnostic F# API
  • Support for Azure Cosmos DB
  • Support for Azure Table Storage
  • Support for Marten
  • Support for In-memory
  • Support for LiteDB
  • Optimistic concurrency
  • ACID compliant
  • Simple Stream querying

Available storage providers

| Storage Provider | Payload type | Package | Author |
|---|---|---|---|
| none (API definition only) | - | CosmoStore | @dzoukr |
| Azure Cosmos DB | Newtonsoft.Json | CosmoStore.CosmosDb | @dzoukr |
| Azure Table Storage | Newtonsoft.Json | CosmoStore.TableStorage | @dzoukr |
| InMemory | Newtonsoft.Json | CosmoStore.InMemory | @kunjee |
| Marten | Newtonsoft.Json | CosmoStore.Marten | @kunjee |
| LiteDB | BsonValue / BsonDocument | CosmoStore.LiteDb | @kunjee |
| ServiceStack | 'a | CosmoStore.ServiceStack | @kunjee |

What is new in version 3

All previous versions of CosmoStore were tightly coupled to the Newtonsoft.Json library and used its JToken as the default payload for events. Since version 3.0 this no longer applies. The whole definition of EventStore was rewritten to be fully generic in both the payload and the version. Why? Some libraries not only use a different payload than JToken, but may also use a different type for Version than int64 (the default before version 3). Authors of libraries built on the CosmoStore API can now use whatever payload and version types best fit their storage mechanism.
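As an illustration (the aliases below are hypothetical, but the payload types match the provider table above), the same generic definition can be specialized per storage provider:

```fsharp
open Newtonsoft.Json.Linq

// Hypothetical aliases only: each provider picks the payload/version pair
// that fits its storage. Payload types follow the provider table above.
type JsonEventStore   = CosmoStore.EventStore<JToken, int64>           // Cosmos DB, Table Storage
type LiteDbEventStore = CosmoStore.EventStore<LiteDB.BsonValue, int64> // LiteDB
```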

Event store

The Event store (defined as an F# record) is by design storage agnostic, which means the API is the same whether you use Cosmos DB or Table Storage.

type EventStore<'payload,'version> = {
    AppendEvent : StreamId -> ExpectedVersion<'version> -> EventWrite<'payload> -> Task<EventRead<'payload,'version>>
    AppendEvents : StreamId -> ExpectedVersion<'version> -> EventWrite<'payload> list -> Task<EventRead<'payload,'version> list>
    GetEvent : StreamId -> 'version -> Task<EventRead<'payload,'version>>
    GetEvents : StreamId -> EventsReadRange<'version> -> Task<EventRead<'payload,'version> list>
    GetEventsByCorrelationId : Guid -> Task<EventRead<'payload,'version> list>
    GetStreams : StreamsReadFilter -> Task<Stream<'version> list>
    GetStream : StreamId -> Task<Stream<'version>>
    EventAppended : IObservable<EventRead<'payload,'version>>
}

Each function on the record is explained in a separate chapter.

Initializing Event store for Azure Cosmos DB

The Cosmos DB Event store has its own configuration type that reflects database specifics such as Request Unit (RU) throughput.

type Configuration = {
    DatabaseName : string
    ContainerName : string
    ConnectionString : string
    Throughput : int
    InitializeContainer : bool
}

Note: If you don't know these terms, check the official documentation.

The configuration can be created "manually", or you can use the default setup (fixed collection with 400 RU/s):

open CosmoStore

let cosmosDbUrl = Uri "https://mycosmosdburl" // check Keys section on Azure portal
let cosmosAuthKey = "VeryPrivateKeyValue==" // check Keys section on Azure portal
let myConfig = CosmosDb.Configuration.CreateDefault cosmosDbUrl cosmosAuthKey

let eventStore = myConfig |> CosmosDb.EventStore.getEventStore

Initializing Event store for Azure Table Storage

Configuration for Table Storage is much simpler, since we only need an account name and an authentication key.

type StorageAccount =
    | Cloud of accountName:string * authKey:string
    | LocalEmulator

type Configuration = {
    TableName : string
    Account : StorageAccount
}

As with Cosmos DB, you can easily create a default configuration.

open CosmoStore

let storageAccountName = "myStorageAccountName" // check Keys section on Azure portal
let storageAuthKey = "VeryPrivateKeyValue==" // check Keys section on Azure portal
let myConfig = TableStorage.Configuration.CreateDefault storageAccountName storageAuthKey

let eventStore = myConfig |> TableStorage.EventStore.getEventStore

Writing Events to Stream

Events are data structures you want to write (append) to some "shelf", also known as a Stream. An Event for writing is defined by this type:

type EventWrite<'payload> = {
    Id : Guid
    CorrelationId : Guid option
    CausationId : Guid option
    Name : string
    Data : 'payload
    Metadata : 'payload option
}
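For example, a minimal sketch of building such an event (assuming the default JToken payload used by the Cosmos DB and Table Storage providers; the helper and event name are made up for illustration):

```fsharp
open System
open Newtonsoft.Json.Linq
open CosmoStore

// Hypothetical helper: wrap an already-serialized JSON payload in an EventWrite.
let createEventWrite name (payload: JToken) : EventWrite<JToken> =
    { Id = Guid.NewGuid()   // unique event id
      CorrelationId = None  // no correlation tracking in this sketch
      CausationId = None
      Name = name
      Data = payload
      Metadata = None }

let eventToWrite =
    createEventWrite "ItemAdded" (JToken.Parse """{ "itemId": 42 }""")
```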

When writing Events to a Stream, you usually expect them to be written at some version, so you must specify an optimistic concurrency strategy. For this purpose the ExpectedVersion type exists:

type ExpectedVersion<'version> =
    | Any
    | NoStream
    | Exact of 'version

There are two functions for writing Events to a Stream: AppendEvent for writing a single Event and AppendEvents for writing multiple Events.

let expected = ExpectedVersion.NoStream // we are expecting brand new stream
let eventToWrite = ... // get new event to be written
let streamId = "MyAmazingStream"

// writing first event
eventToWrite |> eventStore.AppendEvent streamId expected 

let moreEventsToWrite = ... // get a list of more events
let newExpected = ExpectedVersion.Exact 2L // we expect the next event written to be version 2

// writing another N events
moreEventsToWrite |> eventStore.AppendEvents streamId newExpected

If everything goes well, you will get back a list (in a Task) of written events (type EventRead, explained in the next chapter).

Reading Events from Stream

When reading Events back from a Stream, you'll get a little more information than you wrote:

type EventRead<'payload,'version> = {
    Id : Guid
    CorrelationId : Guid option
    CausationId : Guid option
    StreamId : StreamId
    Version: 'version
    Name : string
    Data : 'payload
    Metadata : 'payload option
    CreatedUtc : DateTime
}

You have two options for reading back stored Events. You can read a single Event by Version using the GetEvent function:

// return 2nd Event from Stream
let singleEvent = 2L |> eventStore.GetEvent "MyAmazingStream"

Or read a list of Events using the GetEvents function. For such a read you need to specify a range:

// return 1st and 2nd Events from Stream
let firstTwoEvents = EventsReadRange.VersionRange(1L, 2L) |> eventStore.GetEvents "MyAmazingStream"

// return all events
let allEvents = EventsReadRange.AllEvents |> eventStore.GetEvents "MyAmazingStream"

To see all the possibilities, have a look at the EventsReadRange definition:

type EventsReadRange<'version> =
    | AllEvents
    | FromVersion of 'version
    | ToVersion of 'version
    | VersionRange of fromVersion:'version * toVersion:'version
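As an example, a projection that has already processed events up to version 2 could catch up using FromVersion (a sketch; whether the bound is inclusive is an assumption here):

```fsharp
// Read everything from version 3 onwards (assumes FromVersion is inclusive).
let newEvents =
    EventsReadRange.FromVersion 3L
    |> eventStore.GetEvents "MyAmazingStream"
```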

If you are interested in Events based on a stored CorrelationId, you can use a function introduced in version 2: GetEventsByCorrelationId.

let myCorrelationId = ... // Guid value
let correlatedEvents = myCorrelationId |> eventStore.GetEventsByCorrelationId

Reading Streams from Event store

Each Stream has its own metadata:

type Stream<'version> = {
    Id : string
    LastVersion : 'version
    LastUpdatedUtc : DateTime
}

If you know the exact value of a Stream Id, you can use the GetStream function. To query more Streams, use the GetStreams function. Querying works in a similar way to filtering Events by range, but here you query Streams by Id:

let allAmazingStreams = StreamsReadFilter.StartsWith("MyAmazing") |> eventStore.GetStreams
let allStreams = StreamsReadFilter.AllStreams |> eventStore.GetStreams

The complete set of possibilities is defined by the StreamsReadFilter type:

type StreamsReadFilter =
    | AllStreams
    | StartsWith of string
    | EndsWith of string
    | Contains of string

Observing appended events

Since version 1.4.0 you can observe appended events by hooking into the EventAppended property (IObservable<EventRead>). Using the FSharp.Control.Reactive library is recommended, but not required.
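A minimal sketch using only the Observable module from FSharp.Core (no extra library), assuming an eventStore value as initialized earlier:

```fsharp
// Print the name of every event appended through this store instance.
let subscription =
    eventStore.EventAppended
    |> Observable.subscribe (fun ev -> printfn "Appended: %s" ev.Name)

// Dispose to stop observing.
subscription.Dispose()
```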

Comparison with Jet's Equinox?

Coming from Jet's Equinox? Please see the excellent comment by @bartelink describing the conceptual differences between Equinox and CosmoStore: #6 (comment)

Known issues (Azure Table Storage only)

Azure Table Storage currently allows only 100 operations (appends) in one batch. CosmoStore reserves one operation per batch for Stream metadata, so if you try to append more than 99 events to a single Stream, you will get an InvalidOperationException.
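One possible workaround (a sketch, not part of the library): split large appends into chunks of at most 99 events. Note this relaxes concurrency to ExpectedVersion.Any between chunks, so the write is not atomic across the whole list:

```fsharp
// Hypothetical helper: append events in batches of <= 99 to stay under the
// Table Storage batch limit. Uses F# 6's task expression.
let appendInChunks streamId (events: EventWrite<_> list) = task {
    for chunk in events |> List.chunkBySize 99 do
        let! _ = eventStore.AppendEvents streamId ExpectedVersion.Any chunk
        ()
}
```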

cosmostore's People

Contributors

ctaggart, dzoukr, eugene-g, jspearman3, kunjee17, ninjarobot, theangrybyrd, viktorvan


cosmostore's Issues

In memory example configuration

Hey, a while ago I was playing around with CosmoStore and I used the in-memory approach.

#load ".paket/load/main.group.fsx" 

open System
open System.Collections.Concurrent
open CosmoStore
open CosmoStore.InMemory
open Thoth.Json.Net
open Microsoft.FSharp.Reflection

let config : Configuration =
    { InMemoryStreams = ConcurrentDictionary<string,Stream>()
      InMemoryEvents = ConcurrentDictionary<Guid, EventRead>() }
    
let store = EventStore.getEventStore config

type Event =
    | Increase of int
    | Decrease of int

let encodeEvent = Encode.Auto.generateEncoder<Event>()
let decodeEvent = Decode.Auto.generateDecoder<Event>()

let getUnionCaseName (x:'a) = 
    match FSharpValue.GetUnionFields(x, typeof<'a>) with
    | case, _ -> case.Name  

let createEvent event =
    { Id = (Guid.NewGuid())
      CorrelationId = None
      CausationId = None
      Name = getUnionCaseName event
      Data = encodeEvent event
      Metadata = None }
    
let appendEvent event =
    store.AppendEvent "CounterStream" ExpectedPosition.Any event
    |> Async.AwaitTask
    |> Async.RunSynchronously

let increase amount =
    Increase(amount)
    |> createEvent
    |> appendEvent
    
let decrease amount =
    Decrease(amount)
    |> createEvent
    |> appendEvent

increase 5
decrease 3
increase 198
decrease 203
increase 4

store.GetEvents "CounterStream" EventsReadRange.AllEvents
|> Async.AwaitTask
|> Async.RunSynchronously
|> List.fold (fun acc e ->
    match Decode.fromValue "$" decodeEvent e.Data with
    | Ok(Increase i) -> acc + i
    | Ok(Decrease d) -> acc - d
    | Error e ->
        failwithf "invalid event from store, %s" e
) 0
|> printfn "Projected state is %d"

When upgrading to v3 I noticed that my configuration isn't correct anymore.
Could a sample perhaps be added to the README for InMemory configuration?
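For anyone hitting the same wall, here is an unverified guess at the v3 shape, simply making the two dictionaries generic over payload and version (check the actual CosmoStore.InMemory 3.x source before relying on this):

```fsharp
open System
open System.Collections.Concurrent
open CosmoStore
open Newtonsoft.Json.Linq

// Unverified sketch: the v2 configuration above with v3's generic type
// parameters filled in for a JToken payload and int64 version.
let configV3 : InMemory.Configuration<JToken, int64> =
    { InMemoryStreams = ConcurrentDictionary<string, Stream<int64>>()
      InMemoryEvents = ConcurrentDictionary<Guid, EventRead<JToken, int64>>() }

let storeV3 = InMemory.EventStore.getEventStore configV3
```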

Aggregate in CosmoStore

I don't know if this is a good idea or not, but CosmoStore (the API library) should have an aggregate type as well (totally inspired by your blog).

It would help users write a little less code. I'm also in favor of any other generic helpers we can provide that help end users (not the Store creators like us) write ES-based systems easily.

I haven't thought it through thoroughly, but these include, and are not limited to, Aggregate, ReadDB, Projection support, etc.

I don't want to overwhelm users or bind them to one way of doing things, but if something common is there we can surely help them.

Why no CausationId?

I've been reading the docs and I see CorrelationId but no CausationId. Is this a deliberate decision? If so, why? Note that it's very likely I would know this if I were more experienced in ES.

Confusion in types. Might need some extra functions.

Even though we are providing more generic types to make things more flexible, in the code we still have JToken and int64.

Obviously we do the logic on them. We might need to update that to truly support everything.

Azure Table indexing issues

Hi,

Firstly, good stuff so far. This project is looking pretty interesting.

So, I am seriously considering using this, however, I have a couple of scalability concerns which I feel need addressing first. Let me clarify:

Azure Table only indexes the partition key and row key, which form a sort of composite primary key. As I understand it there are no secondary indexes at all, so if I were to query a table field which is not the partition or row key, I would effectively be running a 'partition scan' (a mini table scan).

Because of this, it is imperative to store (at least a copy of) the data in a way that is read-optimized if it is going to be used as a primary store. Having dug through the source code, there seem to be a few common query groups:

  • Get all events in stream

  • Get all events in stream where offset > myLocalOffset (there are others but this is most prevalent IMO)

  • Get a specific stream summary by streamId

  • Get all stream summaries (so I can determine if there are new events to accumulate over)

  • Get a specific stream summary (as above, probably more important in practice)

A couple of things jumped out at me -

  • If I want to query multiple stream summaries, I have to do a cross-partition query which is inefficient
  • If I want to do a range query, Position is not indexed, which means scanning all events every time.

The latter worries me the most, and I think it can easily be fixed by simply swapping the Id field with the Position field. As far as I can tell the event Ids are arbitrary and are rarely (if ever) queried against. Perhaps we can change the RowKey to actually be the position index and keep the Id as a secondary property? The benefit is that range queries will be executed against a clustered index, and thus will be massively more efficient.

The summaries issue could be reduced by putting the summaries in a separate partition or similar where the RowKey is the streamId, although this has its own scalability issues if you have a lot of streams. Alternatively, you probably at least want an API that gets just a single stream by partition key and row key (stream id). This would be far more efficient than a cross partition query, and seems like the most common operation to me. I often know what I am projecting from, so to provide that explicitly in a query makes sense (although there is an argument to be made that I could just directly get the events after X, which will be 0 events most of the time).

I would be happy to contribute or help out here, btw; it's just that I am aware this is a massive breaking change for existing users, so I didn't want to dive right in. Let me know if/how I can help though!

There is a very comprehensive article about all this indexing stuff, along with patterns for how to solve it, here:
https://docs.microsoft.com/en-gb/azure/cosmos-db/table-storage-design-guide#index-entities-pattern

Azure Table Storage Change Feed?

Hi,

I saw the comment that the author moved from Cosmos DB to Azure Table Storage due to high costs with the former. How do you push the data to the Read Model though? I couldn't find a Change Feed or similar for Table Storage, and polling doesn't sound workable.

Best regards,
Deyan

Collection name cannot be specified?

Hi,

I am looking into creating a Cosmos DB database with shared throughput across all collections/containers. It seems the database name can be specified when setting up the configuration, but the collection/container name cannot? Asking because I wanted to create several XxxEvents collections and have different Cosmos DB Triggers assigned to each one.

Version 3 discussion

This issue is created to track discussion about the upcoming version 3 of CosmoStore and what its main goal should be. For me it is mainly about making EventStore "JSON payload representation agnostic" (in other words, not forcing the use of JToken), since Microsoft itself announced a new JSON API and is moving away from Newtonsoft.Json coupling in its libraries. It will also make it possible to add other stores like LiteDB without unnecessary transformation to JToken and back.

Technically it should be quite easy to make EventWrite, EventRead and EventStore generic like:

type EventWrite<'a> = {
    Id : Guid
    CorrelationId : Guid option
    CausationId : Guid option
    Name : string
    Data : 'a
    Metadata : 'a option
}

Also upgrading existing libraries should be straightforward, so version 3 could be done pretty quickly.

Any other ideas? cc: @kunjee17 @ctaggart @TheAngryByrd @viktorvan

One or more errors occurred. (unordered field Item) - This is probably more a question.

Hi there!
I am looking for an event store which will be friendly to F#, so one written in F# should be perfect ;D I tried to use it with Postgres + Marten, but I am failing to read events if they are represented by a discriminated union. Let me just give you the test that I run:

module Tests

open System
open System.Threading.Tasks
open Xunit
open CosmoStore.Marten
open CosmoStore
open FsUnit.Xunit

type DummyEvent = { IncrementedBy: int }
type DummyEvent2 = { Date: DateTime }
type Dummies =
    | Dummy1 of DummyEvent
    | Dummy2 of DummyEvent2

[<Fact>]
let ``Saving and reading`` () =
    // Arrange
    let streamId = "MyAmazingStream"

    let config: Configuration =
        { Host = "localhost"
          Username = "postgres"
          Password = "Secret!Passw0rd"
          Database = "cativity" }

    let es = EventStore.getEventStore config

    let eventWrite: EventWrite<Dummies> =
        { Id = Guid.NewGuid()
          CorrelationId = None
          CausationId = None
          Name = nameof DummyEvent
          Data = Dummies.Dummy1{ IncrementedBy = 4 }
          Metadata = None }
    // Act + Assert
    [ eventWrite ]
        |> es.AppendEvents streamId ExpectedVersion.Any
        |> Async.AwaitTask
        |> Async.RunSynchronously
        |> ignore
    let events =
        EventsReadRange.AllEvents
        |> es.GetEvents "MyAmazingStream"
        |> Async.AwaitTask
        |> Async.RunSynchronously
    [ events.Head.Data ] |> should equal [ eventWrite.Data ]

The test breaks on event read.

System.AggregateException
One or more errors occurred. (unordered field Item)

So it is quite mysterious. Probably I am doing something wrong.
Without a DU it works just fine, i.e. if we use EventWrite<DummyEvent>.
Any hints? Thanks, guys, in advance!

Need some details for contribution

Hi @Dzoukr

It would be great if you could provide some details around contribution. I'll start with a few questions.

  • What is getting stored? From the code I figured out that the Stream type and EventRead are stored, but I am not sure if I missed anything.
  • When we are creating EventRead, is createdUtc based on the event's or the stream's createdUtc?
  • Are there any validations I need to take care of? I found one validation about the expected position, but are there any others?
  • Is there any coding guideline for adding support for another data store?

Generalize Cosmo Store

@Dzoukr Loved your work. Is it possible to abstract away the implementation to make it more general?

I was trying the Marten Event Store but it was tied to C# for the projection part. Marten as a store is wonderful, though.

And then there is LiteDB for simple testing or simple applications. I am a noob at ES so I can't say how complicated it would be, but it would be good to have.

Easy Migration from AzureTable to Serverless CosmosDB Account

I really enjoy using CosmoStore. Thanks for creating it!

Apologies if anyone has already shared this or I missed documentation.

I was running into some of the size limitations using AzureTable storage and decided to give the new AzureTable and Serverless/Consumption plan support for CosmosDB a try.

https://learn.microsoft.com/en-us/azure/cosmos-db/table/introduction

It was easy enough to update the existing TableStorage configuration to make that happen. After creating your CosmosDB AzureTable instance (see the link above for that)...

let sovereignCloud = 
    CosmoStore.TableStorage.SovereignCloud (acctName, acctKey, "cosmos.azure.com")
let config = 
    {   
        TableName = tableName
        Account = sovereignCloud
    }
config
|> CosmoStore.TableStorage.EventStore.getEventStore

SovereignCloud to the rescue! Someone was forward-thinking there!

Thanks!

There are some dependency issues

I was trying to pull CosmoStore.ServiceStack. It should depend on CosmoStore 3.0 but it depends on CosmoStore 3.0.1, and that is making paket a little crazy.

Can you please cross-check? I guess something got messed up at the time of publishing.

Live subscription using Cosmos change feed

Was planning on building an eventstore for CosmosDb like this one.
One requirement would be to support live subscriptions, which could be implemented using the change feed of CosmosDb.
That would break the idea of having 1 interface for both storage implementations (or you would need to implement some kind of polling for Table Store or use the event grid notifications).

I'm totally fine to implement this myself, but I'm in doubt whether to create a PR as improvement for your project or fork/start over with a Cosmos-only implementation.

What would you prefer?

Missing Version in CosmoStore 3.0?

I tried CosmoStore.LiteDb today and had a weird experience.

I installed it with paket. When running it I got a runtime error about the missing DLL CosmoStore in version 3.0.0.0.

I then opened the CosmoStore.dll that comes with the NuGet package in dnSpy, where it shows version 1.0.0.0. I changed this to 3.0.0.0 to get rid of the runtime error.

To me it looks like the 3.0 release is faulty, but I am absolutely not sure about this, because it is already 4 months old. Am I really the first one to encounter this issue?

Switching to Azure.Data.Tables

The WindowsAzure.Storage libraries are pretty old - would you be open to updating this to Azure.Data.Tables? I've got a work in progress, but wanted to make sure @Dzoukr is good with it.

Azure Table Storage requires JToken payload

It seems that the Azure Table Store implementation somehow requires a JToken payload. Although if I understand correctly it should allow any type of payload since version 3.

Example script:

#r "nuget: CosmoStore, Version=3.0.1"
#r "nuget: CosmoStore.TableStorage, Version=3.0.1"

open CosmoStore
open System

let storageAccountName = "account"
let storageAuthKey = "key"
let myConfig = TableStorage.Configuration.CreateDefault storageAccountName storageAuthKey
let eventStore = myConfig |> TableStorage.EventStore.getEventStore

let serializedData = """
{
    "value": "data",
    "somenum": 8
}
"""

let eventToWrite = {
    Id = Guid.NewGuid()
    CorrelationId = None
    CausationId = None
    Name = "My custom event"
    Data = serializedData
    Metadata = Some "Super meta"
}
let streamId = "awesome-stream"

eventToWrite |> eventStore.AppendEvent streamId ExpectedVersion.NoStream |> Async.AwaitTask |> Async.RunSynchronously

When you run the script you get the following error:

C:\repos\cosmo\cosmotest.fsx(29,17): error FS0001: Type mismatch. Expecting a
    'EventWrite<string> -> 'a'
but given a
    'EventWrite<Newtonsoft.Json.Linq.JToken> -> Threading.Tasks.Task<EventRead<Newtonsoft.Json.Linq.JToken,int64>>'
The type 'string' does not match the type 'Newtonsoft.Json.Linq.JToken'

Document with type=Stream

What exactly is the purpose of the document with type=Stream, including the lastPosition and lastUpdatedUtc attributes? Shouldn't it be in another collection?
Btw, what is the partition key of the collection?

Favored way of making JToken?

Hi!

I have to do a lot of back and forth to convert my events to the JToken type. Since Newtonsoft's JsonConvert default converter serializes F# types in a very bloated and unnatural way, I have to use FSharpLu.Json's serializer (see the difference here: https://github.com/Microsoft/fsharplu/wiki/fsharplu.json#option-type).

However, FSharpLu.Json converts objects directly to strings, which means I have to take the result of that and parse it with JToken.Parse to get the desired JToken type. That seems like a lot of unnecessary conversion back and forth. Probably not a huge deal for a modern CPU, but it still creates a little unnecessary load, especially considering that our app will grow and eventually have a lot of events that need processing.

So... is there a way to pass it a string directly without parsing it first with JToken.Parse? Also, when I read the event, I have to do a .ToString() to convert it back to raw JSON so that FSharpLu.Json can convert it to an object again.

Also, I'm really curious: how do you create JSON? Do you use JsonConvert from Newtonsoft? How do you deal with the awkward JSON it produces for sum types, option types, etc.? Is there a better library than FSharpLu.Json? Chiron seems to have zero documentation, and all other JSON projects look pretty dead :( It seems that JSON + F# is almost a little unsolved...?
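For what it's worth, the round-trip described above can be wrapped in two small helpers (a sketch assuming FSharpLu.Json's Compact serializer; the helper names are made up):

```fsharp
open Newtonsoft.Json.Linq
open Microsoft.FSharpLu.Json

// Serialize any value with FSharpLu's compact representation, then parse
// the resulting string into the JToken that CosmoStore expects.
let toJToken (value: 'a) : JToken =
    value |> Compact.serialize |> JToken.Parse

// Reverse: render the JToken back to raw JSON and deserialize it.
let ofJToken<'a> (token: JToken) : 'a =
    token.ToString() |> Compact.deserialize<'a>
```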

We should have GitHub Actions do all the CI/CD

As the number of stores grows we should have CI/CD and publishing to NuGet automated, just to lower the burden on the maintainer.

Also, we have docs and site deployment as well, mostly GitHub docs. (I guess we might need more than the readme, as mentioned in #34.)

ESERROR_POSITION_POSITIONNOTMATCH return last position?

Would it be possible to return the last stream position (nextPosition) in the ESERROR_POSITION_POSITIONNOTMATCH exception, so that the client code can parse it out and return 409 with ETag = last stream position (in the context of REST API)?

Comparison with Jet's Equinox

Didn't want to get too far off-topic in the other issue, so creating a new one to discuss this.
One of the reasons it took me longer to start working on the changefeed, apart from me being a typically optimistic developer, is the fact that I ran into this library: https://github.com/jet/equinox

I'd seen Jet's presentation long ago and almost no longer expected them to release this as open source, but they did. Given that they use this in production at their scale, I think it's valuable to do a little comparison between their approach and CosmoStore.

I'm not saying this should be a reason to stop developing CosmoStore, but I think it's good to agree on the reason why we develop a new CosmosDb-based event store and be explicit about that.

Remove Fixed collection from CosmosDB Configuration

Yesterday at the Build 2018 conference Microsoft announced they dropped the Fixed (up to 10 GB) collections for CosmosDB and lowered the minimum initial throughput from 1000 RU/s to 400 RU/s. From a configuration point of view this could be a breaking change, so I would rather mark it obsolete for a while (a few releases) and remove it later, while starting to use a proper (unlimited) collection setup with a partition id.
