
fabric8-ipaas's Introduction

This repository has been archived and reset; you can still look at the git history for the old content.

fabric8-ipaas's People

Contributors

albertocsm, chirino, chmouel, cmoulliard, davsclaus, ericwittmann, fusesource-ci, gitter-badger, iocanel, jamesnetherton, jimmidyson, jpechane, jstrachan, kurtstam, nicolaferraro, oscerd, rajdavies, rawlingsj, rhuss, rparree, valdar


fabric8-ipaas's Issues

make it easy to browse all destinations across brokers

It's really handy in hawtio, via Jolokia, to be able to browse destinations, send messages and so forth.

I wonder if we could provide a facade to make a sharded cluster of brokers appear like one broker? It kind of relates to #248, but this is really more about the hawtio console for messaging.

Hawtio not working again for amqbroker

Recently I submitted a pull request to make Jolokia work on amqbroker. However, commit 8e9e4f1 breaks this because it removes the docker from-image setting (fabric8/s2i-java) from amqbroker/pom.xml, which now falls back to the one defined in the parent pom: fabric8/java-jboss-openjdk8-jdk

Is the general approach now to drop fabric8/s2i-java in favour of fabric8/java-jboss-openjdk8-jdk? If so, should java-jboss-openjdk8-jdk not be fixed to expose Jolokia correctly?

Thanks.

BearerToken validation logic in apiman is wrong

The symptom is that validation returns the most recently created user. So if you had used "oc login" with a new user, requests would be treated as coming from that user.

The URL to get the current user was wrong (before it brought back a list).

Disable jube build

Seems like the jube plugin runs when you do a mvn clean install; we should disable jube like we have done for the quickstarts.

Released version either as source or docker repo that actually works

Hi

I am trying to stitch together a Fabric8 platform, primarily for Java-based applications, with OpenShift and the Fabric8MQ and ActiveMQ based templates that, as I understand it, come from this repository.

However, none of the recent versions appears to actually work. The most recent release, 2.2.90, and the snapshot version 2.2.91-SNAPSHOT both appear to fail on the basic start of the ActiveMQ broker.

org.jboss.weld.exceptions.DeploymentException: WELD-000123: Error loading io.fabric8.cdi.weld.ClientProducer defined in io.fabric8.cdi.weld.ClientProducer in jar:file:/deployments/lib/fabric8-cdi-2.2.91-tests.jar!/META-INF/beans.xml@21
at org.jboss.weld.bootstrap.enablement.GlobalEnablementBuilder$ClassLoader.apply(GlobalEnablementBuilder.java:269)
at org.jboss.weld.bootstrap.enablement.GlobalEnablementBuilder$ClassLoader.apply(GlobalEnablementBuilder.java:256)
at com.google.common.collect.Lists$TransformingRandomAccessList.get(Lists.java:572)
at java.util.AbstractList$Itr.next(AbstractList.java:358)
at java.util.AbstractCollection.toArray(AbstractCollection.java:141)
at com.google.common.collect.ImmutableSet.copyOf(ImmutableSet.java:378)
at org.jboss.weld.bootstrap.enablement.GlobalEnablementBuilder.createModuleEnablement(GlobalEnablementBuilder.java:233)
at org.jboss.weld.bootstrap.BeanDeployment.createEnablement(BeanDeployment.java:219)
at org.jboss.weld.bootstrap.WeldStartup.startInitialization(WeldStartup.java:377)
at org.jboss.weld.bootstrap.WeldBootstrap.startInitialization(WeldBootstrap.java:76)
at org.jboss.weld.environment.se.Weld.initialize(Weld.java:146)
at io.fabric8.amq.Main.main(Main.java:31)
Caused by: java.lang.NoClassDefFoundError: org/easymock/IAnswer
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getDeclaredConstructors(Class.java:2020)
at org.jboss.weld.environment.deployment.WeldResourceLoader.classForName(WeldResourceLoader.java:52)
at org.jboss.weld.bootstrap.enablement.GlobalEnablementBuilder$ClassLoader.apply(GlobalEnablementBuilder.java:267)

Can you point to a tag in either the master branch or another branch that has "release quality" and that I can trust to start from?

Best regards
Lars Milland

allow consumers to be moved to a different broker

To rebalance things we need to be able to move destinations to different brokers. Moving the producer side is easy; then, once the destination on the old broker is empty, we should move the consumers over to the new broker.

To move a consumer over we need the gateway to remove consumers from the old broker and add them to the new broker. So we'll need to maintain a list of consumers per broker in the gateway - and remove them as clients disconnect from the gateway.

We should then define an operation (e.g. via REST or a CLI) to move a destination to a different broker, which moves the producer. Then we watch for any destination where the producer broker != consumer broker and, when that consumer broker has no more messages for that destination, we move the consumers across.
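A minimal sketch of the bookkeeping this implies for the gateway, assuming made-up class and member names (GatewayConsumerRegistry and friends are illustrative, not existing msg-gateway code):

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical gateway bookkeeping: remember which consumers are attached to which
// broker for each destination, so they can be detached from the old broker and
// re-attached to the new one when a destination moves.
public class GatewayConsumerRegistry {
    // destination -> broker id currently serving its consumers
    private final Map<String, String> consumerBrokerByDestination = new ConcurrentHashMap<>();
    // destination -> ids of consumers connected through the gateway
    private final Map<String, Set<String>> consumersByDestination = new ConcurrentHashMap<>();

    public void addConsumer(String destination, String brokerId, String consumerId) {
        consumerBrokerByDestination.put(destination, brokerId);
        consumersByDestination.computeIfAbsent(destination, d -> new CopyOnWriteArraySet<>()).add(consumerId);
    }

    public void removeConsumer(String destination, String consumerId) {
        Set<String> ids = consumersByDestination.get(destination);
        if (ids != null) {
            ids.remove(consumerId);   // client disconnected from the gateway
        }
    }

    // Called once the old broker has no more messages for the destination.
    public void moveConsumers(String destination, String newBrokerId) {
        consumerBrokerByDestination.put(destination, newBrokerId);
        for (String consumerId : consumersByDestination.getOrDefault(destination, Collections.emptySet())) {
            reattach(consumerId, newBrokerId);   // close the consumer on the old broker, open it on the new one
        }
    }

    private void reattach(String consumerId, String brokerId) {
        // placeholder: the real gateway would re-create the broker-side consumer here
    }
}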

[apiman] apiman breaks during startup if ElasticSearch is not available

OpenShift does not guarantee the order in which pods start. If ElasticSearch is initialized AFTER the apiman manager, the manager is not available and has to be restarted.
As OpenShift is an elastic environment, apiman should be able to survive such a situation and start working once ES becomes available.

707:17:51,108  WARN FAILED o.e.j.s.ServletContextHandler@29d070c7{/apiman,null,STARTING}: java.lang.RuntimeException: org.apache.http.NoHttpResponseException: elasticsearch-v1.jpechane.svc.cluster.local:9200 failed to respond
java.lang.RuntimeException: org.apache.http.NoHttpResponseException: elasticsearch-v1.jpechane.svc.cluster.local:9200 failed to respond
    at io.apiman.manager.api.es.EsStorage.initialize(EsStorage.java:164)
    at io.apiman.manager.api.micro.ManagerApiMicroServiceCdiFactory.initES(ManagerApiMicroServiceCdiFactory.java:290)
    at io.apiman.manager.api.micro.ManagerApiMicroServiceCdiFactory.provideStorageQuery(ManagerApiMicroServiceCdiFactory.java:139)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
    at org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:78)
    at org.jboss.weld.injection.producer.ProducerMethodProducer.produce(ProducerMethodProducer.java:98)
    at org.jboss.weld.injection.producer.AbstractMemberProducer.produce(AbstractMemberProducer.java:161)
    at org.jboss.weld.bean.AbstractProducerBean.create(AbstractProducerBean.java:181)
    at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96)
    at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:101)
    at org.jboss.weld.bean.ContextualInstanceStrategy$ApplicationScopedContextualInstanceStrategy.get(ContextualInstanceStrategy.java:141)
    at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
    at org.jboss.weld.bean.proxy.ContextBeanInstance.getInstance(ContextBeanInstance.java:99)
    at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:99)
    at org.jboss.weld.proxies.IStorageQuery$825249753$Proxy$_$$_WeldClientProxy.listPolicyDefinitions(Unknown Source)
    at io.fabric8.apiman.BootstrapFilter.loadDefaultPolicies(BootstrapFilter.java:85)
    at io.fabric8.apiman.BootstrapFilter.init(BootstrapFilter.java:239)
    at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
    at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
    at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
    at org.eclipse.jetty.server.Server.start(Server.java:387)
    at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
    at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
    at org.eclipse.jetty.server.Server.doStart(Server.java:354)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
    at io.apiman.manager.api.micro.ManagerApiMicroService.start(ManagerApiMicroService.java:81)
    at io.fabric8.apiman.ApimanStarter.main(ApimanStarter.java:42)
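
One way to make the manager tolerate this would be to retry the ElasticSearch initialization with a back-off instead of failing the whole deployment. A rough sketch, where initStorage stands in for whatever the real EsStorage bootstrap call is (not the actual apiman API):

// Hypothetical retry wrapper: keep retrying the ES initialization with an exponential
// back-off so the pod survives ElasticSearch becoming available only later.
public final class RetryingEsInit {
    public static void initWithRetry(Runnable initStorage) throws InterruptedException {
        long delayMs = 1_000;
        while (true) {
            try {
                initStorage.run();            // stand-in for the ES storage bootstrap
                return;                       // initialized, continue normal startup
            } catch (RuntimeException e) {    // the NoHttpResponseException surfaces wrapped like this
                System.err.println("ElasticSearch not ready yet: " + e.getMessage());
                Thread.sleep(delayMs);
                delayMs = Math.min(delayMs * 2, 60_000);   // cap the back-off at one minute
            }
        }
    }
}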

provide global metrics of all brokers?

so folks can see global metrics for queue sizes and throughput rates.

Maybe the best way to do this is via prometheus / hawkular metrics per broker pod with aggregation?
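
If Prometheus is the route taken, each broker pod could expose per-destination gauges and the aggregation would happen at query time. A tiny sketch using the Prometheus Java simpleclient; the metric name and the way depths are obtained are assumptions:

import io.prometheus.client.Gauge;

// Hypothetical exporter: publish queue depth per broker and destination so a query like
// sum(broker_queue_depth) by (destination) gives the global view across all brokers.
public class BrokerMetrics {
    private static final Gauge QUEUE_DEPTH = Gauge.build()
            .name("broker_queue_depth")
            .help("Pending messages per destination on this broker")
            .labelNames("broker", "destination")
            .register();

    public void record(String broker, String destination, long pendingMessages) {
        QUEUE_DEPTH.labels(broker, destination).set(pendingMessages);
    }
}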

sharding (fan out/fan in) for topics?

Right now each broker owns a destination, which works well for topics. I wonder, if we have a massive number of consumers on a single topic, whether we need to support sharding of topics too?

e.g. producers could send to all brokers that own a topic; consumers pick a random broker to consume from (or maybe the one with the fewest consumers?), so that the gateway does a kind of fan-out / fan-in pattern.
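
For the fan-in side, a small sketch of how the gateway might pick a broker for a new topic subscriber, here simply the broker with the fewest consumers (all names are illustrative):

import java.util.Map;
import java.util.Optional;

// Hypothetical fan-in helper: producers fan out to every broker owning the topic, while a
// new subscriber is attached to the broker that currently has the fewest consumers.
public class TopicBrokerPicker {
    public Optional<String> pickBrokerForConsumer(Map<String, Integer> consumersPerBroker) {
        return consumersPerBroker.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}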

Dependency issue over elasticsearch

Apiman has a dependency upon elasticsearch 2.2.54 which clashes with version 2.2.95 deployed with Logging.

Deploying Apiman breaks both applications.

what should we call the default messaging service name?

We've been using fabric8mq up to now. It could be artemis, activemq or something else, I guess.

I wonder if we should just call it activemq for now, since it covers all the protocols that ActiveMQ supports (OpenWire, Stomp, MQTT, AMQP)? Or maybe messaging?

support scale down

If we scale down the brokers, we should be able to use Kubernetes lifecycle hooks - the preStop hook - to ensure the pod that is being scaled down has all its destinations moved to another broker and drained before Kubernetes is allowed to kill the pod.

So mostly the preStop hook needs to (sketched below):

  • mark the pod as stopping, tell the message-gateway to move the pod's destinations to other brokers
  • when that is complete, wait for the destinations to be drained in the pod
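
A rough sketch of what the hook handler could do, with MessageGateway and BrokerAdmin as placeholder interfaces rather than real fabric8 classes:

import java.util.concurrent.TimeUnit;

// Hypothetical preStop handler: the container's preStop hook would invoke something like
// this and only return once the broker has been drained, letting Kubernetes kill the pod.
public class PreStopDrainer {
    private final MessageGateway gateway;   // asks the gateway to reassign destinations
    private final BrokerAdmin broker;       // local view of this broker's destinations

    public PreStopDrainer(MessageGateway gateway, BrokerAdmin broker) {
        this.gateway = gateway;
        this.broker = broker;
    }

    public void drain() throws InterruptedException {
        broker.markStopping();                             // 1. stop accepting new destinations
        gateway.moveDestinationsOffBroker(broker.id());    // 2. producers re-routed to other brokers
        while (broker.totalPendingMessages() > 0) {        // 3. wait until every destination is drained
            TimeUnit.SECONDS.sleep(1);
        }
    }

    // Minimal placeholder interfaces so the sketch is self-contained.
    interface MessageGateway { void moveDestinationsOffBroker(String brokerId); }
    interface BrokerAdmin { String id(); void markStopping(); long totalPendingMessages(); }
}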

msg-gateway fails to find artemis endpoints

The msg-gateway looks for artemis endpoints - but fails with the following exception:

FAILED lookupBrokers
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/endpoints/artemis. Message: User "system:serviceaccount:default:default" cannot get endpoints in project "default". Received status: Status(apiVersion=v1, code=403, details=StatusDetails(causes=[], kind=endpoints, name=artemis, retryAfterSeconds=null, additionalProperties={}), kind=Status, message=User "system:serviceaccount:default:default" cannot get endpoints in project "default", metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={}).
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:264)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:235)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:208)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:197)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:506)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:114)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/lib/kubernetes-client-1.3.69.jar!/:]
at io.fabric8.msg.gateway.brokers.impl.KubernetesBrokerControl.lookupBrokers(KubernetesBrokerControl.java:127)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/:]
at io.fabric8.msg.gateway.brokers.impl.KubernetesBrokerControl.lambda$start$0(KubernetesBrokerControl.java:72)[jar:file:/app/msg-gateway-2.2.94-SNAPSHOT.jar!/:]
at io.fabric8.msg.gateway.brokers.impl.KubernetesBrokerControl$$Lambda$2/2143226352.run(Unknown Source)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_51]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)[:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)[:1.8.0_51]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)[:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_51]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_51]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_51]

support kafka topics & queues?

It would be cool to be able to use Kafka as a store for messages on a topic, and then to be able to consume from any topic using queue-like semantics.

e.g. if a topic were configured in the gateway to be a Kafka topic, the gateway could write the message into a Kafka topic. For consumers, it could read from Kafka (keeping track in ZK of where it is, etc.).

Then for queues we could have a 'kafka queue' microservice: for each logical queue on a topic, a single process acts as the owner, consuming messages from the topic and load balancing them across consumers on any of the available message gateways.

The kafka queue service would consume a kafka topic and send each message to one of the message-gateway processes that have registered a consumer with it (with retry logic & acks etc).

I wonder if the easiest way to implement the kafka queue service is to just use a regular Artemis broker: consume messages from the Kafka topic and, in effect, send a KafkaMessage to the queue in the Artemis broker - a special kind of message that just refers to the offset and number of bytes in the Kafka topic. Then Artemis can do all the usual persistence, load balancing, flow control, acking and so forth.

When the gateway gets the KafkaMessage it then transforms it into the actual payload from the Kafka topic. That way we get JMS queues on a Kafka topic and reuse Kafka for the message persistence, with Artemis implementing each queue and doing consumer flow control.
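
A rough sketch of that bridge, assuming a plain Kafka consumer and a JMS connection into the Artemis broker; the property names on the pointer message are made up for illustration:

import java.time.Duration;
import java.util.Collections;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical "kafka queue" bridge: for each record on the Kafka topic, send a small
// pointer message (topic/partition/offset/length only) to an Artemis queue over JMS.
// The gateway would later resolve that pointer back into the real payload from Kafka.
public class KafkaQueueBridge {
    public void run(KafkaConsumer<byte[], byte[]> consumer, Connection jmsConnection,
                    String kafkaTopic, String queueName) throws Exception {
        consumer.subscribe(Collections.singletonList(kafkaTopic));
        Session session = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageProducer producer = session.createProducer(queue);
        while (true) {
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                Message pointer = session.createMessage();                // body-less "KafkaMessage"
                pointer.setStringProperty("kafkaTopic", record.topic());  // illustrative property names
                pointer.setIntProperty("kafkaPartition", record.partition());
                pointer.setLongProperty("kafkaOffset", record.offset());
                pointer.setIntProperty("kafkaLength", record.serializedValueSize());
                producer.send(pointer);
            }
        }
    }
}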

support sharding of destinations

Rather than an entire destination being owned by a single broker, it'd be nice to define a destination as being sharded, so that each shard is a hash bucket range based on the JMSXGroupID header (or maybe the message ID if there's no JMSXGroupID header), which is used to associate a message with a bucket.
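
A minimal sketch of the bucket selection, assuming a fixed number of hash buckets (names are illustrative):

// Hypothetical shard selection: hash the JMSXGroupID (or fall back to the message ID)
// into a fixed number of buckets; each bucket range is then mapped onto one broker shard.
public class DestinationSharder {
    private final int buckets;

    public DestinationSharder(int buckets) {
        this.buckets = buckets;
    }

    public int bucketFor(String groupId, String messageId) {
        String key = (groupId != null && !groupId.isEmpty()) ? groupId : messageId;
        return Math.floorMod(key.hashCode(), buckets);
    }
}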

support HTTP / webhook style subscriptions

It'd be great if we could support clients subscribing to destinations via an HTTP POST to some endpoint. So a regular REST endpoint could subscribe to a destination's messages using annotations on the pod.

This would make it easy for developers to write simple REST endpoints which can work with reliable messaging.

Then the gateway (or some other microservice) would consume on the pod's behalf and, when a message is received, it would invoke the REST endpoint after converting the message to the given payload and HTTP headers - e.g. convert the payload to JSON / YAML / XML and convert the JMS headers into HTTP headers. If the REST endpoint returns a 2XX code that acts as an ACK; otherwise the request is retried per the usual retry policy.

Note that the REST services should ideally be idempotent, as there is a chance of duplicates if the message gateway fails after the REST service returns a 2XX but before the message is acked, or if the REST service fails after sending a 2XX code but before the gateway sees the response.

Note that it might be easiest to implement this generically using Camel like this:
fabric8io/fabric8#5762
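
For reference, a minimal non-Camel sketch of the dispatch step, using plain JMS and HttpURLConnection; the header mapping shown is just one possible convention, not an agreed format:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import javax.jms.Message;
import javax.jms.TextMessage;

// Hypothetical webhook dispatcher: the gateway consumes on the subscriber's behalf, POSTs
// the payload to the registered endpoint, and treats any 2xx response as the ACK.
public class WebhookDispatcher {
    // Returns true if the message should be acknowledged, false to trigger redelivery.
    public boolean dispatch(Message message, String endpointUrl) {
        try {
            String body = (message instanceof TextMessage)
                    ? ((TextMessage) message).getText()
                    : "";                                              // other payload types would need converting
            HttpURLConnection connection = (HttpURLConnection) new URL(endpointUrl).openConnection();
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);
            connection.setRequestProperty("Content-Type", "application/json");
            connection.setRequestProperty("JMSMessageID", message.getJMSMessageID());   // JMS header -> HTTP header
            try (OutputStream out = connection.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            int status = connection.getResponseCode();
            return status >= 200 && status < 300;                      // a 2XX acts as the ACK
        } catch (Exception e) {
            return false;                                              // redeliver per the retry policy
        }
    }
}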

api-registry pod can't be started

The api-registry can't be started, and this error is reported continuously in the OpenShift log:

Image(s): fabric8/api-registry:2.2.93

Mar 10 18:19:19 vagrant.f8 openshift[1024]: I0310 18:19:19.870444    1024 manager.go:2079] Back-off 2m40s restarting failed container=api-registry pod=api-registry-itncw_default(7d30f56e-e6eb-11e5-a35d-080027b5c2f4)
Mar 10 18:19:19 vagrant.f8 openshift[1024]: E0310 18:19:19.883245    1024 pod_workers.go:125] Error syncing pod 7d30f56e-e6eb-11e5-a35d-080027b5c2f4, skipping: failed to "StartContainer" for "api-registry" witth CrashLoopBackOff: "B

API registry supposed to be working?

Before opening issues, I wonder whether the ApiRegistry microservice is supposed to be working? I ask because I get strange protocol errors which suggest that there have not been updates here recently.

Add auto registration bridge k8s->apiman

The ServiceDeveloper can set a new annotation on the service

api.service.openshift.io/api-manager

which, if set and if the value is apiman, will cause the service to auto-register itself into apiman.
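
A rough sketch of the bridge side, using the fabric8 kubernetes-client to scan for the annotation; registerIntoApiman is a placeholder for whatever the real registration call would be:

import java.util.List;
import java.util.Map;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.KubernetesClient;

// Hypothetical auto-registration scan: find services carrying the annotation with the
// value "apiman" and hand them to a (made-up) registration hook.
public class ApimanAutoRegistrar {
    static final String ANNOTATION = "api.service.openshift.io/api-manager";

    public void scan(KubernetesClient client, String namespace) {
        List<Service> services = client.services().inNamespace(namespace).list().getItems();
        for (Service service : services) {
            Map<String, String> annotations = service.getMetadata().getAnnotations();
            if (annotations != null && "apiman".equals(annotations.get(ANNOTATION))) {
                registerIntoApiman(service);   // placeholder for the actual apiman registration call
            }
        }
    }

    void registerIntoApiman(Service service) {
        System.out.println("Would register " + service.getMetadata().getName() + " into apiman");
    }
}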

[apiman] endpoint address defaults to localhost for services outside of current namespace on OSE

APIMan deployed on OSE 3.1

When importing a service that is outside the current APIMan namespace, it fails to resolve DNS and defaults to localhost. It should probably look for the route associated with the service and use its host attribute.

11:37:45,031  WARN Could not resolve DNS for sso-app, trying ENV settings next.
11:37:45,031  INFO Defaulting sso-app host to: localhost
11:37:45,031  INFO Defaulting sso-app port to: 8080
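
A sketch of the suggested fallback, using the fabric8 OpenShift client to look up the route for the service; this assumes the route shares the service's name, which may not always hold:

import io.fabric8.openshift.api.model.Route;
import io.fabric8.openshift.client.OpenShiftClient;

// Hypothetical resolution fallback: when DNS for a service in another namespace cannot be
// resolved, look up the route exposing it and use the route's host instead of localhost.
public class EndpointHostResolver {
    public String resolveHost(OpenShiftClient client, String namespace, String serviceName,
                              String fallbackHost) {
        Route route = client.routes().inNamespace(namespace).withName(serviceName).get();
        if (route != null && route.getSpec() != null && route.getSpec().getHost() != null) {
            return route.getSpec().getHost();   // e.g. the externally routable host for sso-app
        }
        return fallbackHost;                    // keep the existing localhost default as a last resort
    }
}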

Failed to start fabric8mq

After starting Fabric8MQ from the fabric8 console, the pod has the following logs:

 No access restrictor found, access to all MBean is allowed
Jolokia: Agent started with URL http://172.17.0.21:8778/jolokia/
WELD-000900: 2.3.1 (Final)
Nov 19, 2015 7:52:16 PM org.apache.deltaspike.core.util.ProjectStageProducer initProjectStage
INFO: Computed the following DeltaSpike ProjectStage: Production
WELD-001208: Error when validating jar:file:/app/fabric8-vertx-2.2.63.jar!/META-INF/beans.xml@22 against xsd. cvc-complex-type.3.2.2: Attribute 'version' is not allowed to appear in element 'beans'.
WELD-000101: Transactional services not available. Injection of @Inject UserTransaction not available. Transactional observers will be invoked synchronously.
WELD-001700: Interceptor annotation class javax.ejb.PostActivate not found, interception based on it is not enabled
WELD-001700: Interceptor annotation class javax.ejb.PrePassivate not found, interception based on it is not enabled
WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.interceptor.GlobalInterceptorExtension.promoteInterceptors(@Observes ProcessAnnotatedType, BeanManager) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.exclude.extension.ExcludeExtension.vetoBeans(@Observes ProcessAnnotatedType, BeanManager) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
WELD-000411: Observer method [BackedAnnotatedMethod] protected org.apache.deltaspike.core.impl.message.MessageBundleExtension.detectInterfaces(@Observes ProcessAnnotatedType) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
Nov 19, 2015 7:52:21 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.jmx.MBeanExtension activated=true
Nov 19, 2015 7:52:21 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.config.ConfigurationExtension activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.exception.control.extension.ExceptionControlExtension activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.message.MessageBundleExtension activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.interceptor.GlobalInterceptorExtension activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.exclude.extension.ExcludeExtension activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.exclude.CustomProjectStageBeanFilter activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.exclude.GlobalAlternative activated=true
Nov 19, 2015 7:52:22 PM org.apache.deltaspike.core.util.ClassDeactivationUtils cacheResult
INFO: class: org.apache.deltaspike.core.impl.scope.DeltaSpikeContextExtension activated=true
WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
Waiting for ReplicationController amqbroker to start
Waiting for ReplicationController amqbroker to start
Waiting for ReplicationController amqbroker to start
Waiting for ReplicationController amqbroker to start
Waiting for ReplicationController amqbroker to start
Waiting for ReplicationController amqbroker to start

Are there any additional missing steps?

support rebalancing of destinations to brokers

After we scale up new message brokers, we'll have older brokers with loads of destinations while new brokers get very few.

It would be nice to start rebalancing, e.g. electing a leader message gateway that makes rebalancing decisions and moves destinations to different brokers.

support better load balancing of destinations over brokers

When we allocate a destination to a broker it would be nice to use a fairer load balancing technique, e.g. some kind of load calculation (message size on existing queues/topics maybe?) to find the least loaded broker to use. Worst case we should pick the one with the lowest destination count; but load on a destination will vary wildly, so using a better metric would be good.
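
A minimal sketch of that allocation decision, assuming the gateway can obtain a pending-bytes figure per broker from somewhere (all names are illustrative):

import java.util.Map;

// Hypothetical allocator: score each broker by total pending message bytes across its
// destinations and place the new destination on the least loaded broker.
public class BrokerAllocator {
    public String pickLeastLoaded(Map<String, Long> pendingBytesByBroker) {
        String best = null;
        long bestLoad = Long.MAX_VALUE;
        for (Map.Entry<String, Long> entry : pendingBytesByBroker.entrySet()) {
            if (entry.getValue() < bestLoad) {
                best = entry.getKey();
                bestLoad = entry.getValue();
            }
        }
        return best;   // null if no brokers are known yet
    }
}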

[apiman] API Catalog should hide apiman and support services by default

When a Service Import is about to happen, the list also suggests the apiman, gateway and elasticsearch services, which are probably not meant to be exposed. We thus propose to:

  • annotate such services with a special annotation
  • provide a checkbox on the API Catalog page that is unchecked by default and hides such services; when checked, it will show all services (see the sketch below)
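
A small sketch of the filtering step, assuming a made-up annotation name for marking services as hidden (the real annotation would be whatever the first bullet ends up defining):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import io.fabric8.kubernetes.api.model.Service;

// Hypothetical catalog filter: hide any service carrying the "hide" annotation unless the
// "show all services" checkbox is ticked.
public class CatalogFilter {
    static final String HIDE_ANNOTATION = "apiman.io/hide-from-catalog";   // illustrative name only

    public List<Service> visibleServices(List<Service> all, boolean showAll) {
        if (showAll) {
            return all;
        }
        return all.stream()
                .filter(service -> {
                    Map<String, String> annotations = service.getMetadata().getAnnotations();
                    return annotations == null || !"true".equals(annotations.get(HIDE_ANNOTATION));
                })
                .collect(Collectors.toList());
    }
}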
