apache / camel-k
Apache Camel K is a lightweight integration platform, born on Kubernetes, with serverless superpowers
Home Page: https://camel.apache.org/camel-k
License: Apache License 2.0
We currently store the integration code inside the Integration CRD, but it would be better to have a default strategy that stores it in a configmap and mounts it into the runtime pod.
We should document the options and scenarios available to an end user (starting from running "kamel run Source.java").
There's no embedded Docker registry. Are we going to create one?
Deployment through a builder sidecar is not a good option in a serverless environment
The general way of doing builds expects a build phase where a docker image is generated.
In some cases, when there are no additional dependencies to load with respect to "camel-core" (or with respect to a "prebuilt context" we will add in the future), there's no need to build a Docker image and we can run the integration directly.
We can mount the source file as a configmap and load it directly in the runtime.
File mounting will also be the preferred strategy when dealing with ".js" files (independently of the build phase).
For the first stage, we can add a new strategy to the CR. When selected, the build will be performed without the source file, which will be added later at pod startup (e.g. mounted via a configmap). Later on, we will add contexts to skip the build phase.
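As an illustrative sketch (all names and paths here are hypothetical, not actual operator output), the configmap-based strategy could look like this in plain Kubernetes manifests:

```yaml
# Hypothetical manifests: store the integration source in a ConfigMap...
apiVersion: v1
kind: ConfigMap
metadata:
  name: routes-source            # illustrative name
data:
  Source.java: |
    // integration code goes here
---
# ...and mount it into the runtime pod at startup
apiVersion: v1
kind: Pod
metadata:
  name: integration-runtime      # illustrative name
spec:
  containers:
  - name: runtime
    image: camel-k-runtime       # illustrative image
    volumeMounts:
    - name: source
      mountPath: /etc/camel      # illustrative mount path
  volumes:
  - name: source
    configMap:
      name: routes-source
```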
| [INFO] ------------------------------------------------------------------------
| Downloading: https://repo.maven.apache.org/maven2/org/apache/camel/camel-groovy/2.22.1/camel-groovy-2.22.1.pom
| [INFO] ------------------------------------------------------------------------
| [INFO] BUILD FAILURE
| [INFO] ------------------------------------------------------------------------
| [INFO] Total time: 1.163 s
| [INFO] Finished at: 2018-09-17T13:12:44+00:00
| [INFO] Final Memory: 13M/293M
| [INFO] ------------------------------------------------------------------------
| [ERROR] Failed to execute goal on project camel-k-integration: Could not resolve dependencies for project org.apache.camel.k.integration:camel-k-integration:jar:0.0.2-SNAPSHOT: Failed to collect dependencies at org.apache.camel:camel-groovy:jar:2.22.1: Failed to read artifact descriptor for org.apache.camel:camel-groovy:jar:2.22.1: Could not transfer artifact org.apache.camel:camel-groovy:pom:2.22.1 from/to central (https://repo.maven.apache.org/maven2): handshake alert: unrecognized_name -> [Help 1]
| org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal on project camel-k-integration: Could not resolve dependencies for project org.apache.camel.k.integration:camel-k-integration:jar:0.0.2-SNAPSHOT: Failed to collect dependencies at org.apache.camel:camel-groovy:jar:2.22.1
| at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:221)
| at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.resolveProjectDependencies(LifecycleDependencyResolver.java:127)
| at org.apache.maven.lifecycle.internal.MojoExecutor.ensureDependenciesAreResolved(MojoExecutor.java:257)
| at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:200)
| at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
| at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
| at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
| at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
| at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
| at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
| at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
| at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
| at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
| at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
| at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
| at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
| at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
| at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
| at java.lang.reflect.Method.invoke(Method.java:498)
| at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
| at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
| at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
| at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
| Caused by: org.apache.maven.project.DependencyResolutionException: Could not resolve dependencies for project org.apache.camel.k.integration:camel-k-integration:jar:0.0.2-SNAPSHOT: Failed to collect dependencies at org.apache.camel:camel-groovy:jar:2.22.1
| at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:180)
| at org.apache.maven.lifecycle.internal.LifecycleDependencyResolver.getDependencies(LifecycleDependencyResolver.java:195)
| ... 23 more
| Caused by: org.eclipse.aether.collection.DependencyCollectionException: Failed to collect dependencies at org.apache.camel:camel-groovy:jar:2.22.1
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:291)
| at org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies(DefaultRepositorySystem.java:316)
| at org.apache.maven.project.DefaultProjectDependenciesResolver.resolve(DefaultProjectDependenciesResolver.java:172)
| ... 24 more
| Caused by: org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for org.apache.camel:camel-groovy:jar:2.22.1
| at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:302)
| at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:218)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.resolveCachedArtifactDescriptor(DefaultDependencyCollector.java:535)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.getArtifactDescriptorResult(DefaultDependencyCollector.java:519)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:409)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.processDependency(DefaultDependencyCollector.java:363)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:351)
| at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:254)
| ... 26 more
| Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact org.apache.camel:camel-groovy:pom:2.22.1 from/to central (https://repo.maven.apache.org/maven2): handshake alert: unrecognized_name
| at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)
| at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
| at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:223)
| at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:287)
| ... 33 more
| Caused by: org.eclipse.aether.transfer.ArtifactTransferException: Could not transfer artifact org.apache.camel:camel-groovy:pom:2.22.1 from/to central (https://repo.maven.apache.org/maven2): handshake alert: unrecognized_name
| at org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:43)
| at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:355)
| at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
| at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:581)
| at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:249)
| at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:520)
| at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:421)
| ... 36 more
| Caused by: org.apache.maven.wagon.TransferFailedException: handshake alert: unrecognized_name
| at org.apache.maven.wagon.providers.http.AbstractHttpClientWagon.fillInputData(AbstractHttpClientWagon.java:1066)
| at org.apache.maven.wagon.providers.http.AbstractHttpClientWagon.fillInputData(AbstractHttpClientWagon.java:960)
| at org.apache.maven.wagon.StreamWagon.getInputStream(StreamWagon.java:116)
| at org.apache.maven.wagon.StreamWagon.getIfNewer(StreamWagon.java:88)
| at org.apache.maven.wagon.StreamWagon.get(StreamWagon.java:61)
| at org.eclipse.aether.transport.wagon.WagonTransporter$GetTaskRunner.run(WagonTransporter.java:560)
| at org.eclipse.aether.transport.wagon.WagonTransporter.execute(WagonTransporter.java:427)
| at org.eclipse.aether.transport.wagon.WagonTransporter.get(WagonTransporter.java:404)
| at org.eclipse.aether.connector.basic.BasicRepositoryConnector$GetTaskRunner.runTask(BasicRepositoryConnector.java:447)
| at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:350)
| ... 41 more
| Caused by: javax.net.ssl.SSLProtocolException: handshake alert: unrecognized_name
| at sun.security.ssl.ClientHandshaker.handshakeAlert(ClientHandshaker.java:1446)
| at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2026)
| at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1135)
| at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
| at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
| at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
| at org.apache.maven.wagon.providers.http.httpclient.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:275)
| at org.apache.maven.wagon.providers.http.httpclient.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:254)
| at org.apache.maven.wagon.providers.http.httpclient.impl.conn.HttpClientConnectionOperator.connect(HttpClientConnectionOperator.java:123)
| at org.apache.maven.wagon.providers.http.httpclient.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:318)
| at org.apache.maven.wagon.providers.http.httpclient.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:363)
| at org.apache.maven.wagon.providers.http.httpclient.impl.execchain.MainClientExec.execute(MainClientExec.java:219)
| at org.apache.maven.wagon.providers.http.httpclient.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
| at org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec.execute(RetryExec.java:86)
| at org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
| at org.apache.maven.wagon.providers.http.httpclient.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
| at org.apache.maven.wagon.providers.http.httpclient.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
| at org.apache.maven.wagon.providers.http.AbstractHttpClientWagon.execute(AbstractHttpClientWagon.java:832)
| at org.apache.maven.wagon.providers.http.AbstractHttpClientWagon.fillInputData(AbstractHttpClientWagon.java:983)
| ... 50 more
| [ERROR]
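For reference, this "handshake alert: unrecognized_name" failure is a known Java SNI quirk, not a Camel K bug; a commonly cited workaround (worth verifying in your environment before relying on it) is to disable the SNI extension when invoking Maven:

```shell
# Known JVM workaround for "handshake alert: unrecognized_name":
# disable the SNI extension for the Maven process, then re-run the build.
export MAVEN_OPTS="-Djsse.enableSNIExtension=false"
echo "$MAVEN_OPTS"
```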
We need to embed the Camel catalog. The first simple use case is to provide auto-completion when running an integration.
The kamel run command determines the integration name from the integration file name, but it would be nice to let the user set the name of the integration with an additional flag.
If trying to execute kamel install twice, the second installation fails with an issue while trying to update the service:
Service "camel-k-operator" is invalid: [metadata.resourceVersion: Invalid value: "": must be specified for an update, spec.clusterIP: Invalid value: "": field is immutable]
This is because the service gets assigned, after creation, a cluster IP that is not editable. We should either avoid replacing the cluster IP or avoid replacing the service if already present.
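One way to implement the first option is to copy the server-assigned, immutable fields from the live object into the desired one before issuing the update. A minimal sketch with simplified local types (not the actual operator code or the real Kubernetes API structs):

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes Service types (illustrative only).
type ServiceSpec struct {
	ClusterIP string
	Ports     []int
}

type Service struct {
	ResourceVersion string
	Spec            ServiceSpec
}

// prepareUpdate copies the server-assigned, immutable fields from the live
// service into the desired one, so the update no longer tries to change them.
func prepareUpdate(live, desired Service) Service {
	desired.ResourceVersion = live.ResourceVersion
	desired.Spec.ClusterIP = live.Spec.ClusterIP
	return desired
}

func main() {
	live := Service{ResourceVersion: "42", Spec: ServiceSpec{ClusterIP: "10.0.0.5", Ports: []int{80}}}
	desired := Service{Spec: ServiceSpec{Ports: []int{8080}}}
	updated := prepareUpdate(live, desired)
	fmt.Println(updated.ResourceVersion, updated.Spec.ClusterIP, updated.Spec.Ports)
}
```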
@dmvolod can you take a look?
If we postpone the compilation of the source file until after the image build, we may not be able to detect failures such as compilation errors.
We should improve the monitoring phase to detect cases where the pod is failing and also add health checks to the runtime, so that we can detect when the pod is completely ready.
It is installed together with the operator and contains the global configuration. Each resource created by the operator (Integration, Integration Context) should be a child of the top level resource, so that we can retrieve the global config when processing them.
A side effect is that we can install more Camel K instances in a namespace. In the future we should make sure the work of the operator is minimal (offload builds to a builder pod).
This is compatible with OLM: https://github.com/operator-framework/operator-lifecycle-manager
People can install a "Camel" instance alone (maybe we should find a better name for the top level resource) in a namespace (it will install the CR + operator), or install an "Integration" directly: the "Integration" can be set to depend on a "Camel" resource, so the installer will set up everything needed for the integration to run.
| at org.joor.Reflect.on(Reflect.java:781)
| at org.joor.Reflect.call(Reflect.java:463)
| at org.joor.Compile.compile(Compile.java:73)
| at org.joor.Reflect.compile(Reflect.java:77)
| at org.apache.camel.k.jvm.RoutesLoaders$2.load(RoutesLoaders.java:90)
| at org.apache.camel.k.jvm.Application.main(Application.java:34)
| Caused by: java.lang.reflect.InvocationTargetException
| at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
| at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
| at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
| at java.lang.reflect.Method.invoke(Method.java:498)
| at org.joor.Reflect.on(Reflect.java:777)
| ... 5 more
| Caused by: java.lang.NoClassDefFoundError: etc/camel/Sample (wrong name: Sample)
| at java.lang.ClassLoader.defineClass1(Native Method)
| at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
| at java.lang.ClassLoader.defineClass(ClassLoader.java:642)
| ... 10 more
The kamel run command supports setting the language used for the routes, but this is problematic if we want to load multiple routes that may have been developed in different languages.
Remove the command option and determine the language from the file extension.
Route loaders determine the language by looking at the file extension only, so any value set via the kamel run command is discarded.
We should document the architecture:
Linked to #73. The operator will process "Camel" resources and ensure that they have one "root" "IntegrationContext". That in turn will trigger an imagestream build immediately after installation.
This is the first "automatically-created" context, and we should disallow edits to it from the kamel tool (there'll be an annotation for marking it) and discourage people from changing it.
Run 'kamel install --example' to automatically apply deploy/cr.yaml on install.
@nicolaferraro what do you think about this?
A common set of Camel K related metrics could be exposed to Prometheus for monitoring purposes.
Besides, it would be cool if it were possible to define some custom Prometheus metrics directly in the custom resource.
Probably the JMX Prometheus Java agent would have to be enabled (configurably), and fragments would look like:
rules:
- pattern: 'fis.metrics<name=os.(.*)><>(.+):'
name: os_$1
help: some help
- pattern: 'org.apache.camel<context=camel, type=routes, name=\"(.*)\"><>LastProcessingTime'
name: camel_last_processing_time
help: Last Processing Time [milliseconds]
type: GAUGE
labels:
route: $1
When the user executes "kamel run Source.java", they might be using some components not available in camel-core.
In this case, the "kamel" script should create the resource as usual (do not put too much logic into the client), while the operator, in the "Initialize" action, should scan the source file to find declarations of external Maven dependencies.
We can use a simplified way to declare dependencies, such as declaring them in a comment, e.g.
// k-include: camel-mail
This way we let the user specify everything in a single file. We can switch to something better later (best if recognized by the tooling).
The initializer will add "org.apache.camel:camel-mail" to the list of dependencies in the CR (custom resource).
We should enable the following ways to include dependencies:
kamel run --dependency camel-mail Source.java
We should also allow disabling the auto-discovery via a flag, e.g. in "spec->dependencies->autoDiscovery", controlled by a "kamel --auto-dependencies false" client flag.
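The comment-based discovery described above could be sketched like this (the "k-include" syntax is the proposal from this issue; the mapping of short names to "org.apache.camel" coordinates is an assumption for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// kIncludeRe matches the proposed "// k-include: <artifact>" comment syntax
// for declaring extra dependencies inside the integration source file.
var kIncludeRe = regexp.MustCompile(`(?m)^\s*//\s*k-include:\s*(\S+)`)

// discoverDependencies returns the Maven coordinates declared via k-include
// comments, assuming short names map to org.apache.camel artifacts.
func discoverDependencies(source string) []string {
	deps := []string{}
	for _, m := range kIncludeRe.FindAllStringSubmatch(source, -1) {
		deps = append(deps, "org.apache.camel:"+m[1])
	}
	return deps
}

func main() {
	src := "// k-include: camel-mail\nfrom(\"timer:tick\").to(\"log:info\");"
	fmt.Println(discoverDependencies(src)) // [org.apache.camel:camel-mail]
}
```

The initializer would then append the discovered coordinates to the dependency list in the CR before the build starts.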
After #76 is merged the process should work, because all integrations are monitored for changes every 5 seconds. But, to improve speed, when a context gets into the "ready" state, we should trigger an update of the "affected" integrations to completely remove dead times. Or even updating "all" integrations is OK.
It would be nice to analyze an environment variable (DEBUG=true or another one) and run Maven with the '-X' or '-e' options.
We should add to the documentation a high level roadmap of the project (see https://github.com/nicolaferraro/integration-operator).
We should change the Integration build process to always rely on an IntegrationContext. The context can be the "root" context (#74) or another "platform" context created automatically.
Hi,
I am having difficulty understanding how to tell when a Component (DefaultComponent) is deployed (NEW) versus when it is just restarted or undeployed (REMOVED completely). Could someone explain a bit?
Many thanks, Kyle
We can speed up the build time by triggering a context update when a related S2I build finishes, without waiting for the 5-second loop time.
A similar thing is done in the context: when the context is marked as ready, it already triggers an update on all related integrations.
It would be nice to have an uninstall command that removes all Camel K artifacts (cluster-wide and project-related). This will be useful for development and update procedures.
Reported by: @dmvolod
We should add a flag to "kamel run" to wait for the deployment to be completed before returning ("--wait").
We should add a "kamel logs" command to display an integration's logs. It's a shorthand for "kubectl logs" and "kubectl logs -f", but Camel K will direct those commands to the right pod/container using the integration name.
What is a trait?
It's a high-level "feature" of Camel K, but if I say "feature" you think of a completely different thing in the Camel/Karaf world, so let's call it "trait" for now.
Some examples of traits I have in mind:
So traits are complex features that a user can enable/disable, or in some cases also configure. They're like "enrichers"/"generators" in the fabric8 maven plugin (for those who know it). But unlike f-m-p, we should document them.
When a trait is not explicitly enabled/disabled by the user, Camel K tries to determine the best configuration of each trait for the application. E.g. the "rest" trait can be enabled automatically when I'm using rest in my integration; the "cron" trait can be enabled if I have a single route that starts with a timer (and a configured long delay), and it needs a specific configuration so that the engine can trigger the integration correctly.
Let's discuss this.
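The auto-detection idea above could be modeled with a small interface; this is purely a discussion sketch (the interface name, the naive string-based detection, and the restTrait type are all hypothetical, not actual Camel K code):

```go
package main

import (
	"fmt"
	"strings"
)

// Trait is a hypothetical interface for the "trait" concept: a named feature
// that can decide on its own whether it applies to a given integration.
type Trait interface {
	Name() string
	// AppliesTo inspects the integration source and reports whether the
	// trait should be enabled automatically.
	AppliesTo(source string) bool
}

// restTrait enables itself when the integration appears to use the rest DSL
// (a deliberately naive detection, for illustration only).
type restTrait struct{}

func (restTrait) Name() string { return "rest" }

func (restTrait) AppliesTo(source string) bool {
	return strings.Contains(source, "rest(")
}

func main() {
	src := `rest("/hello").get().to("direct:hello");`
	for _, t := range []Trait{restTrait{}} {
		fmt.Printf("trait %q enabled: %v\n", t.Name(), t.AppliesTo(src))
	}
}
```

A user-supplied enable/disable flag would simply override whatever AppliesTo decides.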
What do you think @lburgazzoli, @dmvolod, @oscerd, @valdar, @onderson?
Is it a good abstraction over the features we've talked about in the dev mailing list?
As explained in the readme, the procedure to do a local build and deploy is complex. We should add a make install-minishift command to simplify it (and update the documentation).
As it looks like Camel-K is the Hawtest way of running Camel in the cloud 😉, I was wondering if the Hawtio console for Kubernetes / OpenShift could provide any Camel-K plugin.
If you have any ideas, suggestions, ..., I'd be glad to work something out!
Here is a screenshot of the discover page:
oc cluster up uses the local Docker repo. In this case an additional docker tag operation is required.
We need to document this or add a step to the build.
Now that we can run kamel run runtime/examples/routes.js -w --logs, I see that we create two pods each time we deploy an integration, not just one.
The output of each pod is prefixed with a progressive number in brackets (e.g. [1]):
integration "routes" created
integration "routes" in phase Building
integration "routes" in phase Deploying
integration "routes" in phase Running
[1] Monitoring pod routes-577bbd8f65-kl7mq
[2] Monitoring pod routes-79c97fdbb-pxcnk
[2] Starting the Java application using /opt/run-java/run-java.sh ...
[2] exec java -javaagent:/opt/jolokia/jolokia.jar=config=/opt/jolokia/etc/jolokia.properties -javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp .:/deployments/* org.apache.camel.k.jvm.Application
[1] Starting the Java application using /opt/run-java/run-java.sh ...
[1] exec java -javaagent:/opt/jolokia/jolokia.jar=config=/opt/jolokia/etc/jolokia.properties -javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp .:/deployments/* org.apache.camel.k.jvm.Application
[1] I> No access restrictor found, access to any MBean is allowed
Something like:
kamel run --dependency something Source.java
Dependencies should be provided in the form of a URI, like:
mvn:org.apache.camel/camel-mail/2.20.1
camel:mail
file:/path/to/dep.jar?id=my.company:my-dep:version
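A resolver for those three URI forms could be sketched as follows; the normalization rules (mvn: path separators, expanding camel:mail to org.apache.camel:camel-mail) are assumptions for illustration, and the real implementation may handle versions and schemes differently:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveDependency normalizes the proposed dependency URI forms into a
// Maven-style coordinate (sketch only; actual resolution rules may differ).
func resolveDependency(uri string) (string, error) {
	switch {
	case strings.HasPrefix(uri, "mvn:"):
		// mvn:group/artifact/version -> group:artifact:version
		return strings.ReplaceAll(strings.TrimPrefix(uri, "mvn:"), "/", ":"), nil
	case strings.HasPrefix(uri, "camel:"):
		// camel:mail -> org.apache.camel:camel-mail (version chosen elsewhere)
		return "org.apache.camel:camel-" + strings.TrimPrefix(uri, "camel:"), nil
	case strings.HasPrefix(uri, "file:"):
		// local jar, passed through as-is
		return uri, nil
	default:
		return "", fmt.Errorf("unknown dependency scheme: %s", uri)
	}
}

func main() {
	for _, u := range []string{"mvn:org.apache.camel/camel-mail/2.20.1", "camel:mail"} {
		dep, _ := resolveDependency(u)
		fmt.Println(dep)
	}
}
```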
The kamel run command builds the Docker image of the Camel application with the help of a BuildConfig, then updates the ImageStream and finally triggers a redeployment of the pod using the DeploymentConfig.
In order to bootstrap/speed up the process, I would like to propose adding a new --mode=dev parameter to the run command (or using an arg, ...), responsible for installing a DeploymentConfig, pushing the project (source, binary) and launching it.
This approach relies on a container running a Go supervisord which allows configuring different commands to start/stop/test/debug...
Here is a prototype project which was created for Spring Boot and could of course be adapted for Camel's needs ;-)
I'm linking the issue created by @objectiser in the original repo: nicolaferraro/integration-operator#1.
When things start to stabilize, we should dig into that.
If I execute kamel run -d mvn:dummy:xx:1.2.3 Sample.java --dev, the correct build status (Error) is never reported.
Like: oc create -f cr.yaml
It should be possible to load configurations from a simple properties file
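A minimal parser for such a properties file could look like this (a sketch only: it handles comments, blank lines and '=' separators, but not the full Java .properties escaping rules):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// loadProperties parses a simple Java-style .properties payload into a map.
// Comment lines (#) and blank lines are skipped; values may contain '='.
func loadProperties(content string) map[string]string {
	props := map[string]string{}
	scanner := bufio.NewScanner(strings.NewReader(content))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			props[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return props
}

func main() {
	content := "# runtime settings\ncamel.component.timer.delay = 5000\n"
	fmt.Println(loadProperties(content)["camel.component.timer.delay"])
}
```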
In some cases, for example in disconnected mode, it's impossible to download the image from docker.io.
It would be nice to have an option for 'kamel install' to define the image name, location and version.
@nicolaferraro , what do you think about it?
We can now deploy our routes by defining them in a single file and then using the kamel command like:
kamel run MyRoutes.js
But it would be nice to have kamel support adding multiple routes:
kamel run MyRoute.js MyOtherRoute.js
When we execute the command kamel install to deploy the CustomResourceDefinition, we get this error:
./kamel install
Error: failed to get resource client: failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(camel.apache.org/v1alpha1, Kind=IntegrationContext): no matches for kind "IntegrationContext" in version "camel.apache.org/v1alpha1"
This error is gone when running this command ./kamel install --cluster-setup
I suggest updating the doc and making the reported error more user-friendly.
It would be nice to mix deployment with contexts and #33 to create a "kamel run --dev Source.java" that:
After we embed the camel catalog in #93, we can auto-discover dependencies directly from the code.
It will be a kind of static processing (the project cannot be built without knowing its dependencies), but it will be really useful in "--dev" mode.
We should add a release command that:
Maybe some of those steps will be manual at the beginning 😄
Also add --all to delete all available integrations in a namespace or cluster.
Java integrations using a package declaration (e.g. "package kamel;") cannot be run. The reason is that the builder expects all integrations to be in the root package.
We should remove the package declaration during pre-processing or allow declaring any package.
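The pre-processing option could be as simple as stripping the declaration before compilation; a sketch (the regex-based approach is an assumption, and a real fix might instead honor arbitrary packages):

```go
package main

import (
	"fmt"
	"regexp"
)

// packageRe matches a Java package declaration at the start of a line,
// including its trailing semicolon and whitespace.
var packageRe = regexp.MustCompile(`(?m)^\s*package\s+[\w.]+\s*;\s*`)

// stripPackage removes the package declaration so the builder can keep
// treating every integration as living in the root package.
func stripPackage(source string) string {
	return packageRe.ReplaceAllString(source, "")
}

func main() {
	src := "package kamel;\npublic class Sample {}\n"
	fmt.Print(stripPackage(src))
}
```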