projectriff / system
riff's runtime inside k8s
License: Apache License 2.0
Creating a streaming processor with a non-existent function-ref appears to work, but the controller starts logging an ERROR.
$ riff streaming processor create test \
> --function-ref echo-doesnt-exist \
> --input in \
> --output out
Created processor "test"
ERROR controller-runtime.controller Reconciler error {"controller": "processor", "request": "default/test", "error": "Function.build.projectriff.io \"echo-doesnt-exist\" not found"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
NOTE: Doing the equivalent core deployer create with a non-existent function-ref does not produce any errors.
2019-10-17T12:28:36.765Z INFO controllers.Deployer.tracker tracking resource {"ref": "Function.build.projectriff.io/default/echo-bad", "obj": "default/core-echo-bad", "ttl": "2019-10-17T22:28:36Z"}
2019-10-17T12:28:36.765Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "deployer", "request": "default/core-echo-bad"}
2019-10-17T12:37:04.290Z DEBUG controllers.Deployer.tracker no tracked items found {"ref": "Function.build.projectriff.io/default/echo-doesnt-exist"}
This was observed in a riff setup with the core, knative, and streaming runtimes installed via helm install using a --devel build on Oct 28, 2019.
A random number generator posts a number to an input stream via the streaming gateway every second. The stream is consumed by a simple echo function deployed as a streaming processor.
It appears that keda is scaling this processor down to 0 every 30s or so, even though there is constant activity on the input stream.
echo.js
module.exports = x => {
  console.log('echo', x);
  return x;
}
# deploy gateway (yaml uses riff-system namespace)
kubectl apply -f https://storage.googleapis.com/projectriff/riff-http-gateway/riff-http-gateway-0.5.0-snapshot.yaml
# create in and out streams
riff streaming stream create in --provider franz-kafka-provisioner --content-type application/json
riff streaming stream create out --provider franz-kafka-provisioner --content-type application/json
# build echo function
riff function create echo \
--local-path ~/riff/echo \
--artifact echo.js \
--tail
# deploy echo function as a stream processor
riff streaming processor create echo-out \
--function-ref echo \
--input in \
--output out
# create random number generator from image
riff container create random --image jldec/random:v0.0.2 --tail
# deploy random number generator as core-random using core runtime
riff core deployer create core-random --container-ref random --tail
# (in new terminal) port forward to core-random on port 8081
kubectl port-forward service/`kubectl get service -l core.projectriff.io/deployer=core-random -o jsonpath='{.items[0].metadata.name}'` 8081:80
# configure random to send a number to the in stream every second via the http-gateway
curl http://localhost:8081/ -w '\n' \
-H 'Content-Type: application/json' \
-d '{"url":"http://riff-streaming-http-gateway.riff-system/default/in"}'
# monitor the input stream using the liiklus client - it should show a new number on the stream every second. The name of the Kafka provider is franz
# (in new terminal) port forward to kafka-provider-liiklus on port 6565
kubectl port-forward service/`kubectl get service -l streaming.projectriff.io/kafka-provider-liiklus=franz -o jsonpath='{.items[0].metadata.name}'` 6565:6565
# (in a new terminal) watch in the input stream
# (this assumes that the liiklus client has been built at ~/riff/liiklus-client)
java -jar ~/riff/liiklus-client/target/liiklus-client-1.0.0-SNAPSHOT.jar --consumer localhost:6565 default_in
Sample output from the keda operator:
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:55:21Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:55:22Z" level=info msg="Successfully updated deployment (default/echo-out-processor-jxbw8) from 0 to 1 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:55:58Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:55:59Z" level=info msg="Successfully updated deployment (default/echo-out-processor-jxbw8) from 0 to 1 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:56:34Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:56:35Z" level=info msg="Successfully updated deployment (default/echo-out-processor-jxbw8) from 0 to 1 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:57:25Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:57:27Z" level=info msg="Successfully updated deployment (default/echo-out-processor-jxbw8) from 0 to 1 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:58:30Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:58:31Z" level=info msg="Successfully updated deployment (default/echo-out-processor-jxbw8) from 0 to 1 replicas"
keda/keda-operator-7667cb476f-g59r5[keda-operator]: time="2019-10-28T13:59:10Z" level=info msg="Successfully scaled deployment (default/echo-out-processor-jxbw8) to 0 replicas"
For example, one role may provide write access to riff resources, and another role provides read-only access.
This metadata now exists within the binding and should be consumed via the binding.
Depends on #198
This will let invokers:
Currently we force the use of port 8080.
I'm seeing this recurring error in the log from the streaming-controller-manager when I wire a simple echo function as a stream processor from an input to an output stream.
The function appears to be operating properly on the stream despite the error which repeats in the log every few minutes.
~/riff/echo/echo.js
module.exports = x => {
  console.log('echo', x);
  return x;
}
create the echo function
riff function create echo \
--local-path ~/riff/echo \
--artifact echo.js \
--tail
wire the in and out streams via a test processor
riff streaming stream create in --provider franz-kafka-provisioner --content-type application/json
riff streaming stream create out --provider franz-kafka-provisioner --content-type application/json
riff streaming processor create test \
--function-ref echo \
--input in \
--output out
Error from kubectl logs -f riff-streaming-controller-manager-... -c manager -n riff-system:
ERROR controller-runtime.controller Reconciler error {"controller": "processor", "request": "default/test", "error": "ScaledObject.keda.k8s.io \"test-processor\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"keda.k8s.io/v1alpha1\", \"kind\":\"ScaledObject\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2019-10-17T11:54:01Z\", \"generation\":1, \"labels\":map[string]interface {}{\"deploymentName\":\"test-processor\", \"streaming.projectriff.io/processor\":\"test\"}, \"name\":\"test-processor\", \"namespace\":\"default\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"apiVersion\":\"streaming.projectriff.io/v1alpha1\", \"blockOwnerDeletion\":true, \"controller\":true, \"kind\":\"Processor\", \"name\":\"test\", \"uid\":\"69081ea3-f0d4-11e9-a9b7-025000000001\"}}, \"uid\":\"cf393560-f0d4-11e9-a9b7-025000000001\"}, \"spec\":map[string]interface {}{\"cooldownPeriod\":30, \"maxReplicaCount\":interface {}(nil), \"minReplicaCount\":interface {}(nil), \"pollingInterval\":1, \"scaleTargetRef\":map[string]interface {}{\"containerName\":\"\", \"deploymentName\":\"test-processor\"}, \"triggers\":[]interface {}{map[string]interface {}{\"metadata\":map[string]interface {}{\"address\":\"franz-kafka-liiklus.default:6565\", \"group\":\"test\", \"topic\":\"default_in\"}, \"name\":\"\", \"type\":\"liiklus\"}}}, \"status\":map[string]interface {}{\"currentReplicas\":0, \"desiredReplicas\":0}}: validation failure list:\nspec.maxReplicaCount in body must be of type integer: \"null\"\nspec.minReplicaCount in body must be of type integer: \"null\""}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
This repo currently hosts three runtimes in a single controller. It should be possible to install/uninstall any runtime without impacting the others. Currently the controller sniffs for Knative Serving to decide whether to load the knative runtime. This approach has a few issues:
The easy solution is to create a new controller for each runtime that has complete ownership over its CRDs and dependencies.
In the streaming runtime, the following 4 commands are required so that a function can consume 2 streams and write to a third:
riff streaming stream create numbers --provider kafka-provider --content-type application/json
riff streaming stream create letters --provider kafka-provider --content-type text/plain
riff streaming stream create repeated --provider kafka-provider
riff streaming processor create repeater-processor --function-ref repeater \
--input numbers --input letters \
--output repeated \
--content-type application/json
Drawing inspiration from the SCDF DSL, we could define a new command like:
riff streaming flow create myflow \
--definition 'kafka-provider > numbers,letters | repeater | kafka-provider > repeated' \
--content-type=application/json
This command is equivalent to the above 4 commands and would just delegate to the existing commands, creating streams and processors.
We shouldn't require that processors reference a function. It must be possible to provide a pre-existing image and run it as a processor. This will be critical for users who have their own CD workflows or have acceptance tests for build images that need to pass before they go into production.
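A hypothetical shape for such a spec, sketched in Go (the field names are illustrative, not the repo's actual API), would make the function reference and the image mutually exclusive:

// Sketch only: exactly one of FunctionRef or Image would be set.
type ProcessorSpec struct {
    // FunctionRef names a Function whose latest built image is run.
    FunctionRef string `json:"functionRef,omitempty"`
    // Image runs a pre-built container directly, bypassing the build.
    Image string `json:"image,omitempty"`
    // Inputs and Outputs name the streams the processor is wired to.
    Inputs  []string `json:"inputs"`
    Outputs []string `json:"outputs"`
}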
One of the main benefits of the Knative runtime over the Core runtime is that it provides ingress routing to workloads. We should fill that gap.
Design principles:
Other things to keep in mind:
I am seeing intermittent failures in fats:
Internal error occurred: failed calling webhook "functions.build.projectriff.io": Post https://riff-build-webhook-service.riff-system.svc:443/mutate-build-projectriff-io-v1alpha1-function?timeout=30s: dial tcp 10.103.232.53:443: connect: connection refused
potentially due to webhooks crashing. Full run at: https://dev.azure.com/projectriff/projectriff/_build/results?buildId=3187&view=logs&j=3b0a35bd-29b3-584b-7c25-cc0a75f2effd&t=7f4988a7-73b7-53d3-17dd-2c40b68b0ba6&l=1332
Take Stream.Spec.contentType as an example: even though there is a call to SetDefault, the only thing that is ever updated is the Status subresource. We need to revisit this pattern, which I assume we applied across the whole codebase.
Streams are like a database in that they can be bound to a workload in order to read/write messages to/from the stream. Following the Binding spec from CNB, we can create a ConfigMap and Secret for each stream that contains the appropriate metadata to be bound into a target workload.
ConfigMap properties:
- kind: 'streams.streaming.projectriff.io'
- provider: stream.name
- tags: empty

Secret properties:
- gateway: stream.status.address.gateway
- topic: stream.status.address.topic
The act of binding the ConfigMap/Secret to a workload is out of scope for this work.
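A minimal sketch of that projection in Go, assuming this repo's Stream type plus the usual corev1/metav1 imports; the object names are illustrative:

func streamBinding(stream *streamingv1alpha1.Stream) (*corev1.ConfigMap, *corev1.Secret) {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: stream.Name + "-binding-metadata", Namespace: stream.Namespace},
        Data: map[string]string{
            "kind":     "streams.streaming.projectriff.io",
            "provider": stream.Name,
            "tags":     "", // empty, per the list above
        },
    }
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: stream.Name + "-binding-secret", Namespace: stream.Namespace},
        StringData: map[string]string{
            "gateway": stream.Status.Address.Gateway,
            "topic":   stream.Status.Address.Topic,
        },
    }
    return cm, secret
}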
Currently the deployers generate a service name to keep them unique. This makes it harder to discover services since you can't bind using just the DNS name; you would have to look it up based on labels. We should provide a way to define the actual name used, with the end-user responsible for avoiding clashes. If the user doesn't specify a name, then we should generate one as we do today.
The typical current pattern for updating a Status is the following:
// check if status has changed before updating, unless requeued
if !result.Requeue && !equality.Semantic.DeepEqual(stream.Status, original.Status) {
    // update status
    log.Info("updating stream status", "diff", cmp.Diff(original.Status, stream.Status))
    if updateErr := r.Status().Update(ctx, stream); updateErr != nil {
        log.Error(updateErr, "unable to update Stream status", "stream", stream)
        return ctrl.Result{Requeue: true}, updateErr
    }
}
If the inner reconcile() function returns with requeue == true, any Condition set on the resource status will not be surfaced. This explains why, for example, a failing stream provisioning is not surfaced at the moment.
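One possible fix, sketched against the snippet above: drop the Requeue guard, so that a changed status (and any Conditions recorded by the inner reconcile()) is always written back:

// Sketch: update the status whenever it differs from the original,
// even when a requeue was requested, so Conditions are always surfaced.
if !equality.Semantic.DeepEqual(stream.Status, original.Status) {
    log.Info("updating stream status", "diff", cmp.Diff(original.Status, stream.Status))
    if updateErr := r.Status().Update(ctx, stream); updateErr != nil {
        log.Error(updateErr, "unable to update Stream status", "stream", stream)
        return ctrl.Result{Requeue: true}, updateErr
    }
}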
Refs #117
- The README mentions dep instead of go modules.
- The list of CRDs for streaming is incomplete.
- The set of additional dependencies is outdated (knative build ...).
Pulsar has slashes in topic names. At least this piece of code will break. Pulsar is the second technology that liiklus supports for record storage, so this is low-hanging fruit. It allows validation of the loose coupling of providers, and will provide subject matter for #86.
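To illustrate the kind of breakage: any code that derives a Kubernetes resource name or a path segment from a topic name has to account for slashes. A purely illustrative escape (not the repo's actual handling) might be:

// Sketch only: Pulsar topics look like "persistent://tenant/ns/topic",
// so a raw topic name is not safe as a Kubernetes resource name.
func dnsSafeTopic(topic string) string {
    return strings.NewReplacer("://", "-", "/", "-", ":", "-").Replace(topic)
}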
If I add a volume referencing a secret to a Deployer, I get a validation error:
Error executing command:
Deployer.core.projectriff.io "petclinic" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"core.projectriff.io/v1alpha1", "metadata":map[string]interface {}{"name":"petclinic", "namespace":"default", "creationTimestamp":"2019-10-17T02:40:09Z", "generation":1, "uid":"6fa9353e-f087-11e9-81fa-42010a8001c4"}, "spec":map[string]interface {}{"build":map[string]interface {}{"applicationRef":"petclinic"}, "template":map[string]interface {}{"volumes":[]interface {}{map[string]interface {}{"name":"petclinic-mysql-binding", "secret":map[string]interface {}{"secretName":"petclinic-mysql-binding", "items":[]interface {}{map[string]interface {}{"key":"config.yaml", "path":"application.yaml"}}, "defaultMode":420}}}, "containers":[]interface {}{map[string]interface {}{"env":[]interface {}{map[string]interface {}{"name":"SPRING_PROFILES_ACTIVE", "value":"mysql"}, map[string]interface {}{"name":"SPRING_DATASOURCE_INITIALIZE", "value":"always"}}, "resources":map[string]interface {}{}, "volumeMounts":[]interface {}{map[string]interface {}{"name":"petclinic-mysql-binding", "readOnly":true, "mountPath":"/workspace/config"}}, "name":""}}}}, "kind":"Deployer"}: validation failure list:
type in spec.template.volumes.secret is required
This issue is to track ideas about a stream being seen as something you bind in the same way you could bind, say, a database connection:
Thanks to system/pkg/controllers/streaming/kafkaprovider_controller.go, lines 601 to 615 at 6737531
Add a property to the core and knative Deployers indicating whether an Ingress should be created. The name of the property is TBD.
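Purely as a placeholder, since the issue leaves the name TBD, the field might look like:

// Hypothetical field on the Deployer spec; the real name is TBD.
type DeployerSpec struct {
    // CreateIngress, when true, asks the runtime to provision an Ingress
    // routing external traffic to the deployer's Service.
    CreateIngress bool `json:"createIngress,omitempty"`
    // ... existing fields elided ...
}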
This issue was found by the CI:
2019-11-05T15:30:29.571Z ERROR controllers.Processor unable to update Deployment for Processor {"processor": "fats-1572966365-kind/fats-cluster-repeater-java", "deployment": {"name": ""}, "error": "resource name may not be empty"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/projectriff/system/pkg/controllers/streaming.(*ProcessorReconciler).reconcileProcessorDeployment
/home/runner/work/system/system/pkg/controllers/streaming/processor_controller.go:367
github.com/projectriff/system/pkg/controllers/streaming.(*ProcessorReconciler).reconcile
/home/runner/work/system/system/pkg/controllers/streaming/processor_controller.go:168
github.com/projectriff/system/pkg/controllers/streaming.(*ProcessorReconciler).Reconcile
/home/runner/work/system/system/pkg/controllers/streaming/processor_controller.go:91
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
Full logs here: https://github.com/projectriff/fats/pull/221/checks?check_run_id=289333525
Several global variables appear to be shared, or at least serially reused, by distinct kubernetes custom resources. This may explain issues, like this one, where the Ready condition becomes true even though the resource isn't really ready. It could also explain other problems where custom resource status behaves unexpectedly.
An example is the global variable processorCondSet, which is shared between processors, e.g. here:
func (ps *ProcessorStatus) InitializeConditions() {
    processorCondSet.Manage(ps).InitializeConditions()
}
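One way to rule out shared state, assuming the knative.dev/pkg/apis ConditionSet API this code uses (the condition types below are hypothetical), is to construct the set where it is consumed rather than in a package-level variable:

func (ps *ProcessorStatus) InitializeConditions() {
    // Sketch: a locally constructed set cannot be reused across resources.
    condSet := apis.NewLivingConditionSet(
        ProcessorConditionDeploymentReady, // hypothetical condition type
        ProcessorConditionStreamsReady,    // hypothetical condition type
    )
    condSet.Manage(ps).InitializeConditions()
}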
The controller currently uses knative/pkg as a foundation. The Kubernetes ecosystem is consolidating around Kubebuilder as the model for reconcilers.
Kubebuilder has a few key advantages:
The formal names for the CRDs are currently:
The builds.build bit is redundant and would be cleaner as:
However, we should not do this if we are likely to reintroduce higher level Function and Application CRDs as they will collide and force Kubectl users to qualify which resource they intend to work with (like Knative vs K8s Service).
I have a hello processor and it doesn't scale up when I post messages on the input topic. I see this in the keda logs:
{"level":"error","ts":1574810051.6503396,"logger":"controller_scaledobject","msg":"Error updating scaledObject status with used externalMetricNames","Request.Namespace":"default","Request.Name":"hello-processor-znrvs","error":"ScaledObject.keda.k8s.io \"hello-processor-znrvs\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"keda.k8s.io/v1alpha1\", \"kind\":\"ScaledObject\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2019-11-26T23:13:23Z\", \"finalizers\":[]interface {}{\"finalizer.keda.k8s.io\"}, \"generateName\":\"hello-processor-\", \"generation\":2, \"labels\":map[string]interface {}{\"deploymentName\":\"hello-processor-bfc7x\", \"streaming.projectriff.io/processor\":\"hello\"}, \"name\":\"hello-processor-znrvs\", \"namespace\":\"default\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"apiVersion\":\"streaming.projectriff.io/v1alpha1\", \"blockOwnerDeletion\":true, \"controller\":true, \"kind\":\"Processor\", \"name\":\"hello\", \"uid\":\"582a5508-10a2-11ea-a4af-42010a80002a\"}}, \"resourceVersion\":\"18691636\", \"uid\":\"582d10e7-10a2-11ea-a4af-42010a80002a\"}, \"spec\":map[string]interface {}{\"cooldownPeriod\":30, \"maxReplicaCount\":30, \"minReplicaCount\":0, \"pollingInterval\":1, \"scaleTargetRef\":map[string]interface {}{\"deploymentName\":\"hello-processor-bfc7x\"}, \"triggers\":[]interface {}{map[string]interface {}{\"metadata\":map[string]interface {}{\"address\":\"franz-kafka-gateway-r8n2h.default:6565\", \"group\":\"hello\", \"topic\":\"default_in\"}, \"type\":\"liiklus\"}}}, \"status\":map[string]interface {}{\"externalMetricNames\":[]interface {}{\"lagThreshold\"}}}: validation failure list:\nstatus.currentReplicas in body is required\nstatus.desiredReplicas in body is required","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/kedacore/keda/pkg/controller/scaledobject.(*ReconcileScaledObject).getScaledObjectMetricSpecs\n\tkeda/pkg/controller/scaledobject/scaledobject_controller.go:350\ngithub.com/kedacore/keda/pkg/controller/scaledobject.(*ReconcileScaledObject).newHPAForScaledObject\n\tkeda/pkg/controller/scaledobject/scaledobject_controller.go:296\ngithub.com/kedacore/keda/pkg/controller/scaledobject.(*ReconcileScaledObject).reconcileDeploymentType\n\tkeda/pkg/controller/scaledobject/scaledobject_controller.go:202\ngithub.com/kedacore/keda/pkg/controller/scaledobject.(*ReconcileScaledObject).Reconcile\n\tkeda/pkg/controller/scaledobject/scaledobject_controller.go:146\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"}
Refs #116
Currently we hard-code example.com as the domain; we should make that configurable.
@sbawaska and I observed that, sometimes, created streams do not expose their address at all. Yet the processor gets reconciled with bogus stream addresses (i.e. the result of an empty gateway and an empty topic separated by a slash, in other words "/") and ends up failing much later than it could.
The processor should not reconcile unless all streams' statuses are fully initialized.
Currently, when a stream is deleted we do nothing. The stream will continue to exist in Liiklus and the backing messaging middleware. We should minimally call the stream's provisioner when the stream is deleted. What the provisioner chooses to do is up to it.
We can ensure the streaming runtime controller has the opportunity to clean up the stream by introducing a finalizer on the stream. The controller will be responsible for setting the finalizer when the resource is created, and for calling the provider and clearing the finalizer after the stream is deleted.
Since there is a delay on delete introduced by finalizers, operations that rely on deleting and recreating a stream may fail, as you can't create a resource that is in the process of being deleted.
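A minimal controller-runtime-style sketch of that flow; the finalizer name, the deprovisionStream call, and the containsString/removeString slice helpers are assumptions:

const streamFinalizer = "streaming.projectriff.io/provisioner"

func (r *StreamReconciler) reconcileFinalizer(ctx context.Context, stream *streamingv1alpha1.Stream) error {
    if stream.DeletionTimestamp.IsZero() {
        // Live resource: ensure the finalizer is set so deletes wait for cleanup.
        if !containsString(stream.Finalizers, streamFinalizer) {
            stream.Finalizers = append(stream.Finalizers, streamFinalizer)
            return r.Update(ctx, stream)
        }
        return nil
    }
    if containsString(stream.Finalizers, streamFinalizer) {
        // Being deleted: let the provisioner clean up, then release the finalizer.
        if err := r.deprovisionStream(ctx, stream); err != nil {
            return err // returning the error requeues and retries the cleanup
        }
        stream.Finalizers = removeString(stream.Finalizers, streamFinalizer)
        return r.Update(ctx, stream)
    }
    return nil
}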
The pod logs an error like:
default/square-deployer-964sb-8b4c779b6-5r4sq[handler]: E1028 18:32:19.021680203 15 server_chttp2.cc:40] {"created":"@1572287539.021649377","description":"No address added out of total 1 resolved","file":"../deps/grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":394,"referenced_errors":[{"created":"@1572287539.021645968","description":"Failed to add port to server","file":"../deps/grpc/src/core/lib/iomgr/tcp_server_custom.cc","file_line":404,"referenced_errors":[{"created":"@1572287539.021640246","description":"Failed to initialize UV tcp handle","file":"../deps/grpc/src/core/lib/iomgr/tcp_uv.cc","file_line":72,"grpc_status":14,"os_error":"address family not supported"}]}]}
Dependabot couldn't parse the go.mod found at /go.mod.
The error Dependabot encountered was:
go: sigs.k8s.io/[email protected] requires
k8s.io/[email protected]: invalid pseudo-version: does not match short name of revision (c18f71bf2947)
riff now depends on system to provide the in-cluster runtime for the riff experience. This repo needs to be treated like any other projectriff repo.
Refs #106
Many files in this project are generated and should not be edited by hand. We should make sure that generated files are labeled as such, to help people avoid wondering where their edit went after they run make.
When you kubectl describe a resource, it displays the resource followed by events for that resource. Our reconciler does not report any events, but it's nice to see a history of how a given resource was reconciled, in particular if there was an error during the reconciliation.
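A minimal sketch of wiring an event recorder in, assuming controller-runtime's manager and client-go's record package; the names are illustrative:

type StreamReconciler struct {
    client.Client
    // Recorder would come from mgr.GetEventRecorderFor("stream-controller").
    Recorder record.EventRecorder
}

func (r *StreamReconciler) recordOutcome(stream runtime.Object, err error) {
    if err != nil {
        r.Recorder.Eventf(stream, corev1.EventTypeWarning, "ReconcileFailed",
            "reconcile error: %v", err)
        return
    }
    r.Recorder.Event(stream, corev1.EventTypeNormal, "Reconciled", "reconcile succeeded")
}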
One of the main benefits of the Knative runtime over the Core runtime is that it provides HTTP-based autoscaling for workloads. We should fill that gap.
Design principles:
Other things to keep in mind:
This better reflects what that field really is, while preserving loose coupling.
This allows us to pick up the change of the API group from certmanager.k8s.io to cert-manager.io, which avoids a failing KubernetesAPIApprovalPolicyConformant status that causes kapp to wait forever when installing.