openfaas / faas-netes
Serverless Functions For Kubernetes
Home Page: https://www.openfaas.com
License: MIT License
Suggestion from @stefanprodan - to enable NATS Streaming for async by default.
Deployment already includes NATS Streaming for async behaviour
Optional via helm or by using separate YAML
Bake it in, but document how to swap/remove in markdown/docs.
I have an image processing function that needs 13G of memory. I created a cluster
with 16G of memory across 3 nodes, but I cannot find a way to configure my function
pod's memory size or swap size. And if several requests arrive at the same time, how
do I configure things so that each function call has enough memory to complete its job?
The image processing job should have enough memory (13G) to finish and upload the result.
Since faas-cli deploys functions with default settings, the function's pod YAML only sets a CPU resource request of 100m. This causes the image processing job to fail with a not-enough-memory error. I observed that the pod only uses 40M of memory and 0.419 CPU cores.
Does faas-cli or faas-netes have a command to set the deployed function's memory/swap size? Or can I set the function pod's memory manually? If I can make sure the function's pod YAML has memory requests and limits, I can let jobs with large resource requirements complete safely.
I can run my image processing function on local Docker with a large swap. I tried enlarging
my cluster's memory, but the root cause is that the function's pod can only use 40M of memory and 0.419 cores per request. Checking the pod's YAML shows resources->requests->cpu: 100m. I guess this is created by the faas-cli deploy command or by default faas-netes settings. Can I use faas-cli, or change faas-netes settings, to increase my function's pod memory?
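For reference, what the reporter is asking for is the standard Kubernetes resources block on the function's container spec; a sketch is below (the 13Gi figure comes from the issue, the function and image names are examples). At the time of this issue it would have to be applied with kubectl edit/patch rather than through faas-cli:

```yaml
# Hypothetical excerpt of a function Deployment's container spec
containers:
- name: image-processor           # example function name
  image: example/image-processor:latest
  resources:
    requests:
      memory: "13Gi"              # guarantee enough memory for the job
    limits:
      memory: "13Gi"              # cap so one invocation cannot starve the node
```

Note that Kubernetes does not schedule against swap; the kubelet normally requires swap to be disabled, so only real memory can be requested this way.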
docker version
(e.g. Docker 17.0.05):

This proposal is inspired by Docker Swarm secret management, but should be generalizable to Kubernetes Secrets and possibly other third-party secret management systems.
It is common for a function to need to connect to a database or a secure API. The username and password values used to connect to these data sources should be kept secret and secure. Currently these values are either hardcoded into functions or provided as environment variables. This means the values are not encrypted at rest, and it leads to the highly likely situation where secret values are checked into git repositories. This puts these sensitive values at risk.
Both Docker Swarm and Kubernetes have built-in secret management systems. In particular, both provide an API for managing secrets and support mounting those secrets into services as files. Kubernetes is the most flexible, allowing you to specify the exact path where these files are mounted. Docker Swarm, on the other hand, strictly mounts the secrets to /run/secrets
inside the containers.
OpenFaaS should support defining services that access secret values from an encrypted store.
Secrets can currently be provided via environment variables, but these values are not encrypted at rest.
The end user should be able to provide secrets to the orchestration layer (docker swarm or kubernetes) and then reference those same secrets in the function. The creator of a function would simply provide a list of secrets that are required for the function during the function creation (or in the stack yaml for the cli) and the orchestration layer would be responsible for mounting those secrets as files in a standard location.
Specifically, secrets should be mounted as files at /run/secrets.

// CreateFunctionRequest creates a function in the swarm.
type CreateFunctionRequest struct {
// Service corresponds to a Docker Service
Service string `json:"service"`
// Image corresponds to a Docker image
Image string `json:"image"`
// Network is specific to Docker Swarm - default overlay network is: func_functions
Network string `json:"network"`
// EnvProcess corresponds to the fprocess variable for your container watchdog.
EnvProcess string `json:"envProcess"`
// EnvVars provides overrides for functions.
EnvVars map[string]string `json:"envVars"`
// Secrets is a list of secrets required for the orchestration layer to provide
Secrets []string `json:"secrets"`
// RegistryAuth is the registry authentication (optional)
// in the same encoded format as Docker native credentials
// (see ~/.docker/config.json)
RegistryAuth string `json:"registryAuth,omitempty"`
// Constraints are specific to back-end orchestration platform
Constraints []string `json:"constraints"`
}
I want to use this functionality specifically to securely store database and API access credentials.
Install Server Side Tiller Component Via:
$ helm init --skip-refresh --upgrade --service-account tiller
Installation Failed:
$ sudo helm init --skip-refresh --upgrade --service-account tiller
/usr/local/bin/helm: 1: /usr/local/bin/helm: ELF: not found
/usr/local/bin/helm: 2: /usr/local/bin/helm: }▒: not found
/usr/local/bin/helm: 8: /usr/local/bin/helm: Syntax error: "(" unexpected
(This output is typical of a binary built for a different CPU architecture being interpreted by /bin/sh.)
docker version
(e.g. Docker 17.0.05):

This matches the API gateway in the faas repo:
read_timeout - seconds as an integer
write_timeout - seconds as an integer
The code can be copied from read_config.go
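A minimal sketch of how such a config reader might look, modelled on the gateway's read_config.go (the env-var names come from the issue; the exact ReadConfig shape here is an assumption, not the gateway's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// parseIntValue converts an environment variable to a non-negative integer,
// falling back to a default when the variable is unset or invalid.
func parseIntValue(val string, fallback int) int {
	if parsed, err := strconv.Atoi(val); err == nil && parsed >= 0 {
		return parsed
	}
	return fallback
}

// ReadConfig holds the HTTP timeouts, given in seconds as integers,
// as with the API gateway.
type ReadConfig struct {
	ReadTimeout  time.Duration
	WriteTimeout time.Duration
}

// Read populates ReadConfig from the read_timeout / write_timeout env-vars.
func (ReadConfig) Read() ReadConfig {
	cfg := ReadConfig{}
	cfg.ReadTimeout = time.Duration(parseIntValue(os.Getenv("read_timeout"), 8)) * time.Second
	cfg.WriteTimeout = time.Duration(parseIntValue(os.Getenv("write_timeout"), 8)) * time.Second
	return cfg
}

func main() {
	os.Setenv("read_timeout", "20")
	cfg := ReadConfig{}.Read()
	fmt.Println(cfg.ReadTimeout, cfg.WriteTimeout)
}
```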
fprocess and other variables sent via the FaaS CLI or a RESTful client should be propagated to the container template spec.
Make the change here:
https://github.com/alexellis/faas-netes/blob/master/handlers/deploy.go#L87
Read the value from this request type:
https://github.com/alexellis/faas/blob/master/gateway/requests/requests.go#L20
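The propagation could be sketched like this; EnvVar here is a simplified stand-in for the Kubernetes apiv1.EnvVar type, and buildEnvVars is a hypothetical helper, not the actual deploy.go code:

```go
package main

import "fmt"

// EnvVar is a simplified stand-in for the Kubernetes apiv1.EnvVar type.
type EnvVar struct {
	Name, Value string
}

// buildEnvVars maps the gateway request's EnvProcess and EnvVars fields
// onto the container template's environment. "fprocess" is the variable
// the watchdog reads to know which process to fork per request.
func buildEnvVars(envProcess string, overrides map[string]string) []EnvVar {
	envVars := []EnvVar{}
	if len(envProcess) > 0 {
		envVars = append(envVars, EnvVar{Name: "fprocess", Value: envProcess})
	}
	for k, v := range overrides {
		envVars = append(envVars, EnvVar{Name: k, Value: v})
	}
	return envVars
}

func main() {
	fmt.Println(buildEnvVars("cat", map[string]string{"debug": "true"}))
}
```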
Add Memory limits to functions on Kubernetes
Since OpenFaaS is agnostic of the provider, I'm looking at using the Swarm notation, which is much easier/cleaner to parse and doesn't involve vendoring inordinate amounts of complex code.
See: openfaas/faas#385 openfaas/faas-cli#223 openfaas/faas-cli#225
This would allow for easier deployment with Helm, and would also allow for blazing fast deployment with tools such as Kubeapps, which pull charts from repositories.
We currently have a max number of replicas for any function, but in Swarm we also support overriding that. This issue is about providing a label that can be passed at deploy-time to specify the minimum number of replicas and maximum number too.
Running
kubectl apply -f ./faas.armhf.yml,monitoring.armhf.yml,rbac.yml
should bring up pods on Raspberry Pi k8s cluster.
alertmanager pod goes into CrashLoopBackOff.
HypriotOS/armv7: pirate@navi in ~
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
alertmanager-2609462557-bf54n 0/1 CrashLoopBackOff 142 11h 10.244.2.5 tatl
faas-netesd-1317931779-wk4gw 1/1 Running 0 11h 10.244.2.3 tatl
gateway-1085489343-cgs53 1/1 Running 0 11h 10.244.1.5 tael
prometheus-4259297277-qhj0b 1/1 Running 0 11h 10.244.2.4 tatl
Alertmanager pod log shows missing /alertmanager.yml
HypriotOS/armv7: pirate@navi in ~
$ kubectl logs alertmanager-2609462557-bf54n
time="2017-09-19T18:50:52Z" level=info msg="Starting alertmanager (version=0.5.1, branch=master, revision=0ea1cac51e6a620ec09d053f0484b97932b5c902)" source="main.go:101"
time="2017-09-19T18:50:52Z" level=info msg="Build context (go=go1.7.3, user=root@fb407787b8bf, date=20161125-08:19:00)" source="main.go:102"
time="2017-09-19T18:50:52Z" level=info msg="Loading configuration file" file="/alertmanager.yml" source="main.go:195"
time="2017-09-19T18:50:52Z" level=error msg="Loading configuration file failed: open /alertmanager.yml: no such file or directory" file="/alertmanager.yml" source="main.go:198"
Update https://github.com/alexellis/faas-netes/blob/master/monitoring.armhf.yml#L68 to
command: ["/bin/alertmanager","-config.file=/etc/alertmanager/config.yml", "-storage.path=/alertmanager"]
kubectl apply -f ./faas.armhf.yml,monitoring.armhf.yml,rbac.yml
Following example doc: https://github.com/alexellis/faas/blob/master/guide/deployment_k8s.md
docker version
(e.g. Docker 17.0.05):

$ docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:30:54 2017
OS/Arch: linux/arm
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:30:54 2017
OS/Arch: linux/arm
Experimental: false
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Kubernetes v1.7.5 (FaaS-netes)
Operating System and version (e.g. Linux, Windows, MacOS):
HypriotOS v1.5.0 on Raspberry Pi 3
When I perform a faas-cli deploy -f function.yml against my Kubernetes cluster, I receive the message "Server returned unexpected status code 500 deployment.extension 'function name' not found". The function works fine and there are no issues using it - it appears to be a cosmetic error, but I can't determine the origin.
I would expect the functions to deploy without the error message displaying, OR for more details to be provided about why the error occurs.
When I perform a faas-cli deploy -f function.yml against my Kubernetes cluster, I receive the message "Server returned unexpected status code 500 deployment.extension 'function name' not found".
Provide additional details regarding the error message that's being triggered
Docker version docker version
(e.g. Docker 17.0.05 ):
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Faas-Netes
Operating System and version (e.g. Linux, Windows, MacOS):
CentOS 7
Link to your project or a code example to reproduce issue:
Examples of the code I'm deploying are here - https://github.com/codyde/faas-functions/
I used Helm to set up the OpenFaaS stack
and configured my function with
gateway: http://localhost:8001/api/v1/proxy/namespaces/default/services/gateway:8080
deployed it, but when I navigate to https://EXTERNAL_CLUSTER_IP/system/functions/FUNCTION_NAME
I'm getting the following error
User "system:anonymous" cannot get path "/system/functions/FUNCTION_NAME".: "No policy matched.\nUnknown user \"system:anonymous\""
I'm trying to deploy a Telegram bot that uses a webhook.
I'm new to Kubernetes and I've had a really hard time deploying to GKE. I think we can all benefit from some more resources regarding this combo (GKE + faas-netes).
Thanks!
add NOTES.txt in Charts
Docker version docker version
(e.g. Docker 17.0.05 ):
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Operating System and version (e.g. Linux, Windows, MacOS):
Link to your project or a code example to reproduce issue:
Nice work. Just wanted to understand how this compares to https://github.com/fission/fission?
Thanks.
This is currently not implemented but is needed for the FaaS CLI.
Default Delete/Create approach gives 500 error when it should be 404
$ faas-cli deploy -f samples.yml
500 deployment function_name not found.
function_name deployed
This should say 404 instead.
@weikinhuang do you think you can take a look?
Let's try using https://golang.org/pkg/net/http/httputil/#ReverseProxy in this code: https://github.com/alexellis/faas-netes/blob/master/handlers/proxy.go
I think it could be more efficient and less maintenance.
Deployment, pod and service names must conform to the DNS-1123 label requirement: they must consist of lower-case alphanumeric characters or '-', and must start and end with an alphanumeric character. The regular expression Kubernetes uses to validate names is '[a-z0-9]([-a-z0-9]*[a-z0-9])?'. If you use a name for your deployment that does not validate, an error is returned.
"function1" should be valid, but isn't with our current regex.
Currently you must type a digit-free name such as functiontwo.
Alter regex and unit tests in project.
faas-cli deploy --image functions/alpine --name function1 --fprocess=env
Start here:
https://github.com/openfaas/faas-netes/blob/master/handlers/deploy.go#L29
Must have unit test coverage for positive and negative scenarios
Must be tested on Kubernetes 1.8 (with console output pasted into the PR)
Must pass CI etc.
Valid RegEx needs to be documented in the readme and in the troubleshooting guide in the main faas repo.
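A sketch of what the corrected validation could look like (the function name validateService is an assumption for illustration, not the existing handler code):

```go
package main

import (
	"fmt"
	"regexp"
)

// validDNSName matches a DNS-1123 label: lower-case alphanumerics or '-',
// starting and ending with an alphanumeric character.
var validDNSName = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateService returns an error when the function name would be
// rejected by Kubernetes as a deployment/service name.
func validateService(name string) error {
	if !validDNSName.MatchString(name) {
		return fmt.Errorf("(%s) is not a valid DNS-1123 label", name)
	}
	return nil
}

func main() {
	for _, name := range []string{"function1", "functiontwo", "Function_One"} {
		fmt.Println(name, validateService(name))
	}
}
```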
Currently, faas-netes assumes that all secrets are related to Docker credentials, specifically that they should be mounted as ImagePullSecrets. See the current deploy handler, where it sets each named secret as an image pull secret:
imagePullSecrets := []apiv1.LocalObjectReference{}
for _, secret := range request.Secrets {
    imagePullSecrets = append(imagePullSecrets,
        apiv1.LocalObjectReference{
            Name: secret,
        })
}
Image pull secrets are a special use case, but they do not actually let functions use secrets for their own unique needs, for example DB credentials. This functionality has been implemented for Docker Swarm; see pull request 292.
Kubernetes allows several configuration options when mounting and using secrets, more than Swarm does. I propose that we deploy functions so that secret usage in Swarm and K8S is identical. Specifically, this means that secrets are mounted as files at /run/secrets.
Per the documentation on kubernetes.io, this can be implemented as follows:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/run/secrets"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
Implementing this requires a potentially backwards-incompatible change, or awkward naming going forward. Concretely, I propose that:
The request.Secrets field should be reserved for function secrets.
When creating the DeploymentSpec for the function, the secrets list should be added to the volumes list so that the secrets are mounted as files at /run/secrets, as in the example YAML above. In the source code this would roughly look like:
Containers: []Container{
    {
        VolumeMounts: []VolumeMount{
            {
                Name:      "Secret1",
                ReadOnly:  true,
                MountPath: "/run/secrets",
            },
        },
    },
},
Volumes: []Volume{
    {
        Name: "Secret1",
        Secret: &SecretVolumeSource{
            SecretName: "Secret1",
        },
    },
},
Registry credential secrets should be passed in a new request.RegistryAuth field; this would again be a list of strings, e.g.
RegistryAuth:
- dockerHub
- internalHub
- internal-gcr
- internal-ecr
This would allow K8S to pull from multiple private registries and is already supported by the K8S API.
I believe this proposal will break the current support for ImagePullSecrets. Alternatively, we could continue to support ImagePullSecrets via request.Secrets and then add a new request.FunctionSecrets to support secrets that are mounted as files, as described above. This would require refactoring the secrets support in the core gateway, and I also believe it brings in awkward naming issues. Consider this table of how the concepts translate between the platforms we currently support:
| | OpenFaaS | Swarm | k8s |
|---|---|---|---|
| Registry auth | secrets | docker login | imagePullSecrets |
| Secret value | functionSecrets | secrets | secrets |
It feels like using a new name for secrets mounted as a file would be confusing in the long term.
The trickiest part of this proposal is agreeing on an API and the potential break in backwards compatibility.
Deployed by helm chart with functionNamespace set to 'openfaas-fn'.
Async function called by queue-worker.
The async function could not be called, as the function's service could not be resolved.
Define faas_function_suffix environment variable for queue-worker in helm chart.
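A sketch of the proposed fix in the queue-worker's chart template (the variable name comes from the issue; the suffix value shown is an example matching a functionNamespace of openfaas-fn):

```yaml
env:
- name: faas_function_suffix
  value: ".openfaas-fn"   # example: matches functionNamespace so the
                          # async function's service name resolves
```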
docker version
(e.g. Docker 17.0.05):

Server:
Version: 17.06.1-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:54:55 2017
OS/Arch: linux/amd64
Experimental: false
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
FaaS-netes
Operating System and version (e.g. Linux, Windows, MacOS):
Ubuntu Server
Link to your project or a code example to reproduce issue:
Prior to Kubernetes v1.7, deletion basically requires four separate Kubernetes API requests: the first scales the number of pods down to zero, the second removes the deployment object, the third removes the replicaset object, and the last one removes the service object (exactly how kubectl works). This method leaves no orphaned objects behind; it effectively performs a proper GC after deletion.
On older versions of Kubernetes, by default only the deployment and service objects are removed, and all the other subordinate objects (replicaset and pods) will still be running on Kubernetes; this is definitely some kind of GC issue.
This issue has already been addressed for Kubernetes >=v1.7 #44058.
That said, it would be nice if you could support older versions as well by applying the proposed changes to the faas-netesd code.
Thanks
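The four steps above can be sketched with kubectl as follows ('figlet' is an example function name, and the faas_function label selector is an assumption about how faas-netes labels its objects; faas-netesd would issue the equivalent API calls):

```sh
# 1. Scale the function's pods down to zero
kubectl scale deployment figlet --replicas=0
# 2. Remove the deployment object
kubectl delete deployment figlet
# 3. Remove the orphaned replicaset object
kubectl delete replicaset -l faas_function=figlet
# 4. Remove the service object
kubectl delete service figlet
```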
I believe there is a potential issue with env-vars and Deployment v1beta1 when testing on Kubernetes 1.6.7 with RBAC enabled
Rolling updates should honour env-vars set in the container spec
They're not being honoured.
The code is present; we may need to try different versions of Kubernetes to see if that resolves things.
https://github.com/openfaas/faas-netes/blob/master/handlers/update.go#L44
Deploy:
faas-cli deploy --image functions/alpine:latest --fprocess="env" --name env-tester \
--env env1=true
Invoke it and you'll see env1 set to "true".
Now do a rolling-update:
faas-cli deploy --image functions/alpine:latest --fprocess="env" --name env-tester \
--env env1=false --update=true --replace=false
In my configuration I saw that Kubernetes did not honour the env-vars and env1 was still "true".
This needs to sync with the main faas
project, but the idea is to provide memory/CPU limits for functions as we create their deployment spec through the Kubernetes API.
Unbounded.
Add to basic Create type in faas
project.
Pass values onto the spec when doing a create/update.
On a fresh 1.7.5 minikube, helm fails to deploy a release of openfaas due to the inclusion of the RBAC rules, with the error below.
helm install --name openfaas --set async=true ./openfaas/
should return the release details.
Error: release openfaas failed: clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["update"]}] user=&{system:serviceaccount:kube-system:tiller acb814ed-ab9a-11e7-ab23-1673569d708d [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
You can still install openfaas via Helm skipping use of RBAC with
helm install --name openfaas --set async=true --set rbac=false ./openfaas/
This is preventing use of openfaas via Helm with default settings
Docker version docker version
(e.g. Docker 17.0.05 ):
Version: 17.09.0-ce
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Faas-Netes
Operating System and version (e.g. Linux, Windows, MacOS):
MacOS
Hi,
I am trying to deploy faas-netes in my IPv6-only Kubernetes cluster (v1.8.2).
Every service seems to work at a glance, but a problem occurs when the gateway pod tries to redirect to the function pod.
I followed this guide: https://blog.alexellis.io/first-faas-python-function/ (hello-python function).
# Curl the final function pod
16:31:22 › curl '[1404:f200:f::d011:e21d]:8080' -d 'open-faas is awesome'
Hello! You said: open-faas is awesome
# Logs when trying to curl the Gateway pod :
2017/10/30 05:29:44 > Forwarding [POST] to /function/hello-pytho
2017/10/30 05:31:22 http: proxy error: context canceled
2017/10/30 05:31:22 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 502 took 97.582934 seconds
2017/10/30 05:31:22 function=hello-python
GetHeaderCode before 502
16:35:15 › k -n openfaas exec -it gateway-b4bc9dd89-r79m4 sh
~ # ping faas-netesd.openfaas.svc.domain-k8s.tld
PING faas-netesd.openfaas.svc.domain-k8s.tld (1404:f200:f::66a8:6360): 56 data bytes
64 bytes from 1404:f200:f::66a8:6360: seq=0 ttl=62 time=0.608 ms
( internal pod)
bash-4.3# curl http://faas-netesd.openfaas.svc.domain-k8s.tld:8080
404 page not found
bash-4.3# curl http://gateway.openfaas.svc.domain-k8s.tld:8080/
<a href="/ui/">Moved Permanently</a>.
# When curl the gateway without '-d curl argument' get HTTP 200 :
- curl 'gateway.openfaas.svc.domain-k8s.tld:8080'/function/hello-python
2017/10/30 05:43:15 > Forwarding [GET] to /function/hello-python
2017/10/30 05:43:15 > Forwarding [GET] to /function/hello-python
2017/10/30 05:43:15 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 200 took 0.004765 seconds
2017/10/30 05:43:15 function=hello-python
GetHeaderCode before 200
2017/10/30 05:43:15 < [http://faas-netesd.openfaas.svc.domain-k8s.tld:8080/function/hello-python] - 200 took 0.004765 seconds
2017/10/30 05:43:15 function=hello-python
GetHeaderCode before 200
EDIT :
Maybe I have found the cause of the problem;
faas-netesd logs :
2017/10/30 05:48:31 Post http://:8080/function/hello-python: EOF
2017/10/30 05:48:31 [1509342018] took 493.257782 seconds
It seems faas-netesd tries to use the ClusterIP, but in an IPv6-only cluster the ClusterIP is useless.
Is there a feature to use the Service's Endpoints instead of the Service's ClusterIP to speak to the function (like nginx-ingress does)?
Thanks for OpenFaaS! Pretty awesome project.
Create a CRD (Custom Resource Definition) which represents the Function abstraction, plus a controller to reconcile the state.
A Function would internally create the existing pair of a Deployment and a Service.
The faas-netesd controller would be responsible for accepting the Function definition via the existing RESTful API. Then, rather than creating/updating/deleting services and deployments directly, we'd create/update/delete Functions.
The controller would then be responsible for CRUD on Services/Deployments.
Other concerns include:
Level: advanced
Status: design/PoC
Initial work / artifacts should begin to define and scope the work only.
Please create architectural diagrams to support design.
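A minimal sketch of what such a CRD might look like on Kubernetes 1.8 (the group and version names here are placeholders, not a settled design):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: functions.openfaas.example.com   # must be <plural>.<group>
spec:
  group: openfaas.example.com            # placeholder group
  version: v1alpha1
  scope: Namespaced
  names:
    plural: functions
    singular: function
    kind: Function
```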
I will add an e2e test using minikube.
Docker version docker version
(e.g. Docker 17.0.05 ):
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Operating System and version (e.g. Linux, Windows, MacOS):
Link to your project or a code example to reproduce issue:
The latest version of Derek uses YAML format for managing maintainers and features. The repo should include a .DEREK.yml to enable the use of the latest version of Derek.
Derek is still wearing flares & a tank top and relies on MAINTAINERS.
Add a .DEREK.yml which includes the current maintainers, with comments and dco_check as enabled features.
Derek is evolving, the project must too.
N/A
At the bottom of the list of additional Helm chart options, there is a missing bullet point.
### Additional OpenFaaS Helm chart options:
* `functionNamespace=default` - defaults to the deployed namespace; the kube namespace to create function deployments in
* `async=true/false` - defaults to false, deploy nats if true
* `armhf=true/false` - defaults to false, use arm images if true (missing images for async)
* `exposeServices=true/false` - defaults to true, will always create `ClusterIP` services, and expose `NodePorts/LoadBalancer` if true (based on serviceType)
* `serviceType=NodePort/LoadBalancer` - defaults to NodePort, type of external service to use when exposeServices is set to true
* `rbac=true/false` - defaults to true, if true create roles
`ingress.enabled=true/false` - defaults to false, set to true to create ingress resources. See openfaas/values.yaml for detailed Ingress configuration.
`ingress.enabled=true/false` should have a `*` in front of it.
The API Gateway will now accept a GET invocation for a method.
We should check if faas-netes supports this and if it has any if-guards, we should remove those and do some basic testing with functions.
I.e. https://github.com/openfaas/faas-netes/blob/master/handlers/proxy.go#L41
It looks like this flow is not working as expected:
(Looks OK)
(Change is not reflected in the pod)
If you run kubectl delete manually on the svc/deployment, it seems to be fixed when deployed again.
Alternatively - use a unique tag name for each push
To follow the style of Docker Swarm openfaas/faas#414 we should update the create and update handlers so that if a label is specified for the minimum amount of replicas that is used instead of "1".
labels:
  "com.openfaas.scale.min": "5"
  "com.openfaas.scale.max": "15"
In this instance we'd pick 5.
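A sketch of how the create/update handlers could read the minimum-replicas label with the "1" fallback (getMinReplicaCount is a hypothetical helper name; the label name follows the Swarm work in openfaas/faas#414):

```go
package main

import (
	"fmt"
	"strconv"
)

const defaultMinReplicas = 1

// getMinReplicaCount reads the com.openfaas.scale.min label from a
// function's deploy request, falling back to the default of 1 when the
// label is missing or invalid.
func getMinReplicaCount(labels map[string]string) int {
	if value, exists := labels["com.openfaas.scale.min"]; exists {
		if minReplicas, err := strconv.Atoi(value); err == nil && minReplicas > 0 {
			return minReplicas
		}
	}
	return defaultMinReplicas
}

func main() {
	labels := map[string]string{
		"com.openfaas.scale.min": "5",
		"com.openfaas.scale.max": "15",
	}
	fmt.Println(getMinReplicaCount(labels)) // picks 5 for the example above
}
```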
Links should point to the new shiny openfaas org
They point to alexellis
Update them
Hi Alex,
Been working to run faas-netes as a demo ontop of http://play-with-k8s.com
While debugging some DNS failures, I noticed the faas gateway was swamping the DNS server with requests for possible permutations of a k8s or AWS service domain, i.e.:
.default.svc.cluster.local
.svc.cluster.local
.cluster.local
.ec2.internal
See for more tcpdump examples.
I thought setting function_provider_url
in faas.yml
to simply faas-netesd
(the first name the lookups try) would solve this and not lookup any of the other domains, but it seems it's not just for finding faasd itself, the same hunting behaviour is also then seen with each configured function when hit via the gateway;
15:41:53.756886 IP (tos 0x0, ttl 64, id 34988, offset 0, flags [DF], proto UDP (17), length 181)
10.32.0.3.domain > 10.44.0.0.46904: [bad udp cksum 0x1501 -> 0xb1b0!] 58910 NXDomain q: AAAA? urlping2.default.default.svc.cluster.local. 0/1/0 ns: cluster.local. SOA ns.dns.cluster.local. hostmaster.cluster.local. 1502290800 28800 7200 604800 60 (153)
Would it be possible to add an env: item into faas.yml, such as faas_service_lookup_domain, which in my case would be set to `default.svc.cluster.local`, in order to prevent this try-em-all behaviour?
I ask specifically because PWK has a default limit of 150 concurrent DNS queries, which this behaviour is DoS-ing, and I think PWK would make a nice testbed for teaching / using faas.
Thanks.
Characters which are valid for Docker Swarm, such as _, are not valid for Kubernetes services.
We should prevent Prometheus scraping of function services, this can be done by adding an annotation to the service meta.
Since the watchdog doesn't expose a /metrics endpoint, Prometheus should not try to scrape any function.
If Prometheus is configured with Kubernetes service discovery it will try to scrape the functions resulting in function errors and high resources usage.
On function deploy, add the prometheus.io/scrape: "false" annotation to the function service.
The colorisebot will run into OOM after multiple calls to /metrics.
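With Kubernetes service discovery, Prometheus conventionally honours a prometheus.io/scrape annotation on the service, so the deploy handler would produce something like the sketch below (the function name is an example). Note this only has an effect when the Prometheus scrape config includes the corresponding relabelling rule:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-function                 # example function name
  annotations:
    prometheus.io/scrape: "false"   # tell Prometheus SD not to scrape this service
```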
Hi, is there any plan to add Nomad support?
Thanks
Successfully deploy faas-netes on Kubernetes following the deployment_k8s.md guide.
The deploy fails with the error message below:
`kubectl apply -f ./faas.yml,monitoring.yml,rbac.yml
service "faas-netesd" configured
serviceaccount "faas-controller" configured
deployment "faas-netesd" configured
service "gateway" configured
deployment "gateway" configured
service "prometheus" configured
deployment "prometheus" configured
service "alertmanager" configured
deployment "alertmanager" configured
clusterrolebinding "faas-controller" configured
Error from server (Forbidden): error when creating "rbac.yml": clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["create"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["delete"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["update"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["create"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["delete"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["update"]}] user=&{[email protected] [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]`
It's similar to #41, but I am not using Helm or minikube.
execute kubectl apply -f ./faas.yml,monitoring.yml,rbac.yml
I have an image processing function and want to use the FaaS framework so it can be triggered by my web service, but I am stuck at the deploy stage. The key error is: error when creating "rbac.yml": clusterroles.rbac.authorization.k8s.io "faas-controller" is forbidden: attempt to grant extra privileges
I'm new to Kubernetes and I'm not sure how to grant these privileges on my cluster.
docker version
(e.g. Docker 17.0.05):

Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:45:38 2017
OS/Arch: linux/amd64
Experimental: true
We should offer Horizontal Pod Autoscaling as an alternative to alert-manager+Prometheus auto-scaling for Kubernetes.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
cc/ @stefanprodan
Auto-scaling currently works through the API Gateway so is automatic for any provider and can be tweaked with min/max scaling numbers and also varying alerts - if you want you can include NodeExporter metrics.
Built-in
Agnostic to provider
HPA2 is Kubernetes native
Will not work for Swarm or other back-ends
Harder to configure up front
A feature flag or env-var would allow us to toggle between the two.
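On Kubernetes 1.8 this could be as simple as pointing an HPA at a function's deployment; a sketch (the names and targets below are examples, and HPA requires a CPU request to be set on the function):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-function
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: my-function              # example function deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```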
It would be great to get a fine-tune done on the RBAC permissions which are currently broader than they need to be. Hoping for a PR from @luxas soon.
Memory-based constraints are already enabled in the k8s provider; we should extend this functionality to add CPU-based constraints, which are already defined in the gateway.
https://github.com/openfaas/faas/blob/613ac888cdb6ffad2dcbef90282f1eea90ce85a3/gateway/requests/requests.go#L47
When a new function is deployed, or a replica is created, any CPU constraints set in the stack file should be applied to the deployment spec.
Currently only memory is applied; CPU is ignored.
Amend func createResources(request requests.CreateFunctionRequest) to handle CPU limits passed from the gateway.
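A dependency-free sketch of the amendment; the FunctionResources type mirrors the gateway's request shape, but in faas-netes proper the result would be an apiv1.ResourceRequirements built with resource.MustParse rather than the plain maps used here:

```go
package main

import "fmt"

// FunctionResources mirrors the gateway's request type: quantities are
// plain strings such as "128Mi" or "100m".
type FunctionResources struct {
	Memory string
	CPU    string
}

// createResources builds a limits/requests map from the request, now
// copying CPU in addition to memory. Plain maps keep the sketch
// dependency-free; real code would use apiv1.ResourceRequirements.
func createResources(limits, requests *FunctionResources) map[string]map[string]string {
	resources := map[string]map[string]string{
		"limits":   {},
		"requests": {},
	}
	apply := func(target map[string]string, src *FunctionResources) {
		if src == nil {
			return
		}
		if len(src.Memory) > 0 {
			target["memory"] = src.Memory
		}
		if len(src.CPU) > 0 { // the new part: honour CPU as well
			target["cpu"] = src.CPU
		}
	}
	apply(resources["limits"], limits)
	apply(resources["requests"], requests)
	return resources
}

func main() {
	res := createResources(&FunctionResources{Memory: "128Mi", CPU: "200m"}, nil)
	fmt.Println(res)
}
```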
For Hacktoberfest
We could do with a guide for running the open-source self-hosted registry with OpenFaaS and Docker Swarm. Also Kubernetes via
A Helm chart would make deploying FaaS to Kubernetes much simpler, and it is quickly becoming the standard way to try out third-party solutions on Kubernetes.
Once FaaS on Kubernetes is somewhat stable, we can submit our chart to the official repo: https://github.com/kubernetes/charts
I am happy to work on this unless someone else wants to give it a go.
I will write a guide in the form of a blog post about how to deploy OpenFaaS on Kubernetes on AWS.
This work item was suggested by @alexellis, who also suggested that I create an issue to track my progress of this task.
I will leverage my K8s-on-AWS deploy script, which is here: https://github.com/ericstoekl/bash_profile/blob/master/kubesetup.bash
A BestEffort pod (one with no resource definition) is the first candidate to be evicted when a node is under resource pressure. This is never recommended for system pods.
FaaS components should all have at least a resource limit.
FaaS components currently have no resource limits.
Add resource limits to the Deployments. For now I have to patch faas-netes manually.
https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes
docker version
(e.g. Docker 17.0.05):

Update the README to include the Homebrew option.
N/A not a code change
The current steps have the user installing manually when there is also a package available.
MacOS
Right now the entire setup runs under the default namespace, regardless of whether the YAML files specify a namespace key or not. It would be useful to give the user (cluster) the ability to specify an arbitrary namespace to run under (e.g. openfaas, open-faas, etc.).
Helm allows us to specify namespaces:
For OpenFaaS system components: openfaas
For OpenFaaS functions: openfaas-fn
Let's leverage the changes we made for helm in our default set of YAML files, including the mounted configs for Prometheus/Alertmanager.
NodePorts are used in the README as a short-cut, but an ingress controller would be better.
Proposal: enable private Hub repos via Kubernetes secrets
At deploy time we should be able to specify an additional secret name so that private Docker Hub images can be pulled.
Current behaviour supported in Swarm provider, but unsupported in this provider.
Use public images
Can work-around this with a private registry hosted in the cluster.
Create secrets ahead of time with kubectl.
Add metadata when deploying via the CLI to refer to the desired secret name
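Creating the registry credential ahead of time would look like this (the secret name and placeholder values are examples):

```sh
# Create a Docker-registry secret that the function deployment can
# later reference as an image pull secret
kubectl create secret docker-registry dockerhub-key \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email>
```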