
kube-slack

kube-slack is a monitoring service for Kubernetes. When a pod fails, it publishes a message to a Slack channel.

Screenshot

Installation

A Helm chart is available.

  1. Create an incoming webhook:
    1. In the Slack interface, click on the gears button (Channel Settings) near the search box.
    2. Select "Add an app or integration"
    3. Search for "Incoming WebHooks"
    4. Click on "Add configuration"
    5. Select the channel you want the bot to post to and submit.
    6. You can customize the icon and name if you want.
    7. Take note of the "Webhook URL". This will be something like https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
  2. (optional) If your Kubernetes cluster uses RBAC, apply the following manifests as well:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-slack
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-slack
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-slack
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-slack
subjects:
  - kind: ServiceAccount
    name: kube-slack
    namespace: kube-system

Load the following Deployment into your Kubernetes cluster. Make sure you set SLACK_URL to the webhook URL, and uncomment serviceAccountName if you use RBAC.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-slack
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: kube-slack
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      name: kube-slack
      labels:
        app: kube-slack
    spec:
      # Uncomment serviceAccountName if you use RBAC.
      # serviceAccountName: kube-slack
      containers:
      - name: kube-slack
        image: willwill/kube-slack:v4.2.0
        env:
        - name: SLACK_URL
          value: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
        resources:
          requests:
            memory: 30M
            cpu: 5m
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
  3. To test, try creating a failing pod. The bot should post to the channel after about 15 seconds with the status ErrImagePull. Example of a failing pod:
apiVersion: v1
kind: Pod
metadata:
  name: kube-slack-test
spec:
  containers:
  - image: willwill/inexisting
    name: kube-slack-test
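For reference, the message the bot sends is an ordinary Slack incoming-webhook payload. A minimal sketch of the JSON body such a webhook accepts (illustrative only, not kube-slack's actual code; the channel/username overrides correspond to the SLACK_CHANNEL and SLACK_USERNAME options described below):

```javascript
// Illustrative only: builds the JSON body a Slack incoming webhook accepts.
// 'channel' and 'username' are the per-message overrides that kube-slack's
// SLACK_CHANNEL / SLACK_USERNAME settings map to.
function buildSlackPayload(text, { channel, username } = {}) {
  const payload = { text };
  if (channel) payload.channel = channel;
  if (username) payload.username = username;
  return JSON.stringify(payload);
}

console.log(buildSlackPayload('Container entered status *ErrImagePull*', { channel: '#monitoring' }));
// {"text":"Container entered status *ErrImagePull*","channel":"#monitoring"}
```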

Additionally, the following environment variables can be used:

  • TICK_RATE: How often to poll, in milliseconds. (Defaults to 15000, i.e. 15s)
  • FLOOD_EXPIRE: Repeat a notification only after this many milliseconds have passed since the status returned to normal. (Defaults to 60000, i.e. 60s)
  • NOT_READY_MIN_TIME: Time to wait after a pod becomes not ready before notifying. (Defaults to 60000, i.e. 60s)
  • METRICS_CPU: Enable/disable metric alerting on CPU. (Default: true)
  • METRICS_MEMORY: Enable/disable metric alerting on memory. (Default: true)
  • METRICS_PERCENT: Percentage threshold for metric alerts. (Default: 80)
  • METRICS_REQUESTS: If no resource limit is defined, alert when pod utilization exceeds the resource request amount. (This may be very noisy; default: false)
  • KUBE_USE_KUBECONFIG: Read Kubernetes credentials from the active context in ~/.kube/config. (Default: off)
  • KUBE_USE_CLUSTER: Read Kubernetes credentials from the pod. (Default: on)
  • KUBE_NAMESPACES_ONLY: Monitor only the listed namespaces, specified either as a JSON array or as a comma-separated string (foo_namespace,bar_namespace).
  • SLACK_CHANNEL: Override the channel to send to.
  • SLACK_USERNAME: Override the username to send as.
  • SLACK_PROXY: URL of an HTTP proxy used to connect to Slack.
  • RECOVERY_ALERT: Set to false to disable alerts on pod recovery.
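For example, several of these variables can be set alongside SLACK_URL in the Deployment's env section (the values below are illustrative):

```yaml
env:
- name: SLACK_URL
  value: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
- name: TICK_RATE
  value: "30000"               # poll every 30s instead of the default 15s
- name: KUBE_NAMESPACES_ONLY
  value: "production,staging"  # comma-separated list; a JSON array also works
- name: SLACK_CHANNEL
  value: "#monitoring"
```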

Annotations

Pods can be marked with the following annotations:

  • kube-slack/ignore-pod: Ignore all errors from this pod.
  • kube-slack/slack-channel: Name of the Slack channel to notify (e.g. #monitoring).
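Both annotations go under the pod's metadata, for example (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: noisy-batch-job
  annotations:
    kube-slack/ignore-pod: "true"   # give the annotation a value, not an empty string
    # or, to route alerts for this pod elsewhere:
    # kube-slack/slack-channel: "#monitoring"
spec:
  containers:
  - name: job
    image: busybox
```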

License

MIT License

kube-slack's People

Contributors

arealmaas, ashmartian, blead, chribsen, dylannlaw, eddman, fhemberger, jstriebel, mvpgomes, omerozery, panj, phillipj, prabhu43, whs


kube-slack's Issues

Helm chart not up to date

Hi.

I see your Helm chart is not up to date; there is no way to pass all the environment variables. Do you plan to update it?

"Container entered status *Running*" in the wrong channel

Hi,
I've installed kube-slack via helm chart.

Basically I configured slackUrl with a webhook whose default channel receives the notifications. I then configured a deployment using the kube-slack/slack-channel annotation to push pod notifications to a different "new" channel.

Basically I receive the "Container entered status CrashLoopBackOff" messages in the right "new" channel, but the "Container entered status Running" messages in the default one. How is this possible?

I'm actually running v4.0.0 instead of v3.6.0 to check whether this problem is fixed, but I get the same behavior in both versions.

Thank you in advance.

kube-slack/ignore-pod annotation does not work if you set an empty string as value.

Hi,
I installed kube-slack via the Helm chart and tried to use the kube-slack/ignore-pod annotation. Sadly the pods were not ignored, because JavaScript evaluates an empty string as falsy:
if (annotations['kube-slack/ignore-pod'])

Would it be possible to either change this to
if (typeof annotations['kube-slack/ignore-pod'] !== 'undefined')

or update the documentation to specify that you have to provide a value?
Maybe I am the only one who got this wrong.
Thanks
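The pitfall described above comes from JavaScript truthiness and can be demonstrated in isolation (this mirrors the check, not the actual kube-slack source):

```javascript
// An empty-string annotation value is falsy, so a plain truthiness check
// treats the annotation as absent; an explicit 'undefined' check does not.
const annotations = { 'kube-slack/ignore-pod': '' };

const truthyCheck = Boolean(annotations['kube-slack/ignore-pod']);
const definedCheck = typeof annotations['kube-slack/ignore-pod'] !== 'undefined';

console.log(truthyCheck, definedCheck); // false true
```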

Unhandled promise rejection

Release 3.2.1 is now working fine, thanks @chribsen,
but you should know that it keeps throwing the following messages:

(node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): ReferenceError: pod is not defined
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 3): ReferenceError: pod is not defined
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 4): ReferenceError: pod is not defined
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 5): ReferenceError: pod is not defined

ContainersNotReady delay

Hi,

I often get this kind of message: ContainersNotReady, even when I set a really long initialDelaySeconds. What is the recommended way to avoid this? In addition, I do not see any strange behavior while the pods are booting.

Thanks

Drain support

Hi,

While draining a node, some pods will never stop (for example kube-proxy, calico, etc.). The problem is that when you reboot a node that has been drained but has kept some pods, I receive a lot of alerts (one per pod saying it is not ready). It would be great to filter only on non-drained nodes to avoid such behavior.

What do you think?

Thanks

Lots of errors when upgrading to 4.0.1

(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 18)
(node:1) UnhandledPromiseRejectionWarning: Error: Failed to get /openapi/v2 and /swagger.json: Unauthorized
at _getSpec.catch.then.catch.err (/app/node_modules/kubernetes-client/lib/swagger-client.js:56:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 19)
(node:1) UnhandledPromiseRejectionWarning: Error: Failed to get /openapi/v2 and /swagger.json: Unauthorized
at _getSpec.catch.then.catch.err (/app/node_modules/kubernetes-client/lib/swagger-client.js:56:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 20)
(node:1) UnhandledPromiseRejectionWarning: Error: Failed to get /openapi/v2 and /swagger.json: Unauthorized
at _getSpec.catch.then.catch.err (/app/node_modules/kubernetes-client/lib/swagger-client.js:56:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 21)
(node:1) UnhandledPromiseRejectionWarning: Error: Failed to get /openapi/v2 and /swagger.json: Unauthorized
at _getSpec.catch.then.catch.err (/app/node_modules/kubernetes-client/lib/swagger-client.js:56:15)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 22)

Configuration property "slack_username" is not defined

I checked the log of Kube-slack and I noticed this error in both K8s clusters that I run in Azure.

Cluster 1

 (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 268)
(node:1) UnhandledPromiseRejectionWarning: Error: Configuration property "slack_username" is not defined
    at Config.get (/app/node_modules/config/lib/config.js:203:11)
    at SlackNotifier.notify (/app/notify/slack.js:25:48)
    at PodLongNotReady.callback (/app/index.js:27:26)
    at PodLongNotReady.emit (events.js:189:13)
    at PodLongNotReady.check (/app/monitors/longnotready.js:65:18)
    at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 269)
(node:1) UnhandledPromiseRejectionWarning: Error: Configuration property "slack_username" is not defined
    at Config.get (/app/node_modules/config/lib/config.js:203:11)
    at SlackNotifier.notify (/app/notify/slack.js:25:48)
    at PodLongNotReady.callback (/app/index.js:27:26)
    at PodLongNotReady.emit (events.js:189:13)
    at PodLongNotReady.check (/app/monitors/longnotready.js:65:18)
    at process._tickCallback (internal/process/next_tick.js:68:7) 

Cluster 2

 (node:1) UnhandledPromiseRejectionWarning: Error: Configuration property "slack_username" is not defined
    at Config.get (/app/node_modules/config/lib/config.js:203:11)
    at SlackNotifier.notify (/app/notify/slack.js:25:48)
    at PodLongNotReady.callback (/app/index.js:27:26)
    at PodLongNotReady.emit (events.js:189:13)
    at PodLongNotReady.check (/app/monitors/longnotready.js:65:18)
    at process._tickCallback (internal/process/next_tick.js:68:7)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code. 

My configuration:
K8s 1.12.7 deployed with Kubespray
Kube-Slack v4.2.0 deployed with helm

The pods are not restarted and work fine but this is still an issue.

Directed Slack Notifications

It would be great to be able to direct notifications to the last user that touched the deployment or pod. This may require a mapping of slack alias to kubernetes authentication i.e github_authn but it would be way less noisy and a better concentration of notifications to the right person.

Feature Request: Allow to configure the different username for slack notifications

A Slack incoming webhook is configured with a default username, which is used for any incoming notification. Slack also allows overriding that username when posting a notification to the webhook.

In kube-slack, this could be used to re-use the same incoming webhook for more than one instance of kube-slack, changing the username to identify which kube-slack instance a notification came from.

Create Docker tag

Nice work, but would you mind tagging the Docker image so that one doesn't have to depend on "latest"?

PV/PVC monitoring

Hello,
Have you considered including PV/PVC statuses? I believe this might be helpful.
Thanks

Release 3.2.0 (with feature: kube-slack/ignore-pod: "true") not working

Release 3.2.0 (with the feature kube-slack/ignore-pod: "true") is not working properly.

Slack notifications are not being sent for pods when trying to apply:

  1. A Deployment in any namespace (which creates a ReplicaSet that eventually creates the pods)
  2. A Pod in any namespace besides default

It only sends Slack notifications when applying a pod directly (like in the example) and in the default namespace.

NOTE: I tried deploying kube-slack in the kube-system, default, and a custom namespace.

kube-slack log:

(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 3): TypeError: Cannot read property 'kube-slack/ignore-pod' of undefined ...

Release 3.1.1 is working fine

feature request: to watch only pods with set pod annotation

I would like to see an option to watch only pods with a particular annotation.
My use case: we run a set of different pods in one namespace, and I need to watch only some of them; I cannot add the ignore-pod annotation to all the other pods.

Question about ContainersNotReady pod status

Why doesn't the blacklistReason array in the PodStatus class constructor contain ContainersNotReady?
I get a lot of messages of that kind every time I apply a new deployment, even though the pods are eventually deployed successfully.
It is making my Slack channel very noisy and unreliable.

Support for ignoring specific pods using annotations

Although kube-slack is a pretty neat plugin, it currently doesn't support ignoring specific pods. An obvious way of solving this problem would be to ignore all pods that have the following annotation:

annotations:
    kube-slack/ignore-pod: "true"

This simple annotation would make kube-slack much more useful in clusters where you'd expect certain pods to fail predictably.

Getting cert error when pod starting

I get the following pod logs:

{"name":"kube-slack","hostname":"kube-slack-56d459fc44-fxq6g","pid":1,"level":30,"msg":"KUBE_NAMESPACE_ONLY not set. Watching pods in ALL namespaces.","time":"2018-10-17T09:21:20.864Z","v":0}
{"name":"kube-slack","hostname":"kube-slack-56d459fc44-fxq6g","pid":1,"level":30,"msg":"Kubernetes monitors started","time":"2018-10-17T09:21:20.890Z","v":0}
(node:1) UnhandledPromiseRejectionWarning: Error: unable to get issuer certificate
    at TLSSocket.onConnectSecure (_tls_wrap.js:1047:34)
    at TLSSocket.emit (events.js:182:13)
    at TLSSocket._finishInit (_tls_wrap.js:629:8)
(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:1) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

No Message during Deployment

Great Bot!
I have one issue though: during deployments I keep getting ContainersNotReady messages.
Is there a way to suppress these while a deployment is in progress?

Pod Security Policy compliant: runAsNonRoot

If you try to run willwill/kube-slack:v4.2.0 in a Kubernetes cluster with a Pod Security Policy, you get this error: container has runAsNonRoot and image has non-numeric user (kube-slack), cannot verify user is non-root
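A common workaround for this class of error is to set a numeric UID in the pod's securityContext, so the policy can verify the user is non-root without resolving the image's named user. A sketch (the UID 1000 is an assumption for illustration, not taken from the image):

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000   # numeric UID assumed; must match a non-root user in the image
```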

Posting to different channels per pod with newer Slack APIs

We'd like to be able to use annotations such as the existing kube-slack/slack-channel one to control which Slack channel events for particular pods post to. However, if my understanding is correct, Slack's current (and recently changed, I think?) recommended path to set up webhooks now restricts things such that a given webhook is only allowed to post to a single channel. Reading from https://api.slack.com/custom-integrations/incoming-webhooks:

The majority of your legacy code for sending messages using incoming Webhooks should continue to work within a Slack app without much modification; the only thing you can no longer do is customize the destination channel and author identity at runtime.

So, I think I can't do what I want using this code as it stands, because AFAICT the webhook URL is only specifiable globally and not configurable via a pod-level annotation. Is that correct? I'm happy to provide a patch to fix this but want to ensure my understanding is correct first. Thanks for providing this library!

Add RBAC to README.md

This is just an example that works for me.

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-slack
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-slack
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-slack
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-slack
subjects:
  - kind: ServiceAccount
    name: kube-slack
    namespace: kube-system

Sometimes the PostStartHookError message is "" even though the event has an error log

I'm using a lot of [container lifecycle hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks), and these output errors if something goes wrong; sometimes the slackbot does not print the message.

For example, here it printed the bash error in full because the file isn't executable (chmod +x):

Container entered status *PostStartHookError*
```Exec lifecycle hook ([/usr/bin/dumb-init -- /bin/bash -c sleep 10 && /etc/hooks.d/ValidateService.sh]) for Container "k8s-app" in Pod "k8s-app-6d9858ddb7-t2jqp_default(8e650045-641b-11e8-a16d-021a35178fd6)" failed - error: command '/usr/bin/dumb-init -- /bin/bash -c sleep 10 && /etc/hooks.d/ValidateService.sh' exited with 126: /bin/bash: /etc/hooks.d/ValidateService.sh: Permission denied
, message: "/bin/bash: /etc/hooks.d/ValidateService.sh: Permission denied\n"```
default/k8s-app-6d9858ddb7-t2jqp/k8s-app
Container entered status *PostStartHookError*
```Exec lifecycle hook ([/usr/bin/dumb-init -- /bin/bash -c sleep 10 && /etc/hooks.d/ValidateService.sh]) for Container "k8s-app" in Pod "k8s-app-6d9858ddb7-t2jqp_default(8e650045-641b-11e8-a16d-021a35178fd6)" failed - error: command '/usr/bin/dumb-init -- /bin/bash -c sleep 10 && /etc/hooks.d/ValidateService.sh' exited with 126: /bin/bash: /etc/hooks.d/ValidateService.sh: Permission denied
, message: "/bin/bash: /etc/hooks.d/ValidateService.sh: Permission denied\n"```

whereas when the script was executed and output an error, the Slack message is "":

Container entered status *PostStartHookError*
```Exec lifecycle hook ([/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh]) for Container "k8s-app" in Pod "k8s-app-66f89b584f-5kstg_default(be87b3f7-641b-11e8-a16d-021a35178fd6)" failed - error: command '/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh' exited with 137: , message: ""```
default/k8s-app-66f89b584f-5kstg/k8s-app
Container entered status *PostStartHookError*
```Exec lifecycle hook ([/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh]) for Container "k8s-app" in Pod "k8s-app-66f89b584f-5kstg_default(be87b3f7-641b-11e8-a16d-021a35178fd6)" failed - error: command '/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh' exited with 137: , message: ""```

The event looks perfectly fine in the k8s API when I get it with kubectl get events:

49m         49m          1         k8s-app-66f89b584f-jhppk.153376985d155a0b   Pod          spec.containers{k8s-app}   Warning   FailedPostStartHook     kubelet, ip-10-0-116-153.eu-west-1.compute.internal   Exec lifecycle hook ([/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh]) for Container "k8s-app" in Pod "k8s-app-66f89b584f-jhppk_default(58e883ed-6420-11e8-a16d-021a35178fd6)" failed - error: command '/usr/bin/dumb-init -- /bin/bash -c sleep 10 && bash /etc/hooks.d/ValidateService.sh' exited with 1: , message: "{\n\"cmdline\": [\"/opt/k8s-app/k8s-app\"nTest case FAILED!\nExpected:\n{\n  \"labels\": [\n    {\n      \"foo\": 7.9950147\n    }\n  ],\n  \"tokens\": null\n}\nReceived:\nFAIL ME HARD\n"

Document how to setup the Slack integration

I've created a Slack bot and invited it to a channel, and tried to configure SLACK_URL to the best of my knowledge, but nothing seems to happen. The kube-slack pod doesn't print anything to the logs that indicates an error (I'm using the tagged v2.0.0 version), so it's hard to troubleshoot what's going wrong. It would be nice if you could provide some additional docs describing in more depth how to create and configure the Slack integration.

Logs are not forwarded by fluent-bit by default

Hi, I've installed kube-slack on my k8s cluster.

Helm version

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Installed charts
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
fluent-bit 6 Wed Jan 30 15:06:16 2019 DEPLOYED fluent-bit-1.5.0 1.0.3 logging
kube-slack 6 Wed Jan 16 15:56:42 2019 DEPLOYED kube-slack-0.4.0 v3.6.0 infrastructure

Basically kube-slack produces logs like this:
{"name":"kube-slack","hostname":"kube-slack-5fc4b6c55c-chc42","pid":1,"level":30,"msg":"Slack message sent","time":"2019-02-04T16:39:55.107Z","v":0}

This JSON log has a "time" field inside. When Docker saves it under /var/log/containers, the wrapped log looks like this:
{"log":"{"name":"kube-slack","hostname":"kube-slack-7cf99d5dbd-ffpd7","pid":1,"level":30,"msg":"Slack message sent","time":"2019-02-05T13:24:26.193Z","v":0}\n","stream":"stdout","time":"2019-02-05T13:24:26.193357608Z"}

As you can see, this JSON is not valid because the key "time" appears twice, so Elasticsearch rejects the document forwarded by fluent-bit. Among other things, the "time" field inside the kube-slack log is not really needed, because Docker records the timestamp on its own with higher precision.

Support other logging URLs besides Kibana

It would be nice to be able to customize the logging URL for services other than Kibana. Maybe by adding a LOGGING_URL with placeholders for container and pod to be substituted at runtime?

Example:

let loggingUrl = '';
if (process.env.LOGGING_URL) {
  loggingUrl = process.env.LOGGING_URL
    .replace('%%pod%%', encodeURIComponent(item.pod.metadata.name))
    .replace('%%container%%', encodeURIComponent(item.name));

  loggingUrl = `(<${loggingUrl}|View logs>)`;
}
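Under this proposal, the substitution would behave as follows (the template URL and the pod/container names are made up for illustration):

```javascript
// Hypothetical LOGGING_URL template with %%pod%% / %%container%% placeholders.
const template = 'https://logs.example.com/app/%%pod%%/%%container%%';

const url = template
  .replace('%%pod%%', encodeURIComponent('kube-slack-abc'))
  .replace('%%container%%', encodeURIComponent('kube-slack'));

console.log(url); // https://logs.example.com/app/kube-slack-abc/kube-slack
```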

Proxy setting to reach slack webhooks.

Firstly, thank you so much for providing this capability; I am sure this is going to help monitor our k8s infrastructure. Below is my question about running inside a corporate network.

I am running inside a corporate network, so I tried setting http_proxy and https_proxy as environment variables. I also set https_proxy in .npmrc, but I still get the error below in the pod logs:

(node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 201)
(node:1) UnhandledPromiseRejectionWarning: Error: tunneling socket could not be established, cause=connect EINVAL 0.0.35.130:80 - Local (0.0.0.0:0)
at

My npm config on the pod looks like this:

app $ npm config ls
; cli configs
metrics-registry = "https://ok-repo.ok.com/nodejs/content/groups/npm/"
scope = ""
user-agent = "npm/5.6.0 node/v8.11.2 linux x64"

; project config /app/.npmrc
https-proxy = "proxy-appgw.ok.com:9090"
registry = "https://ci-repo.ok.com/nodejs/content/groups/npm/"

; userconfig /home/kube-slack/.npmrc
http_proxy = "http://proxy-appgw.ok.com:9090"
https_proxy = "http://proxy-appgw.ok.com:9090"
proxy = "http://proxy-appgw.ok.com:9090/"

; node bin location = /usr/local/bin/node
; cwd = /app
; HOME = /home/kube-slack
; "npm config ls -l" to show all defaults.
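Note that kube-slack reads its proxy from the SLACK_PROXY environment variable (listed in the README above), not from npm configuration; npm settings only affect package installation, not the running process. A deployment-side sketch (proxy URL taken from the config above for illustration):

```yaml
env:
- name: SLACK_PROXY
  value: http://proxy-appgw.ok.com:9090
```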
