aylei / kubectl-debug
This repository is no longer maintained, please check out https://github.com/JamesTGrant/kubectl-debug.
License: Apache License 2.0
Unable to open socket file: target process not responding or HotSpot VM not loaded
This will output the version of kubectl-debug
Hi, I get a connection timeout when trying to use this tool. It's managed Kubernetes on DigitalOcean.
[~]$ k debug auth-service-c7c55cc59-jxsts
error execute remote, error sending request: Post http://10.136.228.161:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2Fad6c55366505d3816dc2d7c274f15738c7995483807d404755ea96188249e2fa&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.136.228.161:10027: connect: connection timed out
error: error sending request: Post http://10.136.228.161:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2Fad6c55366505d3816dc2d7c274f15738c7995483807d404755ea96188249e2fa&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.136.228.161:10027: connect: connection timed out
The agents are installed:
$ k get pod
NAME READY STATUS RESTARTS AGE
auth-service-c7c55cc59-jxsts 1/1 Running 1 16h
debug-agent-jmh56 1/1 Running 0 5m58s
debug-agent-kscnq 1/1 Running 0 5m58s
debug-agent-zsn6g 1/1 Running 0 5m58s
Kubernetes version 1.13.5, kubectl 1.15.2
Awesome project!
I've followed the install instructions but it always complains about the config not being there.
$ kubectl debug mypod
2019/03/13 08:45:32 error loading file open /Users/jacob/.kube/debug-config: no such file or directory
Would be nice for this to be created automatically.
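Until it is created automatically, a minimal workaround is to create the file by hand, assuming kubectl-debug accepts an empty config file:

```shell
# Create an empty debug-config so the plugin stops complaining about a missing file
mkdir -p "$HOME/.kube"
touch "$HOME/.kube/debug-config"
```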
In my cluster, the nicolaka image is loaded from my private registry, which requires a secret to pull images, but I can't find any docs about setting it up.
Thanks.
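For reference, a registry pull secret is created with standard kubectl (registry URL and names below are placeholders); how to make the debug-agent actually use it is exactly what the docs are missing:

```shell
# Standard way to create a docker-registry pull secret (placeholder values)
kubectl create secret docker-registry my-registry-cred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>
```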
Like oc debug, we could provide an option to copy the pod being debugged. This would be helpful if the target pod has exited or is stuck in CrashLoopBackOff.
man page of oc debug: https://www.mankier.com/1/oc-debug
I tried to use kubectl-debug in Kubernetes on macOS Docker Desktop. Its kubelet runs inside a hyperkit VM and has an IP that is inaccessible from the host machine where kubectl is running. So kubectl-debug fails with "error execute remote, error sending request: Post http://192.168.65.3:10027/api/v1/debug.... : dial tcp 192.168.65.3:10027: connect: operation timed out".
I suppose it also won't work in production environments where nodes don't have an external IP, or where everything except Kubernetes-internal traffic is blocked at the firewall.
One possible solution is to connect to the agent via kubectl port-forward when a command-line option is given. Alternatively, allow the agent endpoint URL to be specified either directly on the command line or in debug-config (where endpoint overrides could be specified as a map of nodename => endpoint-uri).
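Until such an option exists, forwarding the agent port manually and pointing the client at localhost is one possible manual workaround (a sketch; the pod name is a placeholder):

```shell
# Forward local port 10027 to the debug-agent pod running on the target node
# (replace debug-agent-xxxxx with the actual pod name from `kubectl get pod`)
kubectl -n default port-forward pod/debug-agent-xxxxx 10027:10027
```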
Bash completion would be really nice to have 😁
Maybe we just need some slight additions to the kubectl bash completion script.
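A minimal sketch of what such an addition could look like for the standalone binary, completing pod names in the current namespace (illustrative only, not part of the project):

```shell
# Complete pod names for the standalone kubectl-debug binary (illustrative sketch)
_kubectl_debug_pods() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # List pod names via kubectl and strip the "pod/" prefix
    COMPREPLY=( $(compgen -W "$(kubectl get pods -o name 2>/dev/null | sed 's|^pod/||')" -- "$cur") )
}
complete -F _kubectl_debug_pods kubectl-debug
```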
Kubernetes 1.16 no longer supports:
apiVersion: extensions/v1beta1
kind: DaemonSet
Please update the manifest to be compatible.
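In Kubernetes 1.16 the `extensions/v1beta1` API for DaemonSet was removed, so the manifest needs the `apps/v1` header; a minimal sketch (the labels are illustrative, and note that `spec.selector` is mandatory in `apps/v1`):

```yaml
# apps/v1 replacement header -- spec.selector is required in apps/v1
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug-agent
spec:
  selector:
    matchLabels:
      app: debug-agent
  template:
    metadata:
      labels:
        app: debug-agent
```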
Hi, when I was trying to run kubectl-debug
for a running pod in agent mode, I received the following error:
kubectl-debug POD -n NAMESPACE
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
error execute remote, Internal error occurred: error attaching to container: Error response from daemon: No such container: 89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c
error: Internal error occurred: error attaching to container: Error response from daemon: No such container: 89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c
But on the host I can see the running container with the same container ID "89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c".
kubectl version: v1.15.1
k8s version: v1.15.0
kubectl-debug --version
debug version v0.0.0-master+$Format:%h$
Can you please help me figure out how to solve it?
I have a scenario that I'm using Azure MSI and aad-pod-identity to assign identities to pods (in order to retrieve credentials for several azure services). The way this integration works is by assigning a label to the pods.
When I use the fork feature it doesn't copy any of the original labels and it causes the pod to fail before the point I need to debug.
As I see it, fork should copy the labels of the original pod.
I got this error when trying this against a GKE cluster:
2018/12/24 17:46:28 error loading file open /Users/kevin/.kube/debug-config: no such file or directory
No Auth Provider found for name "gcp"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1b459b8]
goroutine 1 [running]:
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc4202621c0, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:193 +0x48
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc420295180, 0xc420322bd0, 0x1, 0x3)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:94 +0x134
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).execute(0xc420295180, 0xc4200d60d0, 0x3, 0x3, 0xc420295180, 0xc4200d60d0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420295180, 0x1e5edf0, 0x1bed6c0, 0x1e5af50)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420295180, 0xc4200dc000, 0x1e64980)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:16 +0x112
I've seen this in my own projects before. It'll probably be fixed by importing the auth plugins: kubernetes/client-go#242 (comment).
Follow-up to #78:
I need to use strace for debugging. How can I set SYS_PTRACE for the debug container? Like this:
securityContext:
  capabilities:
    add:
    - SYS_PTRACE # <-- the privilege
Thanks!
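For reference, in a plain pod spec the capability sits on the container's securityContext like this (whether kubectl-debug exposes a knob for it is the open question; the names below are illustrative):

```yaml
# Plain pod spec showing where the capability goes (illustrative names)
apiVersion: v1
kind: Pod
metadata:
  name: strace-demo
spec:
  containers:
  - name: debug
    image: nicolaka/netshoot:latest
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE   # allows ptrace-based tools such as strace
```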
Start deleting agent pod debug-pod-585fcf9d59-njmwk
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x10c9a57]
goroutine 1 [running]:
fmt.Fprintf(0x0, 0x0, 0x20b845f, 0x4e, 0xc0007157f0, 0x2, 0x2, 0x39, 0x0, 0x0)
/usr/local/Cellar/go/1.12.4/libexec/src/fmt/print.go:200 +0x77
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2.1()
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:472 +0x2c9
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close.func1()
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:65 +0x46
sync.(*Once).Do(0xc0003ac080, 0xc000715888)
/usr/local/Cellar/go/1.12.4/libexec/src/sync/once.go:44 +0xb3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close(0xc0003ac060)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:63 +0x54
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0003ac060, 0xc000758c40, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0x113
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2(0xc000000008, 0x20fac30)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:475 +0xf3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc00076e300, 0xc00076e2d0, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0xff
github.com/aylei/kubectl-debug/pkg/util.TTY.Safe(0x2244600, 0xc00000e010, 0x2244620, 0xc0000bc000, 0x1, 0x0, 0xc00015a2d0, 0xc00076e2d0, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/util/term.go:110 +0x189
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc000182c40, 0x0, 0x20fac28)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:478 +0x8ea
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc00037a280, 0xc00015a190, 0x2, 0x5)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:150 +0xe7
github.com/spf13/cobra.(*Command).execute(0xc00037a280, 0xc0000be0d0, 0x5, 0x5, 0xc00037a280, 0xc0000be0d0)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0xc00037a280, 0xc00000e010, 0x2244620, 0xc0000bc000)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x110
When I was trying to run the dep init command in the kubectl-debug project, I received the following error:
init failed: unable to solve the dependency graph: Solving failure: package github.com/docker/docker/api/types/image does not exist within project github.com/docker/docker
Can you please help me figure out how to solve it?
MESSAGE:error: error sending request: Post http://172.20.:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F418d59de3a343e0bef223c6ee5c9c93712ed79bf01ae64324bbd15192f0fe17d&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 172.20.:10027: connect: connection refused
k8s-version:v1.14.3
Please see
kubernetes/enhancements#277 (comment)
https://github.com/verb/kubectl-debug
Could you comment on which way is better: using your tool, or waiting until Ephemeral Containers are supported?
Could you merge your efforts with the other developers?
Error report:
Error: failed to start container "coredns": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
Excuse me, can kubectl-debug support host-network mode?
Hello!
chmod +x ./kubectl-debug
mv kubectl-debug /usr/local/bin/
kubectl debug
should work from scratch, as you described? Or should the user also add a bash alias for the kubectl-debug binary? I've tested your plugin on a 1.10 cluster and client, and it's completely okay, but only when invoking kubectl-debug directly.
Hi aylei, this PR aims to fix some misleading metrics for the target pod when using tools like free and top, which rely on procfs isolation. Those crucial numbers become correct by running a FUSE filesystem (lxcfs) in our debug-agent.
Once the agent pod has started, in either agentless or DaemonSet mode, we run the lxcfs process on the target node accordingly. Then, when starting the debug container with lxcfs mode enabled ('isLxcfsEnabled: true' in ~/.kube/debug-config), the target container's procfs is corrected immediately.
I've tested the cases below:
kubectl-debug --agentless=false --port-forward=false debugtest
set container procfs correct true ..
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
container created, open tty...
[1] 🐳 →
In the target container:
root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 free -h
total used free shared buff/cache available
Mem: 1.0G 256K 1.0G 0B 0B 1.0G
Swap: 0B 0B 0B
root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 top
top - 15:27:55 up 4:01, 0 users, load average: 0.00, 0.00, 0.00
Tasks: 14 total, 1 running, 13 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1048576 total, 1048320 free, 256 used, 0 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 1048320 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
...
root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 uptime
15:28:14 up 4:01, 0 users, load average: 0.00, 0.00, 0.00
[root@iZuf6f9dx8ur1ouh5sb18gZ ~]$ kubectl-debug --agentless=true --port-forward=false debugtest
Agent Pod info: [Name:debug-agent-pod-5bcf4426-0459-11ea-b12d-00163e06d596, Namespace:default, Image:registry.cn-hangzhou.aliyuncs.com/huya_zhangyi/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-5bcf4426-0459-11ea-b12d-00163e06d596 to run...
set container procfs correct true ..
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
container created, open tty...
root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 uptime
16:02:18 up 4:35, 0 users, load average: 0.20, 0.14, 0.10
root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 top
top - 16:03:03 up 4:36, 0 users, load average: 0.17, 0.14, 0.10
Tasks: 14 total, 1 running, 13 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 1048576 total, 1048320 free, 256 used, 0 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 1048320 avail Mem
Besides, to improve the availability of this feature (supporting remounting the lxcfs filesystem or hot-updating the lxcfs process), we recommend that all pods mount a parent directory of lxcfs via a mount point:
apiVersion: v1
kind: Pod
metadata:
  name: targetcontainer
spec:
  restartPolicy: Always
  containers:
  - name: nginx
    image: nginx:1.12.2
    stdin: true
    tty: true
    resources:
      limits:
        cpu: "2"
        memory: "1Gi"
      requests:
        cpu: "2"
        memory: "1Gi"
    volumeMounts:
    - name: lxcfs
      mountPath: /var/lib/lxc
      mountPropagation: HostToContainer
  volumes:
  - name: lxcfs
    hostPath:
      path: /var/lib/lxc
      type: DirectoryOrCreate
kubectl-debug traefik-ingress-controller-v2-9vsvk -n kube-system
error execute remote, error sending request: Post http://192.168.0.183:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F506c3ea81af0d7887a8c497072439225421392a1730d4fd720a6bb640567967c&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 192.168.0.183:10027: connect: connection refused
error: error sending request: Post http://192.168.0.183:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F506c3ea81af0d7887a8c497072439225421392a1730d4fd720a6bb640567967c&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 192.168.0.183:10027: connect: connection refused
By running "kubectl exec -ti", I get the expected output, as shown below; using "kubectl-debug" does not produce the desired result. Any idea?
# kubectl exec -ti tkservice-5ffbb64854-6k2h6 -- ls -hl
total 30M
-rw-r--r-- 1 root root 263 Oct 31 09:59 Dockerfile
drwxr-xr-x 7 root root 4.0K Oct 31 09:59 config
-rw-r--r-- 1 root root 188 Oct 31 09:59 main.yml
# kubectl-debug tkservice-5ffbb64854-6k2h6 -- ls -hl
Agent Pod info: [Name:debug-agent-pod-146dc5f3-fc7d-11e9-a689-00163e03e155, Namespace:default, Image:debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-146dc5f3-fc7d-11e9-a689-00163e03e155 to run...
pod tkservice-5ffbb64854-6k2h6 PodIP 10.81.133.178, agentPodIP 172.xxx.xxx.211
wait for forward port to debug agent ready...
Forwarding from 127.0.0.1:10027 -> 10027
Handling connection for 10027
pulling image netshoot...
latest: Pulling from paas-dev/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for netshoot:latest
starting debug container...
container created, open tty...
Start deleting agent pod tkservice-5ffbb64854-6k2h6
end port-forward...
Network setup: the remote cluster is accessed through the apiserver interface.
Configuration:
# port on the host that debug-agent is mapped to
# default 10027
agentPort: 10027
# whether agentless mode is enabled
# default false
agentless: false
# namespace of the agentPod, used in agentless mode
# default: default
agentPodNamespace: kube-system
# name prefix of the agentPod (the suffix is the target host name), used in agentless mode
# default debug-agent-pod
agentPodNamePrefix: debug-agent-pod
# image of the agentPod, used in agentless mode
# default aylei/debug-agent:latest
agentImage: aylei/debug-agent:latest
# name of the debug-agent DaemonSet, used in port-forward mode
# default 'debug-agent'
debugAgentDaemonset: debug-agent
# namespace of the debug-agent DaemonSet, used in port-forward mode
# default 'default'
debugAgentNamespace: kube-system
# whether port-forward mode is enabled
# default false
portForward: true
# image of the debug container
# default as shown
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'
Result:
➜ ~ kubectl debug -n runtime gateway-controller-7989c46dff-msdzh bash
error parsing configuration file: yaml: unmarshal errors:
line 7: field agentless not found in type plugin.Config
line 10: field agentPodNamespace not found in type plugin.Config
line 13: field agentPodNamePrefix not found in type plugin.Config
line 16: field agentImage not found in type plugin.Config
Then it hangs.
The command from the README doesn't work either:
➜ ~ kubectl debug -n runtime gateway-controller-7989c46dff-msdzh --port-forward --daemonset-ns=kube-system --daemonset-name=debug-agent
Error: unknown flag: --port-forward
(shrug)
I am having an issue where the agent pod can not be created because it does not specify memory limits/requests.
For example:
kubectl debug <pod_name> --agentless --agent-pod-namespace=<namespace>
Output:
Agent Pod info: [Name:debug-agent-pod-<pod_name>, Namespace:<namespace>, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027] Error from server (Forbidden): pods "debug-agent-pod-<pod_name>" is forbidden: [maximum memory usage per Pod is 1Ti. No limit is specified., memory max limit to request ratio per Pod is 2, but no request is specified or request is 0.]
Our kubernetes cluster enforces restrictions on CPU and memory requests/limits. Any pod without proper limits configured will not be allowed to boot.
It would be nice to allow the CPU and memory parameters to be configured in the config file so we could use the agentless setup.
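To illustrate the request, the config file could grow keys along these lines (hypothetical key names, not an existing feature):

```yaml
# Hypothetical additions to ~/.kube/debug-config -- a proposal, not implemented
agentCpuRequests: "100m"
agentCpuLimits: "500m"
agentMemoryRequests: "64Mi"
agentMemoryLimits: "512Mi"
```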
Now that #31 introduced the agentless mode, we should document this feature accordingly.
Command: kubectl-debug --namespace test test-pod --port-forward --agentless
Output:
Agent Pod info: [Name:debug-agent-pod-36a6c112-ad3e-11e9-bd4c-acde48001122, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-36a6c112-ad3e-11e9-bd4c-acde48001122 to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion
[root@sz-5-centos163 src]# kubectl debug finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl -n finance --fork
error parsing configuration file: yaml: line 36: found unexpected end of streamWaiting for pod finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl-067b70f9-9197-11e9-8494-00163e340364-debug to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion
But the debug-agent pod is already Running.
What is going on here?
Hi there!
It's a feature request. It would be great to have the possibility to connect to and debug crashing pods (I mean CrashLoopBackOff). We see a "bad" pod, there are no meaningful logs in the pod describe output, and we'd like to connect into it and do some useful things.
What do you think about such a feature? Could it be done?
After downloading the binary and running kubectl debug demo, I got an exception.
It was run on the master node, and the pod uses the nginx image.
Is there a service I haven't enabled?
Exception output:
error execute remote, error sending request: Post http://10.211.55.112:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F6a6200df293ff1270e809ca14f384a2de72b8a1b02eb262206b5916802a71609&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.211.55.112:10027: connect: connection refused error: error sending request: Post http://10.211.55.112:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F6a6200df293ff1270e809ca14f384a2de72b8a1b02eb262206b5916802a71609&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.211.55.112:10027: connect: connection refused
Using kubectl debug pod_name -n ns-* --image harbor..com/base/bianque:v1.0.1
pulling image harbor..com/base/bianque:v1.0.1...
message: error execute remote, Internal error occurred: error attaching to container: Error response from daemon: pull access denied for harbor.*.com/base/bianque, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I have already added the imagePullSecret in the agent-ds file; the server I'm debugging from is also logged in to this Harbor registry, and pulling the image manually works fine.
kubectl-debug
only supports Docker as the container runtime for now. We should use the CRI for container operations.
➜ kubectl-debug git:(master) ✗ kubectl debug
error pod not specified
pod name must be specified
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x1b9a7c2]
goroutine 1 [running]:
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc0002481c0, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:201 +0x62
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc00034f400, 0x2bad698, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:100 +0x134
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).execute(0xc00034f400, 0xc0000b8190, 0x0, 0x0, 0xc00034f400, 0xc0000b8190)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00034f400, 0x2080880, 0x1d54140, 0x207c238)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).Execute(0xc00034f400, 0xc0000ce000, 0x2088300)
/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x10e
Start deleting agent pod tidb-pd-2
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x10c9a57]
goroutine 1 [running]:
fmt.Fprintf(0x0, 0x0, 0x20b845f, 0x4e, 0xc0008297f0, 0x2, 0x2, 0x25, 0x0, 0x0)
/usr/local/Cellar/go/1.12.4/libexec/src/fmt/print.go:200 +0x77
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2.1()
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:472 +0x2c9
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close.func1()
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:65 +0x46
sync.(*Once).Do(0xc0000c74c0, 0xc000829888)
/usr/local/Cellar/go/1.12.4/libexec/src/sync/once.go:44 +0xb3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close(0xc0000c74a0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:63 +0x54
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0000c74a0, 0xc0000ac180, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0x113
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2(0xc000000008, 0x20fac30)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:475 +0xf3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0000c7470, 0xc0000c7440, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0xff
github.com/aylei/kubectl-debug/pkg/util.TTY.Safe(0x2244600, 0xc0000b4000, 0x2244620, 0xc0000b8000, 0x1, 0x0, 0xc00013e780, 0xc0000c7440, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/util/term.go:110 +0x189
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc000174700, 0x0, 0x20fac28)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:478 +0x8ea
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc000551180, 0xc0000bb2c0, 0x1, 0x6)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:150 +0xe7
github.com/spf13/cobra.(*Command).execute(0xc000551180, 0xc00003a080, 0x6, 0x6, 0xc000551180, 0xc00003a080)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0xc000551180, 0xc0000b4000, 0x2244620, 0xc0000b8000)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x110
When I try 'kubectl exec -it debug-pod /bin/sh', I get this message:
OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: "/bin/sh": stat /bin/sh: no such file or directory": unknown
command terminated with exit code 126
Can't get into the pod to copy the tcpdump capture file out.
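One avenue that avoids needing a shell in the container is kubectl cp, though it relies on tar being present inside the container; pod name and paths below are placeholders:

```shell
# Copy a capture file out of the pod; requires `tar` inside the container
# (namespace/pod and the file paths are placeholder values)
kubectl cp default/debug-pod:/tmp/capture.pcap ./capture.pcap
```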
Hello there,
We are trying to debug a pod which fails to start up correctly under some conditions, only in production. But when launching the debug command, it fails because the application's container is not ready.
kubectl debug app-XXX -n production --agentless
Agent Pod info: [Name:debug-agent-pod-f0fb8c3c-d083-11e9-844f-9cb6d0eeb5ef, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-f0fb8c3c-d083-11e9-844f-9cb6d0eeb5ef to run...
error: container [app-XXX] not ready
Is this condition required in order to debug a pod?
Best,
Matthieu
There isn't a 0.1.1 version of the debug-agent image in docker hub.
While investigating #52 I saw this:
$ kubectl -n default get pod
No resources found.
$ kubectl debug harbor-harbor-portal-68df4cdb58-9xx4l --agentless --port-forward
Agent Pod info: [Name:debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac to run...
error: container [portal] not ready
$ kubectl -n default get pod
NAME READY STATUS RESTARTS AGE
debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac 1/1 Running 0 30s
$
That should've been deleted by now, right?
kubectl-debug
connects to the node agent directly, which is not secure. We should provide an extension apiserver to do centralized authorization and authentication. The extension apiserver should be an opt-in component; we can always fall back to the simplest setup, just like now.
The extension apiserver would also proxy the debug connection like the kube-apiserver does, which addresses the inaccessible hostIP issue in #2.
Supporting init containers would be great for troubleshooting.
I have some troubleshooting to do on mongodb-replicaset, which gets stuck in its init containers. So I tried this beautiful tool, but:
[test@test ~]$ kubectl debug --namespace test test-mongodb-replicaset-1
container mongodb-replicaset id not ready
[test@test ~]$ kubectl debug --namespace test test-mongodb-replicaset-1 -c bootstrap
cannot find specified container bootstrap
kubectl can see this bootstrap container running (still running, and that is the problem):
[test@test ~]$ kubectl logs test-mongodb-replicaset-1 -n test -c bootstrap -f
2019/02/21 11:05:13 Peer list updated
... waiting for new logs that will never come...
Extract from describe:
kubectl -n test describe pod test-mongodb-replicaset-1
Name: test-mongodb-replicaset-1
Namespace: test
Priority: 0
PriorityClassName: <none>
Node: xxx
Start Time: Thu, 21 Feb 2019 11:05:09 +0000
Labels: app=mongodb-replicaset
controller-revision-hash=test-mongodb-replicaset-647db4c6c4
release=test
statefulset.kubernetes.io/pod-name=test-mongodb-replicaset-1
Annotations: <none>
Status: Pending
IP: 10.233.88.14
Controlled By: StatefulSet/test-mongodb-replicaset
Init Containers:
  copy-config:
    Container ID:  docker://f6685231928ba6a843dd348f6cdd602f178aa5e132194dc8aaa44b8058d02c21
    ...
    State:         Terminated
  install:
    Container ID:  docker://618fcfe64504376ece61dfc481641921f0bf1226a6b1394503f8622e0fed0cd9
    State:         Terminated
    ...
  bootstrap:
    Container ID:  docker://34e529ecd86058e1c9e45a1a50f51d610ec65181c0bdfab17b46010c47083142
    Image:         mongo:3.6
    Image ID:      docker-pullable://mongo@sha256:89822fa6161c2ed77e73fb717f189e6ce5a95cb752fe2021508daeee366f9b69
    Port:          <none>
    Host Port:     <none>
    Command:
      /work-dir/peer-finder
    Args:
      -on-start=/init/on-start.sh
      -service=test-mongodb-replicaset
    State:         Running
      Started:     Thu, 21 Feb 2019 11:05:13 +0000
    Ready:         False
    Restart Count: 0
    Environment:
      POD_NAMESPACE: test (v1:metadata.namespace)
      REPLICA_SET:   rs0
    Mounts:
      /data/configdb from configdir (rw)
      /data/db from datadir (rw)
      /init from init (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f6p6g (ro)
      /work-dir from workdir (rw)
Containers:
  mongodb-replicaset:
    Container ID:
    Image:         mongo:3.6
    Image ID:
    ...
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
...
Events: <none>
kubectl debug [pod_name]
pulling image %s...
ImagePull asks the Docker host to pull an image from a remote registry.
It does not support using an image from the local host.
Hi there,
We developed a web-based kubectl tool that enables you to manage Kubernetes credentials and run kubectl commands in a web browser.
https://github.com/webkubectl/webkubectl
Hopefully it's useful for you guys.
Not willing to do, but necessary😅