Comments (13)
To add more information:
oc adm must-gather --keep --image=quay.io/jsaucier/must-gather-cnv:latest -- gather-cnv
namespace/openshift-must-gather-cr82g created
clusterrolebinding.rbac.authorization.k8s.io/must-gather-qf46z created
error: pod is not running: Failed
oc project openshift-must-gather-cr82g
oc logs -f must-gather-jrpll
Error from server (BadRequest): container "copy" in pod "must-gather-jrpll" is waiting to start: PodInitializing
oc logs -f must-gather-jrpll -c gather
2019/04/29 10:56:27 Gathering data for ns/kubevirt...
2019/04/29 10:56:27 Collecting resources for namespace "kubevirt"...
2019/04/29 10:56:27 Gathering pod data for namespace "kubevirt"...
2019/04/29 10:56:27 Gathering data for pod "virt-api-f8d6cd97-5l68b"
2019/04/29 10:56:28 Unable to gather previous container logs: previous terminated container "virt-api" in pod "virt-api-f8d6cd97-5l68b" not found
2019/04/29 10:56:28 Skipping /version info gathering for pod "virt-api-f8d6cd97-5l68b". Endpoint not found...
2019/04/29 10:56:28 Gathering data for pod "virt-api-f8d6cd97-6h4kx"
2019/04/29 10:56:29 Unable to gather previous container logs: previous terminated container "virt-api" in pod "virt-api-f8d6cd97-6h4kx" not found
2019/04/29 10:56:29 Skipping /version info gathering for pod "virt-api-f8d6cd97-6h4kx". Endpoint not found...
2019/04/29 10:56:29 Gathering data for pod "virt-controller-865b95f6c6-gtplf"
2019/04/29 10:56:30 Unable to gather previous container logs: previous terminated container "virt-controller" in pod "virt-controller-865b95f6c6-gtplf" not found
2019/04/29 10:56:30 Gathering data for pod "virt-controller-865b95f6c6-vl6cc"
2019/04/29 10:56:30 Unable to gather previous container logs: previous terminated container "virt-controller" in pod "virt-controller-865b95f6c6-vl6cc" not found
2019/04/29 10:56:30 Gathering data for pod "virt-handler-5zd2z"
2019/04/29 10:56:30 Unable to gather previous container logs: previous terminated container "virt-handler" in pod "virt-handler-5zd2z" not found
2019/04/29 10:56:30 Gathering data for pod "virt-handler-9q9cp"
2019/04/29 10:56:31 Unable to gather previous container logs: previous terminated container "virt-handler" in pod "virt-handler-9q9cp" not found
2019/04/29 10:56:31 Gathering data for pod "virt-handler-m9q8r"
2019/04/29 10:56:31 Unable to gather previous container logs: previous terminated container "virt-handler" in pod "virt-handler-m9q8r" not found
2019/04/29 10:56:31 Gathering data for pod "virt-handler-t6kz2"
2019/04/29 10:56:31 Unable to gather previous container logs: previous terminated container "virt-handler" in pod "virt-handler-t6kz2" not found
2019/04/29 10:56:31 Gathering data for pod "virt-operator-76b568d986-6x9rc"
2019/04/29 10:56:32 Unable to gather previous container logs: previous terminated container "virt-operator" in pod "virt-operator-76b568d986-6x9rc" not found
Error: one or more errors ocurred while gathering pod-specific data for namespace: kubevirt
[one or more errors ocurred while gathering container data for pod virt-api-f8d6cd97-5l68b:
unable to gather container /healthz: unable to find any available /healthz paths hosted in pod "virt-api-f8d6cd97-5l68b", one or more errors ocurred while gathering container data for pod virt-api-f8d6cd97-6h4kx:
unable to gather container /healthz: unable to find any available /healthz paths hosted in pod "virt-api-f8d6cd97-6h4kx", one or more errors ocurred while gathering container data for pod virt-controller-865b95f6c6-gtplf:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-controller-865b95f6c6-vl6cc:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-handler-5zd2z:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-handler-9q9cp:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-handler-m9q8r:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-handler-t6kz2:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource], one or more errors ocurred while gathering container data for pod virt-operator-76b568d986-6x9rc:
[unable to gather container /healthz: the server could not find the requested resource, unable to gather container /version: the server could not find the requested resource]]
from must-gather.
Do you get the same thing with:
$ oc adm must-gather -- /usr/bin/openshift-must-gather inspect ns/kubevirt
No, same result:
oc adm must-gather -- /usr/bin/openshift-must-gather inspect ns/kubevirt
namespace/openshift-must-gather-68r65 created
clusterrolebinding.rbac.authorization.k8s.io/must-gather-vfc6l created
clusterrolebinding.rbac.authorization.k8s.io/must-gather-vfc6l deleted
namespace/openshift-must-gather-68r65 deleted
error: pod is not running: Failed
And just to show that the command works under normal circumstances:
oc adm must-gather -- /usr/bin/openshift-must-gather inspect ns/default
namespace/openshift-must-gather-8qwh4 created
clusterrolebinding.rbac.authorization.k8s.io/must-gather-8c9sp created
receiving incremental file list
created directory must-gather.local.1940580096616587914
./
namespaces/
namespaces/default/
namespaces/default/default.yaml
namespaces/default/apps.openshift.io/
namespaces/default/apps.openshift.io/deploymentconfigs.yaml
namespaces/default/apps/
namespaces/default/apps/daemonsets.yaml
namespaces/default/apps/deployments.yaml
namespaces/default/apps/replicasets.yaml
namespaces/default/apps/statefulsets.yaml
namespaces/default/autoscaling/
namespaces/default/autoscaling/horizontalpodautoscalers.yaml
namespaces/default/batch/
namespaces/default/batch/cronjobs.yaml
namespaces/default/batch/jobs.yaml
namespaces/default/build.openshift.io/
namespaces/default/build.openshift.io/buildconfigs.yaml
namespaces/default/build.openshift.io/builds.yaml
namespaces/default/core/
namespaces/default/core/configmaps.yaml
namespaces/default/core/events.yaml
namespaces/default/core/pods.yaml
namespaces/default/core/replicationcontrollers.yaml
namespaces/default/core/secrets.yaml
namespaces/default/core/services.yaml
namespaces/default/image.openshift.io/
namespaces/default/image.openshift.io/imagestreams.yaml
namespaces/default/route.openshift.io/
namespaces/default/route.openshift.io/routes.yaml
sent 436 bytes received 94,291 bytes 63,151.33 bytes/sec
total size is 92,489 speedup is 0.98
clusterrolebinding.rbac.authorization.k8s.io/must-gather-8c9sp deleted
namespace/openshift-must-gather-8qwh4 deleted
Unfortunately, the pod's logs don't tell you much. This is what I get:
Error from server (BadRequest): container "copy" in pod "must-gather-x7vgf" is waiting to start: PodInitializing
...
Error: namespaces "kubevirt" not found
...
Error from server (NotFound): pods "must-gather-x7vgf" not found
I got this by running:
$ oc adm must-gather -- /usr/bin/openshift-must-gather inspect ns/kubevirt
In a different terminal run the following:
$ MG_NAMESPACE="FILL_ME_IN_FROM_OUTPUT_ABOVE"; while true; do oc logs $(oc get pods -n $MG_NAMESPACE -o name) -n $MG_NAMESPACE -f; done
@sanchezl we may want to improve the logging around: https://github.com/openshift/origin/blob/master/pkg/oc/cli/admin/mustgather/mustgather.go#L239-L241 to output the pod's logs to the user (or at the very least save them to a file).
@djfjeff can you try setting the following at the top of your bash script: `set +e`
@sferich888 Same result with `set +e`; it seems this is the default in bash, so it did not change the error reporting.
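That `set +e` changes nothing can be checked locally: errexit is off by default in bash, so a failing command does not abort the script. A minimal sketch:

```shell
#!/bin/bash
# 'set +e' is the bash default: a failing command does not
# abort the script, which is why adding it changed nothing.
false                                  # fails with status 1
echo "still running, last status: $?"
```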
If the script fails, the pod fails.
- Don't use `set -e`.
- Add an `exit 0` at the end.
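The effect of both points can be sketched in a few lines of plain bash (the gather steps are stubbed with `false`; the function names are illustrative, not the real gather script):

```shell
#!/bin/bash
# Sketch of the failure mode and the fix (gather steps stubbed out).

with_set_e() (
  set -e
  false            # a gather step fails (e.g. namespace not found)
  echo "never reached"
)

with_exit_0() (
  false || true    # tolerate the failing step, keep collecting
  echo "partial archive still produced"
  exit 0           # force success so the pod is not marked Failed
)

with_set_e;  echo "set -e script status: $?"
with_exit_0; echo "exit 0 script status: $?"
```

Under `set -e` the first failing step takes the whole script (and thus the pod) down with a non-zero status; with the failure tolerated and a final `exit 0`, the pod succeeds and whatever was gathered gets copied out.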
@sanchezl Adding the `exit 0` fixed the issue with the script.
However, it would be great if `oc adm must-gather` still reported back what it was able to gather instead of failing. For example, `oc adm must-gather -- /usr/bin/openshift-must-gather inspect ns/kubevirt` still fails and reports nothing instead of sending back what it was able to gather.
@djfjeff @sanchezl and I discussed this today, and I think we want to make it so that all commands run from the 'gather' container return 0. This keeps commands like the one you describe from failing to produce some type of archive.
For now, adding `exit 0` as I am doing in #88 is the solution we are taking to avoid issues.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting `/reopen`.
Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
Exclude this issue from closing again by commenting `/lifecycle frozen`.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting `/reopen`.
Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
Exclude this issue from closing again by commenting `/lifecycle frozen`.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.