Following the instructions in deployment-concepts/daemonsets.adoc, Logstash fails with an OOM. If I increase the memory limit, it still fails, but for a different, unknown reason; I haven't had a chance to debug that yet.
kubectl get pods -o wide
NAME                       READY  STATUS            RESTARTS  AGE  IP           NODE
logstash-daemonset-r2775   0/1    CrashLoopBackOff  2         39s  100.96.1.16  ip-172-20-67-137.ec2.internal
logstash-daemonset-xsclg   0/1    CrashLoopBackOff  2         39s  100.96.2.17  ip-172-20-42-76.ec2.internal
kubectl describe po
Name:           logstash-daemonset-xsclg
Namespace:      default
Node:           ip-172-20-42-76.ec2.internal/172.20.42.76
Start Time:     Wed, 18 Oct 2017 21:06:53 +0800
Labels:         app=logstash
                controller-revision-hash=3313225235
                pod-template-generation=1
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"DaemonSet","namespace":"default","name":"logstash-daemonset","uid":"360cd102-b405-11e7-a395-12ff179638ce",...
Status:         Running
IP:             100.96.2.17
Created By:     DaemonSet/logstash-daemonset
Controlled By:  DaemonSet/logstash-daemonset
Containers:
  logstash:
    Container ID:   docker://d4f6ec1b2d560f053605f85df1b2671d10eb9d81772fb030f3bc50b722a3c959
    Image:          logstash:5.5.2
    Image ID:       docker-pullable://logstash@sha256:6d5236d5a2371af15d19300f80be7e742e4fa15a19335c6a1372e685e803bc70
    Port:           <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Wed, 18 Oct 2017 21:07:43 +0800
      Finished:     Wed, 18 Oct 2017 21:07:44 +0800
    Ready:          False
    Restart Count:  3
    Limits:
      memory:  50Mi
    Requests:
      cpu:     50m
      memory:  50Mi
    Environment:    <none>
    Mounts:
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bsp6x (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  varlog:
    Type:  HostPath (bare host directory volume)
    Path:  /var/log
  varlibdockercontainers:
    Type:  HostPath (bare host directory volume)
    Path:  /var/lib/docker/containers
  default-token-bsp6x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bsp6x
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute
                 node.alpha.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason                 Age               From                                   Message
  ----     ------                 ----              ----                                   -------
  Normal   SuccessfulMountVolume  1m                kubelet, ip-172-20-42-76.ec2.internal  MountVolume.SetUp succeeded for volume "varlog"
  Normal   SuccessfulMountVolume  1m                kubelet, ip-172-20-42-76.ec2.internal  MountVolume.SetUp succeeded for volume "varlibdockercontainers"
  Normal   SuccessfulMountVolume  1m                kubelet, ip-172-20-42-76.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-bsp6x"
  Normal   Pulled                 24s (x4 over 1m)  kubelet, ip-172-20-42-76.ec2.internal  Container image "logstash:5.5.2" already present on machine
  Normal   Created                24s (x4 over 1m)  kubelet, ip-172-20-42-76.ec2.internal  Created container
  Normal   Started                24s (x4 over 1m)  kubelet, ip-172-20-42-76.ec2.internal  Started container
  Warning  BackOff                9s (x6 over 1m)   kubelet, ip-172-20-42-76.ec2.internal  Back-off restarting failed container
  Warning  FailedSync             9s (x6 over 1m)   kubelet, ip-172-20-42-76.ec2.internal  Error syncing pod
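For reference, the OOMKilled / exit code 137 above is consistent with the 50Mi limit: Logstash 5.x starts a JVM whose default maximum heap (from jvm.options) is 1g, so the container exceeds the cgroup limit almost immediately after start. A minimal sketch of a container spec that shrinks the heap via LS_JAVA_OPTS and raises the limit to leave off-heap headroom; the specific values are assumptions for illustration, not taken from my actual manifest:

```yaml
# Hypothetical container section of the DaemonSet pod template.
# Values (256m heap, 512Mi limit) are illustrative assumptions.
containers:
- name: logstash
  image: logstash:5.5.2
  env:
  - name: LS_JAVA_OPTS            # appended to jvm.options settings
    value: "-Xms256m -Xmx256m"    # keep the heap well below the limit
  resources:
    requests:
      cpu: 50m
      memory: 512Mi
    limits:
      memory: 512Mi               # heap + off-heap/metaspace headroom
```

If the pod still crashes after this, `kubectl logs <pod-name> --previous` prints the output of the terminated container, which should narrow down the non-OOM failure.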