
specs's People

Contributors

abjyoti, amr-mokhtar, cjnolan, damiankopyto, groclawski, i-karina, i-kwilk, ipatrykx, jakubrym, jkossak, kamilpoleszczuk, konradja, krishnajs, lukaszxlesiecki, mariuszszczepanik, mateusz-szelest, mcping, mmx111, mouliburla, nikbas-0, niket-intc, patrykdiak, patrykxmatuszak, sebix112, soniabha-intc, stephenjameson, sudhir-intc, sunil-parida, tomaszwesolowski, tongtongxiaopeng1979


specs's Issues

Error while running OpenVINO application on OpenNESS Network Edge

I am following the link below while onboarding OpenVINO with Network Edge OpenNESS. I also have a GNOME desktop on the client machine and am sending the video stream to the OpenVINO container.

link: https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#onboarding-openvino-application

I also checked with tcpdump on the client sim machine using the "tcpdump -i any port 5001" command.

run-docker.sh shows the output below; no pop-up window opens.

[root@xxxx clientsim]# ./run-docker.sh
ffplay version 2.8.15 Copyright (c) 2003-2018 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-36)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --extra-ldflags='-Wl,-z,relro ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --enable-libfdk-aac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
No protocol specified
No protocol specified
Could not initialize SDL - No available video device
(Did you set the DISPLAY variable?)
./tx_video.sh
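
The "Could not initialize SDL - No available video device" message means ffplay inside the client container cannot open an X display. A minimal sketch of the usual remedy, assuming the client sim runs on a host with a local X server (the exact run-docker.sh contents may differ):

xhost +local:      # permit local containers to use the X server (permissive; tighten on shared hosts)
export DISPLAY=:0  # point the container's ffplay at the local X display
./run-docker.sh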

Additional interface is not adding to the pod

HI Team,

I am trying to attach a "macvlan" type interface to a pod.

Here is my NetworkAttachmentDefinition for the additional interface:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-conf
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "host-local",
"subnet": "192.168.1.0/24",
"rangeStart": "192.168.1.200",
"rangeEnd": "192.168.1.216",
"routes": [
{ "dst": "0.0.0.0/0" }
],
"gateway": "192.168.1.1"
}
}'

CNI config file:

{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "name": "kube-ovn",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "kube-ovn", "server_socket": "/run/openvswitch/kube-ovn-daemon.sock" },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
  ]
}

Pod YAML:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: sgwc-deployment
  namespace: openness
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  selector:
    matchLabels:
      app: sgwc
  replicas: 1
  template:
    metadata:
      labels:
        app: sgwc
    spec:
      containers:
      - name: sgwc-container
        command: [ "/bin/bash" ]
        tty: true
        stdin: true
        image: vepc-sgw-c:1.0
        securityContext:
          privileged: true
        imagePullPolicy: Never
        ports:
        - containerPort: 40001
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8


Can you please let me know if I am missing anything here?
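
Note that Multus reads the k8s.v1.cni.cncf.io/networks annotation from the pod's own metadata, so for a Deployment it typically needs to sit under spec.template.metadata.annotations rather than on the Deployment metadata; a minimal fragment of the change (assuming the rest of the Deployment above stays as-is):

spec:
  template:
    metadata:
      labels:
        app: sgwc
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf   # moved here so each pod gets the extra interface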

Use newer non-rt kernel (3.10.0-1062) || tuned_packages || Not Available

Hi All,

I was trying to use the tuned_packages:

as part of the non-rt kernel configuration, but the link is not accessible. Please advise.

Retrying task - attempt 10 of 10
fatal: [node01]: FAILED! => {
"attempts": 10,
"changed": false
}

MSG:

Failure downloading http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm, HTTP Error 404: Not Found

NO MORE HOSTS LEFT *****************************************************************************************************************************************************

PLAY RECAP *************************************************************************************************************************************************************
controller : ok=67 changed=22 unreachable=0 failed=0 skipped=50 rescued=0 ignored=1
node01 : ok=71 changed=23 unreachable=0 failed=1 skipped=43 rescued=0 ignored=1

Regards,
Mohamed Sherif
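
A workaround that appears in the host_vars/controller.yml quoted in a later issue here is to override tuned_packages with a mirror that still serves a tuned 2.11 RPM; the URL below is taken from that file and should be verified before use, as mirrors move:

# host_vars/<node>.yml
tuned_packages:
- http://mirror.centos.org/altarch/7/updates/armhfp/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm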

How to get the YAML to deploy the interfaceservice pod of a Network Edge node

Hi All,

How should I get the original YAML used to deploy the interfaceservice pod of a Network Edge node? I actually want to deploy a new interfaceservice image on the edge node that behaves exactly like the original interface service that comes with the edge node deployment.

I tried the command below to get the YAML of the original interfaceservice pod; I do get the pod status, but I need the exact YAML to deploy the interface service.

kubectl get pod <pod-name> -n <namespace> -o yaml
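
The interfaceservice pods are created by an owning object in the openness namespace (a DaemonSet, judging by the interfaceservice-ws5k9 pod naming in other issues here), so exporting that object rather than a single pod yields YAML that can be redeployed; a sketch, with the object name assumed:

kubectl get daemonset interfaceservice -n openness -o yaml > interfaceservice.yml   # export the owner, not the pod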

Broken links in HTML documentation

The intra-document links are broken in the HTML version of the documentation. Example:

The markdown version works on Github, i.e.:

Additional problem:

How is the HTML documentation converted from markdown and uploaded to the website?

How to solve the disk pressure issue on the worker node

Hi, I have a local OpenNESS Network Edge cluster using Kubernetes as infrastructure management.
Disk utilization shows 84% ($ df -h), pods are getting evicted and entering CrashLoopBackOff state,
and images on the worker node are being deleted automatically.
I tried deleting the containers that are in Exited status ($ docker rm cont_id, listed with $ docker ps -a),
but to no avail; Docker's pruning method also didn't work.
Please suggest an appropriate method for solving the disk pressure issue.
[screenshots: disk usage and evicted pods]
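
Evictions and automatic image deletion are both kubelet behaviors: image garbage collection and disk-pressure eviction kick in at configurable thresholds. Besides freeing space, those thresholds can be inspected or tuned in the kubelet configuration; a sketch with illustrative values (the file path is the common default and an assumption for this setup; restart kubelet after editing):

# /var/lib/kubelet/config.yaml
evictionHard:
  nodefs.available: "5%"          # evict pods only when node fs free space drops below 5%
imageGCHighThresholdPercent: 90   # start deleting unused images at 90% disk usage
imageGCLowThresholdPercent: 80    # stop once usage falls back under 80%

For reclaiming space, docker system prune -a also removes every image not referenced by a container, which is more thorough than pruning exited containers alone.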

How to access OpenNESS APIs?

Hi All,

This is a question rather than an issue. I have successfully deployed OpenNESS in a lab environment; however, I cannot figure out how to access the REST APIs. I cannot find the endpoint or how to get authenticated.
I want to try out a few APIs from https://www.openness.org/developers#apidoc - the GET APIs for a start.

Thanks for any help.

My traffic-generating host does not reach the OpenVINO application in a Network Edge deployment

Hello, I have been studying and testing the OpenNESS Network Edge architecture for a couple of months, but I run into the following problems and doubts when trying to deploy the OpenVINO application following this documentation: https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md#onboarding-openvino-application

In my case, OpenNESS for Network Edge is fully installed and set up in a VPN with two nodes: 10.0.1.217 for the Controller and 10.0.1.120 for the Edge Node (node01). And the traffic generating host is at 10.20.0.201.

The specs say "From the Edge Controller, set up the interface service to connect the Edge Node's physical interface used for the communication between Edge Node and traffic generating host to OVS". Does this mean I cannot use node01's interface 10.0.1.120 and that I need a second physical interface on that node? In any case, I have a second physical interface on node01 to receive the streaming from the traffic-generating host, at 10.0.0.120. But after the "kubectl interfaceservice attach ..." step I no longer have access to the machine through that interface; is this normal?

Another question: the specs say "The subnet 192.168.1.0/24 is allocated by Ansible playbook to the physical interface which is attached to the first edge node", "node jfsdm001 is bridged to subnet 192.168.1.0/24", and "Ingress traffic originating from 192.168.1.0/24 can only reach the pods deployed on jfsdm001". So, as I said, my host is at IP 10.20.0.201; how can I add a new subnet or modify the 192.168.1.0/24 subnet so that my host's traffic is accepted?

Any answer or documentation that you can provide me to resolve these doubts will be welcome, thank you very much.

Issue while running OpenVINO application

I have followed and completed the setup for the OpenVINO application as described in the documentation. After running the final ./run-docker.sh script, I do not get the pop-up window with the video running, even after fixing the errors as described in the documentation. I am getting the following warning on the screen; could you please help @amr-mokhtar


./run-docker.sh
ffplay version 2.8.15 Copyright (c) 2003-2018 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-36)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --extra-ldflags='-Wl,-z,relro ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --enable-libfdk-aac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
ALSA lib confmisc.c:767:(parse_card) cannot find card '0'
ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1246:(snd_func_refer) error evaluating name
ALSA lib conf.c:4568:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:5047:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM default
[sdp @ 0x7f94b4000920] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
downstream.sdp: could not find codec parameters
nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
./tx_video.sh
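
The last two ffplay lines are the actionable hint: the SDP was opened but no codec parameters were detected before the probe window closed, which happens both when too little data arrives and when none does. A minimal experiment, assuming run-docker.sh ultimately invokes ffplay on downstream.sdp (adjust the script itself accordingly):

ffplay -analyzeduration 10M -probesize 10M downstream.sdp   # widen the probing window before ffplay gives up

If that still shows fd=0 with no bytes received, verify packets actually reach the client (e.g. tcpdump -i any port 5001, as in the first issue above).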

Onboarding OpenVINO application - no packets arrive at the openvino-cons-app side

Hi Teams,

I am new to OpenNESS. I already created a cluster with the "CERA Media Analytics Flavor" and completed the first sample-application onboarding successfully, but onboarding the OpenVINO application fails: no inference results pop up at the client simulator.

It seems that no packets arrive at the openvino-cons-app (hosted on the edge node) side; I used tcpdump to monitor eth0 inside the openvino-cons-app pod. The edge node itself, however, does receive packets from the client simulator.

Could anyone help me to resolve this issue? Thank you!

The client host hangs in the following state:
[root@localhost clientsim]# ./run-docker.sh
ffplay version 2.8.17-0ubuntu0.1 Copyright (c) 2003-2020 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
ALSA lib confmisc.c:768:(parse_card) cannot find card '0'
ALSA lib conf.c:4292:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory
ALSA lib confmisc.c:392:(snd_func_concat) error evaluating strings
ALSA lib conf.c:4292:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory
ALSA lib confmisc.c:1251:(snd_func_refer) error evaluating name
ALSA lib conf.c:4292:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory
ALSA lib conf.c:4771:(snd_config_expand) Evaluate error: No such file or directory
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM default
[sdp @ 0x7fc88c000920] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
downstream.sdp: could not find codec parameters
nan : 0.000 fd= 0 aq= 0KB vq= 0KB sq= 0B f=0/0
./tx_video.sh
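
Since the edge node sees the packets but the pod does not, the drop is between the node's attached interface and the pod's OVS port; capturing on both sides narrows it down. A sketch (the interface and pod names are placeholders for the ones in this report):

tcpdump -i <attached-interface>                              # on the edge node: does traffic arrive at the NIC attached to OVS?
kubectl exec -it <openvino-cons-app-pod> -- tcpdump -i eth0  # inside the pod: is anything forwarded through OVS?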

Kubectl command to delete pod permanently

Hi All,

  1. When I run kubectl delete pod <pod>, the pod gets restarted and is stuck in the state below:
openness interfaceservice-ws5k9 0/1 Init:0/1 0 30m

  2. When I run kubectl describe pod <pod>, the error below is reported:

Events:
Type Reason Age From Message


Normal Scheduled 21m default-scheduler Successfully assigned openness/interfaceservice-ws5k9 to worker-node
Warning FailedCreatePodSandBox 20m kubelet, worker-node Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2e6d08ffc273d1ce9a6b57ee391a64a4c221824bfe2015116643cd14a0c74115" network for pod "interfaceservice-ws5k9": networkPlugin cni failed to set up pod "interfaceservice-ws5k9_openness" network: request ip return 500 configure nic failed add nic to ovs failed exit status 1: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)
, failed to clean up sandbox container "2e6d08ffc273d1ce9a6b57ee391a64a4c221824bfe2015116643cd14a0c74115" network for pod "interfaceservice-ws5k9": networkPlugin cni failed to teardown pod "interfaceservice-ws5k9_openness" network: delete ip return 500 {
"protocol": "",
"address": "",
"mac_address": "",
"cidr": "",
"gateway": "",
"mtu": 0,
"error": "del nic failed failed to delete ovs port exit status 1, ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Connection refused)\n"
}]
Normal SandboxChanged 0s (x95 over 20m) kubelet, worker-node Pod sandbox changed, it will be killed and re-created.

Can you please suggest how to delete the pod permanently, and explain why the pod is not Running even though it restarts?
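
A pod that is recreated the moment it is deleted is owned by a controller, and for interfaceservice that owner appears to be a DaemonSet in the openness namespace (an assumption based on the interfaceservice-ws5k9 naming); deleting the owner removes the pod permanently. The Events above also show why the pod never reaches Running: the CNI cannot reach the OVS database (connection refused on /var/run/openvswitch/db.sock), so the ovs-ovn pod on the worker node has to be healthy first.

kubectl get pod interfaceservice-ws5k9 -n openness -o jsonpath='{.metadata.ownerReferences}'   # confirm the owning controller
kubectl delete daemonset interfaceservice -n openness                                          # delete the owner, not just the pod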

Issue while running OpenVINO application on Network Edge OpenNESS 20.06

Hi @amr-mokhtar

I have deployed the OpenVINO application on OpenNESS Network Edge; client-sim is running on a separate machine.

client-sim sends traffic to the OpenVINO consumer application, but the pop-up window does not come up (it appeared for one second and then was lost).

Please refer to the screenshot for reference.

Note that I have an X11 window terminal, but due to a blurred screen I am sending the container logs from the terminal.

[screenshot]

Consumer application logs:
[screenshot]

openness [20.12] wait for running ovs-ovn pod failed

OpenNESS: 20.12
Kubernetes: 1.19.3

2021-02-04 12:05:31,827 p=29223 u=root n=ansible | ...ignoring
2021-02-04 12:05:32,018 p=29223 u=root n=ansible | TASK [kubernetes/cni/kubeovn/node : wait for running ovs-ovn pod] ***************************************************************************************
2021-02-04 12:20:48,325 p=29223 u=root n=ansible | fatal: [node02 -> 192.168.1.7]: FAILED! => {
"attempts": 30,
"changed": false,
"cmd": "set -o pipefail && kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase --no-headers --field-selector spec.nodeName=node2 | grep -E "ovs-ovn"\n",
"delta": "0:00:00.192296",
"end": "2021-02-04 12:20:48.275860",
"rc": 0,
"start": "2021-02-04 12:20:48.083564"
}

STDOUT:

ovs-ovn-xwhbp Pending

[root@openness ~]# set -o pipefail && kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase --no-headers --field-selector spec.nodeName=node2
kube-ovn-cni-6z9bb Pending
kube-ovn-pinger-9f6bd Pending
kube-proxy-nncv9 Pending
ovs-ovn-xwhbp Pending
[root@openness ~]#

I have tried deploying many times, but I still haven't found the problem; I hope someone can help me.
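
All four kube-system pods on node2 are Pending, which points at the node itself (NotReady, an unexpected taint, or kubelet problems) rather than at ovs-ovn specifically, since Pending means the pod was never scheduled or started at all. The reason is printed in the Events section; a sketch:

kubectl describe pod ovs-ovn-xwhbp -n kube-system   # Events at the bottom give the failure reason
kubectl describe node node2                         # check node conditions and taints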

OpenVINO host and client setup

Please provide the steps for setting up the client simulator. I am facing issues every time I run the command 'kubectl interfaceservice attach <edge_node_host_name> 0000:86:00.0': as soon as the command runs, the node becomes unreachable. Basically, we are not clear about setting up the network interface; please provide a detailed overview.

Issue with proxy settings

Can somebody help me understand the purpose of the group_vars/all.yml file? I have set proxy_os_enable: false,
but what should the other proxy_ variables contain? Please help me understand this.

Announcement: Support for offline package for Native On-Premises deployment

This is an announcement from the OpenNESS team about the Offline installation feature in Native On-Premises deployment. The feature is used to install Native On-Premises on nodes without internet access: it allows users to create an installable package where internet access is available and then install it in offline mode on edge nodes that have none. This is supported in OpenNESS 20.03. In the upcoming release, support for the Offline installation feature in Native On-Premises deployment is planned to be discontinued; customers planning to adopt this feature are advised to use OpenNESS 20.03 for reference.
If you have any concerns about this announcement, please provide your comments. This thread will be open for comments until the 9th of May 2020.

Stateful VM deployment: CDI image upload fails when CMK is enabled

Hi,
I think the example script in the "CDI image upload fails when CMK is enabled" section has an invalid command.
For example: kubectl get pod cdi-upload-centos-dv -o yaml --export > cdiUploadCentosDv.yaml

May I ask for the correct command? I have to use the stateful deployment for my project, but I am stuck on the timeout problem.
Thank you very much.
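
For reference, the --export flag was deprecated in kubectl 1.14 and removed in 1.18, which is why that command fails on current clusters. Dropping the flag still produces usable YAML (cluster-populated fields such as status come along and can be stripped by hand):

kubectl get pod cdi-upload-centos-dv -o yaml > cdiUploadCentosDv.yaml   # same export, without the removed --export flag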
[screenshot, 2022-03-03]

BRs
Oneal Lee

Facing error while creating pod in sample-app

1. kubectl apply -f sample_policy.yml
2. kubectl create -f sample_producer.yml
In the second step, the pod is not created and the error is ErrImageNeverPull.
[root@controller sample-app]# kubectl create -f sample_producer.yml
deployment.apps/producer created
[root@controller sample-app]# kubectl get pods | grep producer
producer-685fcbc569-bgv8c 0/1 ErrImageNeverPull 0 10s
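
ErrImageNeverPull means the pod spec sets imagePullPolicy: Never while the image is absent from the local image cache of the node the pod landed on, so kubelet refuses to pull it. Checking for the image on that node is the first step (the image name is assumed from the producer deployment):

kubectl get pod producer-685fcbc569-bgv8c -o wide   # which node was the pod scheduled to?
docker images | grep producer                       # on that node: is the image present? build or docker load it if not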

Error while installing OpenNESS On-Premises

fatal: [xxxx-xx.xxxxxx.x86-01]: FAILED! => {
    "changed": false,
    "cmd": "/usr/bin/gmake build",
    "rc": 2,
    "msg": "mysql uses an image, skipping\nBuilding cups-ui\nBuilding ui\nBuilding cnca-ui\nBuilding cce\nService 'cce' failed to build: The command '/bin/sh -c apk add git' returned a non-zero code: 1\ngmake: *** [build] Error 1"
}

The cups, ui, and cnca images build and tag successfully (Successfully tagged cups:latest / ui:latest / cnca:latest); the build fails only in the cce image, at the apk step:

Step 1/15 : FROM golang:1.12-alpine
Step 2/15 : ENV GO111MODULE on
Step 3/15 : WORKDIR /go/src/github.com/open-ness/edgecontroller
Step 4/15 : RUN apk add git
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.11/main: temporary error (try again later)
WARNING: Ignoring APKINDEX.70f61090.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.11/community: temporary error (try again later)
WARNING: Ignoring APKINDEX.ca2fea5b.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
  git (missing):
    required by: world[git]
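
An apk "temporary error (try again later)" while the other images build fine usually means the cce build container cannot resolve or reach dl-cdn.alpinelinux.org; on proxied networks this is commonly fixed by passing the proxy to the Docker daemon so builds inherit it. A sketch with placeholder values:

# /etc/systemd/system/docker.service.d/http-proxy.conf -- substitute your proxy
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

systemctl daemon-reload && systemctl restart docker   # then retry the playbook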

GTP filter Policy

Hello,

Is there an example somewhere of how to forward packets sent from a UE behind the eNB to the LBP server?
I have experimented with the GTP filter field, but I cannot work out how it operates.

Thank you!

Issue "Enrolling Nodes with Controller" w/ OnPremises

I'm having trouble completing the On-Premises setup with the 20.03 release. In more detail, I'm unable to Enroll Nodes with the Controller because I cannot reach http://<LANDING_UI_URL>.

Assuming that the .env file where I should find the URL is the one in the controller installation directory (it's the only .env file I was able to find), the URL is http://localhost:3000, but I cannot reach it: neither from the browser of the Ansible controller (replacing localhost with the machine's hostname) nor from the node itself.

I guess I should be looking at edgecontroller-ui, but it's not deployed. Ideas?

$ pwd
/opt/edgecontroller

$ grep .env -e "LANDING"
# LANDING UI URL
LANDING_UI_URL=http://localhost:3000

$ wget http://localhost:3000
--2020-04-21 16:05:41--  http://localhost:3000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:3000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:3000... failed: Connection refused.

And, as you can see, there is nothing listening on port 3000!

$ sudo netstat -tulpn | grep LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1444/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1912/master         
tcp        0      0 127.0.0.1:36483         0.0.0.0:*               LISTEN      12816/kubelet       
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      12816/kubelet       
tcp        0      0 127.0.0.1:10665         0.0.0.0:*               LISTEN      23140/./kube-ovn-da 
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      13496/kube-proxy    
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      12503/etcd          
tcp        0      0 10.110.0.139:2379       0.0.0.0:*               LISTEN      12503/etcd          
tcp        0      0 10.110.0.139:2380       0.0.0.0:*               LISTEN      12503/etcd          
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      12503/etcd          
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      12482/kube-controll 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      12495/kube-schedule 
tcp6       0      0 :::22                   :::*                    LISTEN      1444/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1912/master         
tcp6       0      0 :::10660                :::*                    LISTEN      19868/./kube-ovn-co 
tcp6       0      0 :::10250                :::*                    LISTEN      12816/kubelet       
tcp6       0      0 :::10251                :::*                    LISTEN      12495/kube-schedule 
tcp6       0      0 :::6443                 :::*                    LISTEN      12481/kube-apiserve 
tcp6       0      0 :::10252                :::*                    LISTEN      12482/kube-controll 
tcp6       0      0 :::10256                :::*                    LISTEN      13496/kube-proxy    
tcp6       0      0 :::6641                 :::*                    LISTEN      18232/ovsdb-server  
tcp6       0      0 :::6642                 :::*                    LISTEN      18241/ovsdb-server  

The host_vars/controller.yml file I'm using follows:

---

# This file lists variables that user might want to customize per-host.
# Default values are stored in the specific role's `defaults/main.yml` file.
# To override variable for specific node, put the variable in the `hosts_vars/inventory_name_of_node.yml` file.

# -- machine_setup
# --- machine_setup/custom_kernel
kernel_skip: false  # use this variable to disable custom kernel installation for host

# Use newer non-rt kernel (3.10.0-1062)
kernel_repo_url: ""                           # package is in default repository, no need to add new repository
kernel_package: kernel                        # instead of kernel-rt-kvm
kernel_devel_package: kernel-devel            # instead of kernel-rt-devel
kernel_version: 3.10.0-1062.18.1.el7.x86_64

kernel_dependencies_urls: []
kernel_dependencies_packages: []


# --- machine_setup/grub
hugepage_size: "2M" # Or 1G
hugepage_amount: "5000"

default_grub_params: "hugepagesz={{ hugepage_size }} hugepages={{ hugepage_amount }} intel_iommu=on iommu=pt"
additional_grub_params: ""


# --- machine_setup/configure_tuned
tuned_skip: true #false   # use this variable to skip tuned profile configuration for host
# Use newer non-rt kernel (3.10.0-1062)
# Since, we're not using rt kernel, we don't need a tuned-profiles-realtime but want to keep the tuned 2.11
tuned_packages:
- http://mirror.centos.org/altarch/7/updates/armhfp/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
#- http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
tuned_profile: balanced
tuned_vars: ""



# -- dpdk
# provide a package / URL to kernel-devel package
# Use newer non-rt kernel (3.10.0-1062)
dpdk_kernel_devel: ""  # kernel-devel is in the repository, no need for url with RPM


# -- sriov/worker
sriov:
  network_interfaces: {}


# -- ovs
ovs_ports: []
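
In the 20.03 On-Premises flow the controller UIs are docker-compose services run from the installation directory where the .env lives, so checking whether those containers were actually brought up is the quickest test; a sketch:

cd /opt/edgecontroller
docker-compose ps      # are the ui / landing containers Up, and on which ports?
docker-compose up -d   # bring them up if the deployment was interrupted before this step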

dial: timeout exception

Hi Team,

I have installed and successfully tested OpenVINO application onboarding with OpenNESS 20.06.

I deleted the OpenVINO consumer and producer pods after testing; now that I have created both pods again, an error occurs.

Please refer to the screenshot below. Can you please help me determine what the issue is?

I am also unable to install kubernetes-dashboard on it; the error is: no route to host.

[screenshot]

Unable to connect to Kubernetes Dashboard

Hi. I'm following the guidelines in :
https://github.com/open-ness/specs/blob/master/doc/getting-started/network-edge/controller-edge-node-setup.md#installing-kubernetes-dashboard

When I follow the steps, the UI is inaccessible via https (it appears to only be listening on localhost).

  1. Deploy dashboard
  2. Grep pods/namespaces (not sure why this is necessary?)
  3. Create service account
  4. Apply admin user
  5. Change type to NodePort
  6. Note port on which dashboard is exposed
[root@mec-ctl ~]# kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.96.240.232   <none>        443:30417/TCP   19m
  7. Connect via https
    It appears to only be listening on localhost, meaning that I cannot connect to the UI.

I believe it should be listening on either the interface to the default gateway for the controller host (in my case 10.175.169.90), or maybe even the configured IP from inventory.ini (in my case 10.175.156.90).
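
Once the service type is NodePort, the dashboard should answer on any node IP at the allocated port (30417 here), not only on localhost; re-applying the type change and probing from outside verifies both (the IP is the one quoted above):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'   # idempotent re-apply of step 5
curl -k https://10.175.169.90:30417/   # -k because the dashboard serves a self-signed certificate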

Issue running playbook to install controller and nodes

While trying to run the deploy scripts (deploy_ne.sh), I am getting Python version-compatibility errors. This is on a clean install of CentOS on KVM, with full connectivity between the controller node and the edge node (tested with the ansible -m ping all command).

I am getting the following error:
STDOUT:

File: ‘/opt/edgecontroller’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd00h/64768d Inode: 50761770 Links: 27
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Context: unconfined_u:object_r:usr_t:s0
Access: 2020-08-24 09:17:56.527509889 -0700
Modify: 2020-08-24 09:17:57.863589833 -0700
Change: 2020-08-24 09:17:57.863589833 -0700
Birth: -

TASK [openness/onprem/master : remove all Docker containers] ******************************************************************************************************************************************************************************
task path: /root/openness/openness-experience-kits/roles/openness/onprem/master/tasks/subtasks/remove_docker_containers.yml:19
fatal: [controller]: FAILED! => {
"changed": true,
"cmd": [
"docker-compose",
"rm",
"--stop",
"-f"
],
"delta": "0:00:00.192591",
"end": "2020-08-24 09:18:23.738305",
"rc": 1,
"start": "2020-08-24 09:18:23.545714"
}

STDERR:

/usr/lib64/python2.7/site-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 7, in <module>
from compose.cli.main import main
File "/usr/lib/python2.7/site-packages/compose/cli/main.py", line 17, in <module>
import docker
File "/usr/lib/python2.7/site-packages/docker/__init__.py", line 2, in <module>
from .api import APIClient
File "/usr/lib/python2.7/site-packages/docker/api/__init__.py", line 2, in <module>
from .client import APIClient
File "/usr/lib/python2.7/site-packages/docker/api/client.py", line 10, in <module>
from .build import BuildApiMixin
File "/usr/lib/python2.7/site-packages/docker/api/build.py", line 6, in <module>
from .. import auth
File "/usr/lib/python2.7/site-packages/docker/auth.py", line 9, in <module>
from .utils import config
File "/usr/lib/python2.7/site-packages/docker/utils/__init__.py", line 3, in <module>
from .decorators import check_resource, minimum_version, update_headers
File "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 4, in <module>
from . import utils
File "/usr/lib/python2.7/site-packages/docker/utils/utils.py", line 13, in <module>
from .. import tls
File "/usr/lib/python2.7/site-packages/docker/tls.py", line 5, in <module>
from .transport import SSLHTTPAdapter
File "/usr/lib/python2.7/site-packages/docker/transport/__init__.py", line 11, in <module>
from .sshconn import SSHHTTPAdapter
File "/usr/lib/python2.7/site-packages/docker/transport/sshconn.py", line 1, in <module>
import paramiko
File "/usr/lib/python2.7/site-packages/paramiko/__init__.py", line 22, in <module>
from paramiko.transport import SecurityOptions, Transport
File "/usr/lib/python2.7/site-packages/paramiko/transport.py", line 89, in <module>
from paramiko.dsskey import DSSKey
File "/usr/lib/python2.7/site-packages/paramiko/dsskey.py", line 25, in <module>
from cryptography.hazmat.primitives import hashes, serialization
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/serialization/__init__.py", line 22, in <module>
from cryptography.hazmat.primitives.serialization.ssh import (
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/serialization/ssh.py", line 27, in <module>
from bcrypt import kdf as _bcrypt_kdf
File "/usr/lib64/python2.7/site-packages/bcrypt/__init__.py", line 57
def gensalt(rounds: int = 12, prefix: bytes = b"2b") -> bytes:
^
SyntaxError: invalid syntax

MSG:

non-zero return code

PLAY RECAP ********************************************************************************************************************************************************************************************************************************
controller : ok=67 changed=23 unreachable=0 failed=1 skipped=37 rescued=0 ignored=0

------------------[ My version of CentOS ]--------------------------
controller root ~/openness/openness-experience-kits:> hostnamectl
Static hostname: controller
Icon name: computer-vm
Chassis: vm
Machine ID: ........2860dd8b4c6d97a0a5ef4...........
Boot ID: .......4bf219a408699bf83b27.............
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.el7.x86_64
Architecture: x86-64

------------------[ My inventory file ]--------------------------

# SPDX-License-Identifier: Apache-2.0

# Copyright (c) 2019-2020 Intel Corporation

[all]
controller ansible_ssh_user=root ansible_host=192.168.122.113
edge ansible_ssh_user=root ansible_host=192.168.122.22

[controller_group]
controller

[edgenode_group]
edge

[edgenode_vca_group]

[ptp_master]
controller

[ptp_slave_group]
edge

------------------[ Additional effects ]--------------------------

Additionally, this renders my Ansible installation unusable, even if I remove and reinstall it (yum remove / install ansible). I have tried updating Python to later versions, but then I get other syntax errors.

controller root ~:> ansible
/usr/lib64/python2.7/site-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
ERROR! Unexpected Exception, this is probably a bug: invalid syntax (__init__.py, line 57)
the full traceback was:

Traceback (most recent call last):
File "/bin/ansible", line 92, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
File "/usr/lib/python2.7/site-packages/ansible/cli/__init__.py", line 22, in <module>
from ansible.inventory.manager import InventoryManager
File "/usr/lib/python2.7/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/usr/lib/python2.7/site-packages/ansible/plugins/loader.py", line 23, in <module>
from ansible.parsing.utils.yaml import from_yaml
File "/usr/lib/python2.7/site-packages/ansible/parsing/utils/yaml.py", line 17, in <module>
from ansible.parsing.yaml.loader import AnsibleLoader
File "/usr/lib/python2.7/site-packages/ansible/parsing/yaml/loader.py", line 30, in <module>
from ansible.parsing.yaml.constructor import AnsibleConstructor
File "/usr/lib/python2.7/site-packages/ansible/parsing/yaml/constructor.py", line 30, in <module>
from ansible.parsing.vault import VaultLib
File "/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py", line 52, in <module>
CRYPTOGRAPHY_BACKEND = default_backend()
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/__init__.py", line 15, in default_backend
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 16, in <module>
from cryptography import utils, x509
File "/usr/lib64/python2.7/site-packages/cryptography/x509/__init__.py", line 8, in <module>
from cryptography.x509.base import (
File "/usr/lib64/python2.7/site-packages/cryptography/x509/base.py", line 22, in <module>
from cryptography.x509.extensions import Extension, ExtensionType
File "/usr/lib64/python2.7/site-packages/cryptography/x509/extensions.py", line 22, in <module>
from cryptography.hazmat.primitives import constant_time, serialization
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/serialization/__init__.py", line 22, in <module>
from cryptography.hazmat.primitives.serialization.ssh import (
File "/usr/lib64/python2.7/site-packages/cryptography/hazmat/primitives/serialization/ssh.py", line 27, in <module>
from bcrypt import kdf as _bcrypt_kdf
File "/usr/lib64/python2.7/site-packages/bcrypt/__init__.py", line 57
def gensalt(rounds: int = 12, prefix: bytes = b"2b") -> bytes:
^
SyntaxError: invalid syntax
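
For what it's worth, the SyntaxError comes from the bcrypt package: releases from 3.2.0 onward use Python 3-only syntax (type annotations), which Python 2.7 cannot parse. A minimal workaround sketch, assuming pip manages these packages on this host (the exact version pin is an assumption to verify):

    # Sketch: pin bcrypt to the last series that still supports Python 2.
    # The pin below is an assumption - verify against your environment.
    pip uninstall -y bcrypt
    pip install 'bcrypt<3.2'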

Any help would be greatly appreciated.

Thanks in advance!

openness Controller deploy failed

Hi,

I am trying to deploy the OpenNESS controller, but the deployment fails at the step below. Can you please help me resolve this?

TASK [kubernetes/cni/kubeovn/master : wait for running ovs-ovn & ovn-central pods] *******************************************************************
task path: /home/sysadmin/Murali/openness/openness-experience-kits/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:149

./deploy_ne.sh controller last prints:
[screenshot: deploy_ne.sh output]

Pods status is:
[screenshot: pod status]
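
For reference, a few generic checks that usually reveal why the ovs-ovn/ovn-central pods are stuck (a sketch, assuming kubectl works on the controller; the pod name is a placeholder):

    kubectl get pods -n kube-system -o wide | grep -E 'ovn|ovs'
    kubectl describe pod -n kube-system <ovn-central-pod>   # check the Events section
    kubectl logs -n kube-system <ovn-central-pod>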

Thanks & Regards,
Devika

Issue with IP

  1. Initially, when I was deploying the controller (172.16.20.14) and edge node (172.16.20.76), the hosts were connected to the public network at my workplace.
  2. The controller and edge node were brought up successfully by running the ./deploy_ne.sh script.
  3. We then had to move to a VPN environment, where the hosts are connected to a local network with static IPs that differ from the ones used for the initial deployment.
  4. I entered the new IPs in the inventory.ini file (controller = 172.16.20.56, edge node = 172.16.20.54) and tried to run the deploy_ne.sh script again. Should I clean up and redeploy, since the IPs have changed? (See the sketch below the screenshots.)
  5. I have attached the errors; please help me resolve them.

[screenshot: new_error_1]

[screenshot: new_error_2]
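
If the cluster was deployed against the old addresses, deploying on top of it with new IPs generally does not work. A hedged sketch of the expected sequence, assuming the standard experience-kit scripts (the cleanup script name may differ by release):

    ./cleanup_ne.sh        # tear down the deployment made with the old IPs
    # update inventory.ini with the new controller/edge IPs, then:
    ./deploy_ne.sh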

error when running deploy_onprem.sh

I don't know what this error means. It appears while deploying the edge node. What is an online CPU? Maybe I did something wrong during preparation?

TASK [machine_setup/configure_tuned : apply tuned profile] ****************************
task path: /home/git_repos/openness-experience-kits/roles/machine_setup/configure_tuned/tasks/configure_tuned.yml:43
fatal: [node01]: FAILED! => {
"changed": true,
"cmd": [
"tuned-adm",
"profile",
"realtime"
],
"delta": "0:00:00.228128",
"end": "2020-06-09 11:31:37.750965",
"rc": 1,
"start": "2020-06-09 11:31:37.522837"
}

STDERR:

Cannot load profile(s) 'realtime': Assertion 'isolated_cores contains online CPU(s)' failed.

MSG:

non-zero return code
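
For reference, tuned's realtime profile asserts that every CPU listed in isolated_cores is actually online, so the error means the configured core list does not match the machine. A quick sketch for checking this on the node (the example core range is illustrative):

    cat /sys/devices/system/cpu/online    # e.g. prints 0-3 on a 4-CPU VM
    # isolated_cores in the tuned/experience-kit configuration must be a
    # subset of that range, e.g. isolated_cores=2-3 (example value).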

Issue while doing get/attach/detach interfaceservice list command

Hi all,

  1. I have deployed my new image of the interfaceservice pod, and it is running. I then removed the base interfaceservice pod that came along with the edge node deployment.

  2. I can see the boot-up logs on the new pod.

  3. But when I run the command below, I get the following error:
    kubectl interfaceservice get worker-node

[root@master-node Devika]# kubectl interfaceservice get worker-node
Error when dialing: 10.16.0.8:42101 err:context deadline exceeded

Can you please help me to check the issue in step 3?
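
For reference, a few hedged checks for the "context deadline exceeded" dial error (assuming the default openness namespace; the pod name is a placeholder): confirm the new pod exposes the same IP/port the kubectl plugin dials, and inspect its logs:

    kubectl get pods -n openness -o wide | grep interfaceservice
    kubectl logs -n openness <interfaceservice-pod>
    # Check that the dialed endpoint (here 10.16.0.8:42101) is reachable:
    nc -zv 10.16.0.8 42101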

Failed to initialize LBP port

Hello all,

I have managed to build the on-premises setup on a set of servers and have also succeeded in setting up a connection on an LTE system, using the edge node as a bridge. However, when I try to add the LBP port, the port fails to enable and the nts container fails.

I managed to restart it and attach to it; the error it outputs for the LBP port is:
NES: [ERR] Missing: section PORT0, entry lbp-mac, in config file.
NES: [ERR] Failed to add dataplane rules.
NES: [ERR] Could not initialize nes ACL.
NES: [ERR] Could not initialize ctrl.

Am I missing something that I should add to the configuration? The error suggests that the MAC of the LBP port is missing. Is it missing from the portal, or should I add it somewhere else as well?

Note that the UE/eNB port and the EPC port are pinging just fine, and the S1 establishment completes properly.
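
Going only by the error text, NES expects an lbp-mac entry in the PORT0 section of the NTS config file. A hedged sketch of what that fragment might look like (the surrounding keys and the MAC value are illustrative, not taken from the OpenNESS documentation):

    [PORT0]
    # ... existing PORT0 entries for this port ...
    lbp-mac = aa:bb:cc:dd:ee:f0    # MAC address of the LBP port (example value)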

Thank you

Issue with kubectl command

I have finished bringing up the controller and edge node, but when I try to use the kubectl command, I get the error below.

kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: proxyconnect tcp: dial tcp: lookup proxy.example.org on 172.16.2.31:53: no such host
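
The failed lookup of proxy.example.org suggests kubectl is still pointed at a placeholder HTTP(S) proxy. A sketch for confirming and working around it (the API-server address is a placeholder):

    env | grep -i proxy    # look for proxy.example.org in http_proxy/https_proxy
    # Either correct the proxy values or exempt the cluster endpoints, e.g.:
    export NO_PROXY="$NO_PROXY,127.0.0.1,localhost,<api-server-ip>"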

openness deploy failed for "RPC failed"

While running the deploy script "sudo ./deploy_ne.sh single", I got this error:

TASK [telemetry/tas : build TAS] *********************************************************************************************************************************************************************************
task path: /home/pega/OpenNESS/openness-experience-kits/roles/telemetry/tas/tasks/main.yml:154
fatal: [controller]: FAILED! => {
    "changed": true, 
    "cmd": "source /etc/profile && make build", 
    "delta": "0:00:06.614269", 
    "end": "2020-09-22 16:35:13.560206", 
    "rc": 2, 
    "start": "2020-09-22 16:35:06.945937"
}

STDOUT:

CGO_ENABLED=0 GO111MODULE=on go build -ldflags="-s -w" -o ./bin/controller ./cmd/tas-policy-controller


STDERR:

go: finding modernc.org/cc v1.0.0
go: finding modernc.org/golex v1.0.0
go: finding modernc.org/xc v1.0.0
go: finding modernc.org/mathutil v1.0.0
go: finding modernc.org/strutil v1.0.0
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/cc refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/3dac616a9d80602010c4792ef9c0e9d9812a1be8e70453e437e9792978075db6: exit status 128:
        error: RPC failed; result=22, HTTP code = 404
        fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/xc refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/29fc2f846f24ce3630fdd4abfc664927c4ad22f98a3589050facafa0991faada: exit status 128:
        error: RPC failed; result=22, HTTP code = 404
        fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/strutil refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748: exit status 128:
        error: RPC failed; result=22, HTTP code = 404
        fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/mathutil refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/fb72eb2422fda47ac75ca695d44b06b82f3df3c5308e271486fca5e320879130: exit status 128:
        error: RPC failed; result=22, HTTP code = 404
        fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/golex refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/9aae2d4c6ee72eb1c6b65f7a51a0482327c927783dea53d4058803094c9d8039: exit status 128:
        error: RPC failed; result=22, HTTP code = 404
        fatal: The remote end hung up unexpectedly
go: error loading module requirements
make: *** [build] Error 1


MSG:

non-zero return code
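
The HTTP 404s come from direct git fetches of the modernc.org modules on gitlab.com. A hedged workaround, assuming the Go toolchain on the controller honours the standard module-proxy settings, is to route downloads through a module proxy and re-run the deployment:

    export GO111MODULE=on
    export GOPROXY=https://proxy.golang.org,direct    # standard Go settings
    # then retry: sudo ./deploy_ne.sh single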

and k8s status is:

kube-system   coredns-66bff467f8-hzktk                    1/1     Running             0          82m
kube-system   coredns-66bff467f8-wslws                    1/1     Running             0          82m
kube-system   etcd-pega81                                 1/1     Running             0          82m
kube-system   kube-apiserver-pega81                       1/1     Running             0          82m
kube-system   kube-controller-manager-pega81              1/1     Running             0          82m
kube-system   kube-ovn-cni-j65bd                          1/1     Running             0          73m
kube-system   kube-ovn-controller-96f89c68b-72npt         0/1     Pending             0          73m
kube-system   kube-ovn-controller-96f89c68b-pjghr         1/1     Running             0          73m
kube-system   kube-proxy-gr2g6                            1/1     Running             0          82m
kube-system   kube-scheduler-pega81                       1/1     Running             0          70m
kube-system   ovn-central-74986486f9-d4sw9                1/1     Running             0          73m
kube-system   ovs-ovn-r478j                               1/1     Running             0          73m
openness      docker-registry-deployment-54d5bb5c-w9wvv   1/1     Running             0          82m
openness      eaa-6f8b94c9d7-gxdff                        0/1     ErrImageNeverPull   0          71m
openness      edgedns-md69k                               0/1     ErrImageNeverPull   0          71m
openness      interfaceservice-q7hcd                      0/1     ErrImageNeverPull   0          71m
openness      syslog-master-m2kch                         1/1     Running             0          71m
openness      syslog-ng-bgkkr                             1/1     Running             0          71m
telemetry     collectd-jzm6w                              2/2     Running             0          70m
telemetry     custom-metrics-apiserver-54699b845f-kgtmh   1/1     Running             0          70m
telemetry     otel-collector-7d5b75bbdf-q9gtp             2/2     Running             0          70m
telemetry     prometheus-node-exporter-gnq62              1/1     Running             0          70m
telemetry     prometheus-server-76c96b9497-jgqqh          3/3     Running             0          70m
telemetry     telemetry-collector-certs-77kpj             0/1     Completed           0          70m
telemetry     telemetry-node-certs-btb92                  1/1     Running             0          70m

Checking the eaa container with "kubectl describe -n openness pods eaa-6f8b94c9d7-gxdff" gives this:

Events:
  Type     Reason             Age                    From             Message
  ----     ------             ----                   ----             -------
  Warning  Failed             27m (x213 over 72m)    kubelet, pega81  Error: ErrImageNeverPull
  Warning  ErrImageNeverPull  2m25s (x328 over 72m)  kubelet, pega81  Container image "eaa:1.0" is not present with pull policy of Never
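
ErrImageNeverPull with a pull policy of Never means kubelet expects the image to already exist locally; here eaa:1.0 was never built because the TAS build step failed earlier. A quick check on the node (a sketch):

    docker images | grep eaa    # should list eaa:1.0 once the images are built
    # If it is missing, fix the build failure above and re-run the deployment.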

Any help would be very appreciated.
Thanks.
Regards,
Wayne
