
poky-container's Introduction

What is CROPS (CROssPlatformS)?

CROPS is an open source, cross-platform development framework that leverages Docker containers to provide an easily managed, extensible environment which allows developers to build binaries for a variety of architectures on Windows, Linux and Mac OS X hosts.

CROPS components

  • CEED - runs on the development host and exposes an API to Integrated Development Environments (IDEs) or CLI callers
  • TURFF - runs in a container and services requests from CODI
  • CODI - COntainer DIspatcher runs in a container and keeps track of all available TURFF instances in its internal table. CODI also redirects CEED requests to the corresponding TURFF instance

USING CROPS FOR ZEPHYR OS DEVELOPMENT

Please refer to the GitHub Wiki

USING CROPS WITH YOUR OWN TOOLCHAIN

  1. Install Docker (Linux) or Docker Toolbox (Windows/Mac)

  2. Build the CODI dependencies container

Execute the following command from within the dockerfiles directory:

docker build -t crops/codi:deps -f Dockerfile.codi.deps .

  3. Build the CODI container

Execute the following command from within the dockerfiles directory:

docker build -t crops/codi:version -f Dockerfile.codi ../

  4. Build the toolchain dependencies container

Execute the following command from within the dockerfiles directory:

docker build -t crops/toolchain:deps -f Dockerfile.toolchain.deps .

  5. Open the Dockerfile.toolchain file and provide the URL to your toolchain

Example:

ENV TOOLCHAIN_NAME poky-glibc-x86_64-core-image-sato-i586-toolchain-2.0.sh
ENV TOOLCHAIN_PATH http://downloads.yoctoproject.org/releases/yocto/yocto-2.0/toolchain/x86_64/

  6. Build your toolchain container

docker build -t crops/toolchain:my_toolchain -f Dockerfile.toolchain ../

  7. Start the CODI container

docker run -d --name codi-test -v /var/run/docker.sock:/var/run/docker.sock --net=host crops/codi:version

  8. Start the toolchain container

mkdir -p $HOME/crops-workspace && docker run -d --name crops-toolchain-my_toolchain -v $HOME/crops-workspace/:/crops/ --env TURFFID=crops-toolchain-my_toolchain --net=host crops/toolchain:my_toolchain

  9. Run the Zephyr installer and answer "Yes" when prompted to install CEED. Answer "No" to all other questions

curl -kOs https://raw.githubusercontent.com/crops/crops/master/installers/zephyr-installer.sh && source ./zephyr-installer.sh

  10. Place your project in the shared workspace

Example:

$HOME/crops-workspace/my_project/

  11. Build your project

Example:

$HOME/.crops/ceed/ceed -d crops-toolchain-my_toolchain -g "make -C /crops/my_project/"

  12. Share your toolchain with other developers by pushing it to Docker Hub

Example:

docker push crops/toolchain:my_toolchain

CONTRIBUTING TO CROPS

COMPILE CEED, TURFF AND CODI ON LINUX

Required Prerequisites

  • libsqlite3-dev - "SQLite is a C library that implements an SQL database engine."
  • libcurl4-openssl-dev (7.40 or later) - "libcurl is an easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP."
  • libjansson-dev - "Jansson is a C library for encoding, decoding and manipulating JSON data."
  1. Install prerequisites on Debian / Ubuntu

apt-get install libsqlite3-dev libcurl4-openssl-dev libjansson-dev

  2. Compile

  • GCC

CC=gcc make all

  • CLANG

CC=clang make all

  3. Debug compile

  • GCC

CC=gcc make debug

  • CLANG

CC=clang make debug

RUNNING A CLANG STATIC ANALYSIS

  1. Run the static analyzer

scan-build -V make

  2. Point your browser at the following URL to view the static analysis results

http://127.0.0.1:8181


poky-container's Issues

Missing 32bit glibc libraries

I need to build an image for the am57xx-evm machine (from meta-ti) using crops/poky:opensuse-42.2.
During the build process (bitbake core-image-minimal), the following messages are displayed:

WARNING: ti-cgt-pru-native-2.2.1-r0 do_unpack: TI installer requires 32bit glibc libraries for proper operation
run 'yum install glibc.i686' on Fedora or 'apt-get install libc6:i386' on Ubuntu/Debian
ERROR: ti-cgt-pru-native-2.2.1-r0 do_install: Function failed: do_install (log file is located at /workdir/poky/build/tmp/work/x86_64-linux/ti-cgt-pru-native/2.2.1-r0/temp/log.do_install.8461)
ERROR: Logfile of failure stored in: /workdir/poky/build/tmp/work/x86_64-linux/ti-cgt-pru-native/2.2.1-r0/temp/log.do_install.8461

The Yocto build runs fine if I create my own container (based on crops/poky:opensuse-42.2) with the following instruction in the Dockerfile:
RUN zypper install -y glibc-32bit

I only tried building the Rocko (2.4.2) release, with meta-ti at the ti2018.00 tag.
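The one-line fix above can be captured in a small derived image. A sketch (the base tag comes from the report; the usersetup user is how the crops images are typically structured, so treat that detail as an assumption):

```dockerfile
# Hypothetical derived image layering the 32-bit glibc fix
# on top of the reported base image.
FROM crops/poky:opensuse-42.2
USER root
RUN zypper install -y glibc-32bit
# hand control back to the setup user so the entrypoint can
# still create pokyuser at runtime
USER usersetup
```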

QEMU hangs in Docker after the login prompt

In summary, QEMU hangs in the CROPS/Poky Docker container after the login prompt.

The problem may be reproduced from an Ubuntu 18.04 host by following these steps:

  • In a terminal, type:

    docker pull crops/poky:ubuntu-18.04
    mkdir workdir
    cd workdir
    git clone git://git.yoctoproject.org/poky
    docker run --name yocto-dev --rm -it \
    	-v $PWD/workdir:/workdir \
    	crops/poky:ubuntu-18.04 --workdir=/workdir
    
  • In a different terminal, create a file named compile.sh:

    #!/bin/bash
    
    source ./poky/oe-init-build-env > /dev/null
    bitbake core-image-minimal
    
  • Create a file named run.sh:

    #!/bin/bash
    
    source ./poky/oe-init-build-env > /dev/null
    runqemu qemux86-64 core-image-minimal qemuparams="-m 256" slirp nographic
    
  • Finally, type:

    chmod 755 compile.sh
    chmod 755 run.sh
    docker exec -w /workdir -u pokyuser -i yocto-dev bash < compile.sh
    docker exec -w /workdir -u pokyuser -i yocto-dev bash < run.sh
    

Actual result:

  • QEMU boots the core-image-minimal.
  • It displays the login prompt.
  • After entering "root", QEMU hangs and never displays the prompt.
  • It is not possible to execute a command from the emulated OS.

Expected result:

  • QEMU boots the core-image-minimal.
  • It displays the login prompt.
  • After entering "root", QEMU displays the regular prompt.
  • It is possible to execute commands in the emulated OS.

On the other hand, compiling and running the core-image-minimal from the Ubuntu 18.04 host in QEMU works as expected.

Is there a workaround to run QEMU from a Docker container?

PS: I am new to the Yocto project and QEMU. It may not be a good idea to run QEMU on top of a Docker container. I am simply trying to see if I can run everything from a container.

m4 Configure fails with "C compiler cannot create executables"

Just trying the default (latest) container for the first time and bitbake fails with:

DEBUG: Executing shell function autotools_preconfigure
DEBUG: Shell function autotools_preconfigure finished
DEBUG: Executing python function autotools_aclocals
DEBUG: SITE files ['endian-little', 'common-linux', 'common-glibc', 'bit-64', 'x86_64-linux', 'common']
DEBUG: Python function autotools_aclocals finished
DEBUG: Executing shell function do_configure
NOTE: Running ../m4-1.4.18/configure --build=x86_64-linux --host=x86_64-linux --target=x86_64-linux --prefix=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr --exec_prefix=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr --bindir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/bin --sbindir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/sbin --libexecdir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/libexec --datadir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/share --sysconfdir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/etc --sharedstatedir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/com --localstatedir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/var --libdir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/lib --includedir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/include --oldincludedir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/include --infodir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/share/info --mandir=/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/recipe-sysroot-native/usr/share/man --disable-silent-rules --disable-dependency-tracking --without-libsigsegv-prefix --disable-static
configure: WARNING: unrecognized options: --disable-static
checking for a BSD-compatible install... /workdir/build/tmp/hosttools/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /workdir/build/tmp/hosttools/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for x86_64-linux-gcc... gcc
checking whether the C compiler works... no
configure: error: in `/workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/build':
configure: error: C compiler cannot create executables
See `config.log' for more details
NOTE: The following config.log files may provide further information.
NOTE: /workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/build/config.log
ERROR: configure failed
WARNING: exit code 1 from a shell command.
ERROR: Function failed: do_configure (log file is located at /workdir/build/tmp/work/x86_64-linux/m4-native/1.4.18-r0/temp/log.do_configure.475)

Am I missing something?

Does not work with gitlab-ci

I think the entrypoint is causing issues with GitLab CI.

Maybe I'm doing something wrong, but the following should work, imho.

image: crops/poky

stages:
  - build

build_job:
  stage: build
  script:
    - echo 'Hello, world!'

The error I get is

usage: poky-entry.py [-h] [--workdir WORKDIR] [--id ID] [--cmd CMD]
poky-entry.py: error: unrecognized arguments: sh -c if [ -x /usr/local/bin/bash ]; then
	exec /usr/local/bin/bash 
elif [ -x /usr/bin/bash ]; then
	exec /usr/bin/bash 
elif [ -x /bin/bash ]; then
	exec /bin/bash 
elif [ -x /usr/local/bin/sh ]; then
	exec /usr/local/bin/sh 
elif [ -x /usr/bin/sh ]; then
	exec /usr/bin/sh 
elif [ -x /bin/sh ]; then
	exec /bin/sh 
else
	echo shell not found
	exit 1
fi



I guess a workaround is just using "crops/yocto:ubuntu-14.04-base".
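Another workaround, assuming a GitLab CI version that supports the extended `image:` syntax, is to blank the entrypoint so the runner can inject its own shell:

```yaml
# .gitlab-ci.yml sketch: disable the image's entrypoint so the
# GitLab runner can execute the script commands directly.
image:
  name: crops/poky
  entrypoint: [""]

stages:
  - build

build_job:
  stage: build
  script:
    - echo 'Hello, world!'
```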

Poky build on macOS fails to patch mpfr

macOS 10.12.6
Docker for Mac version 17.06.0-ce, build 02c1d87
Poky branch krogoth

Following your instructions, the build stops at the mpfr package:

ERROR: mpfr-native-3.1.3-r0 do_patch: [Errno 20] Not a directory
ERROR: mpfr-native-3.1.3-r0 do_patch: Function failed: patch_do_patch
ERROR: Logfile of failure stored in: /workdir/build/tmp/work/x86_64-linux/mpfr-native/3.1.3-r0/temp/log.do_patch.113
ERROR: Task 930 (virtual:native:/workdir/meta/recipes-support/mpfr/mpfr_3.1.3.bb, do_patch) failed with exit code '1'
NOTE: Tasks Summary: Attempted 87 tasks of which 85 didn't need to be rerun and 1 failed.
Waiting for 0 running tasks to finish:

Can't run commands with arguments any more after removing `--cmd` option from entry point

Example: docker run -it --rm -v $PWD:/workdir --workdir /workdir crops/poky:ubuntu-16.04 ls /

I would expect this command to list files in the container's root, but it lists files in $PWD. This is because it's ignoring everything after the first command line argument; it's only running ls not ls /.

Ideally, I'd be able to run entire command lines like source foo && some_command && bash like before.

Let's Encrypt root SSL cert expired

The Let's Encrypt root certificate expired 4 days ago. Since then some of our builds have been failing to check out.
https://letsencrypt.org/docs/certificate-compatibility/

The ubuntu 16.04 image appears to work, but the 18.04 and 20.04 images likely need to be rebuilt. For non-docker builds, an apt update && apt upgrade was enough to fix the build.

 $ docker run -it --rm crops/poky:ubuntu-16.04  git clone https://git.linaro.org/toolchain/gcc.git/
Cloning into 'gcc'...
remote: Enumerating objects: 74482, done.
...
 $ docker run -it --rm crops/poky:ubuntu-18.04  git clone https://git.linaro.org/toolchain/gcc.git/
Cloning into 'gcc'...
fatal: unable to access 'https://git.linaro.org/toolchain/gcc.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

 $ docker run -it --rm crops/poky:ubuntu-20.04  git clone https://git.linaro.org/toolchain/gcc.git/
Cloning into 'gcc'...
fatal: unable to access 'https://git.linaro.org/toolchain/gcc.git/': server certificate verification failed. CAfile: none CRLfile: none
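Until rebuilt images are published, a possible stopgap (an untested sketch) is a derived image that refreshes its own CA bundle:

```dockerfile
# Hypothetical derived image: upgrading ca-certificates pulls in
# the bundle that drops the expired Let's Encrypt root.
FROM crops/poky:ubuntu-18.04
USER root
RUN apt-get update && apt-get install -y ca-certificates \
    && update-ca-certificates
USER usersetup
```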

How to use SSH with qemu running within docker

Sorry if this doesn't belong here. Now that I can run QEMU within the Docker container, I tried to map port 22 inside it to the host using -p 2222:22, and it didn't work as expected. I guess that maps to the Docker container itself and not to the QEMU instance running within it. So, how would one access the QEMU SSH from the host?
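One approach that may work, assuming runqemu's slirp mode keeps its usual default of forwarding host port 2222 to guest port 22 (the QB_SLIRP_OPT default): publish that forward port from the container instead of port 22. An untested sketch:

```shell
# publish the slirp forward port when starting the container
docker run --rm -it -p 2222:2222 -v $PWD:/workdir crops/poky --workdir=/workdir
# inside the container:
runqemu qemux86-64 slirp nographic
# then, from the host:
ssh -p 2222 root@localhost
```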

How to forward ssh-agent to fetch private repos?

To fetch a private repository, I would like to forward my ssh-agent.
I tried to, but it seems the container doesn't have sufficient permissions to do so in the poky container.

Would there be any other clean way to solve this?
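A common Docker pattern, offered here only as an untested sketch for this container, is to mount the agent socket and point SSH_AUTH_SOCK at it; whether it works here depends on the runtime-created pokyuser being able to read the socket:

```shell
# forward the host ssh-agent socket into the container
docker run --rm -it \
    -v $SSH_AUTH_SOCK:/ssh-agent \
    -e SSH_AUTH_SOCK=/ssh-agent \
    -v $PWD:/workdir crops/poky --workdir=/workdir
```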

apt-get install

I need to install some packages within the crops/poky container (see Chapter 3.1, Host packages, in the linked document). Since I have no password, how do I do it?

(base) PS C:\> docker run --rm -it -v myvolume:/workdir crops/poky --workdir=/workdir
pokyuser@06932ce45056:/workdir$ apt-get install curl
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
pokyuser@06932ce45056:/workdir$
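Since pokyuser has no password, packages generally have to be baked into a derived image rather than installed at runtime. A sketch (the usersetup user name is an assumption about how the image is structured):

```dockerfile
# Hypothetical derived image with curl preinstalled.
FROM crops/poky
USER root
RUN apt-get update && apt-get install -y curl
# return to the setup user so the entrypoint still works
USER usersetup
```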

Can't extend crops/poky image with new dockerfile commands

Dockerfile looks like

FROM crops/poky
USER root

RUN apt-get install npm -y

USER usersetup

I build the docker image with

docker build -t crops/poky-canoga .

and then try to run the image with

brett@cp-ubuntu2:~/yocto/crops-canoga$ docker run --rm -it -v `pwd`/work:/workdir crops/poky-canoga --workdir=/workdir
sudo: unknown user: pokyuser
sudo: unable to initialize policy plugin

This Dockerfile is straight out of issue #19.

How do I switch to `pokyuser` or set to correct user when running the container?

Hi

I'm following your standard guide for setting up Yocto in the crops/yocto docker image. Instead of using workdir as a folder name I'm using yocto.

However, when I try to clone Poky into this folder, I get these error messages:

sh-4.4$ git clone git://git.yoctoproject.org/poky
fatal: could not create work tree dir 'poky': Permission denied
sh-4.4$ ls -halt
total 12K
drwxr-xr-x. 1 root     root     4.0K Jun 16 11:34 ..
drwxrwxr-x. 2 pokyuser pokyuser 4.0K Jun 16 11:19 .
sh-4.4$ pwd
/yocto
sh-4.4$ whoami
usersetup
sh-4.4$ sudo su pokyuser
[sudo] password for usersetup: 
Sorry, try again.
[sudo] password for usersetup: 
^C
sudo: 1 incorrect password attempt

How do I switch to pokyuser or set to correct user when running the container?

/Henrik

Add a more helpful error message if the workdir is owned by uid/gid 0

Currently, if the uid or gid of the workdir is 0, the user gets a relatively cryptic message:

Refusing to use a gid of 0
Traceback (most recent call last):
  File "/usr/bin/usersetup.py", line 62, in <module>
    subprocess.check_call(cmd.split(), stdout=sys.stdout, stderr=sys.stderr)
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', 'restrict_groupadd.sh', '0', 'pokyuser']' returned non-zero exit status 1

This should be changed to something more user friendly explaining what may have happened.

In the most common case, the directory passed as --workdir and bind mounted wasn't created before docker started. For example, if the below command was used:

docker run -it --rm -v /foo:/workdir crops/poky --workdir=/workdir

If /foo didn't exist before running docker, then docker will create /foo and give it a uid:gid of 0. While this behavior seems like the wrong action to take, docker chose to preserve it due to reasons of "not breaking workflows".

At that point, all the user has to do is remove the directory docker created and instead create it manually with the appropriate uid:gid.

Since the "--mount" option to docker will instead error if the directory doesn't exist, the documentation could be changed to point out the "--mount" option as well. It, however, can't be used on older versions of docker.
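The manual fix can be sketched as follows (a temp directory stands in for /foo so the snippet is self-contained, and the docker command is left as a comment):

```shell
#!/bin/sh
# Create the bind-mount source yourself, before `docker run`,
# so it is owned by your uid/gid rather than by root (uid 0).
foo=$(mktemp -d)                          # stands in for /foo
echo "owned by uid $(stat -c '%u' "$foo"), my uid is $(id -u)"
# then: docker run -it --rm -v "$foo":/workdir crops/poky --workdir=/workdir
rmdir "$foo"
```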

Can not execute: bitbake -c menuconfig my-linux

Hi,
My env is: YP2.4_M1, on an Ubuntu 14.04 host PC.
docker run --rm -it -v /data:/workdir crops/poky:ubuntu-16.04 --workdir=/workdir
I got some issues as below:

scripts/kconfig/lxdialog/menubox.o: In function `do_scroll': menubox.c:(.text+0x38): undefined reference to `wrefresh'
scripts/kconfig/lxdialog/menubox.o: In function `do_print_item': menubox.c:(.text+0x17e): undefined reference to `wrefresh'
scripts/kconfig/lxdialog/menubox.o: In function `print_buttons': menubox.c:(.text+0x2b8): undefined reference to `wrefresh'
scripts/kconfig/lxdialog/menubox.o: In function `print_arrows.constprop.0': menubox.c:(.text+0x3c3): undefined reference to `wrefresh'
collect2: error: ld returned 1 exit status
scripts/Makefile.host:116: recipe for target 'scripts/kconfig/mconf' failed
make[3]: *** [scripts/kconfig/mconf] Error 1

Regards,

"lz4c pzstd zstd" appear to be unavailable in PATH

Hi, All,

ERROR: The following required tools (as specified by HOSTTOOLS) appear to be unavailable in PATH, please install them in order to proceed:
  lz4c pzstd zstd

The above error occurs when I build the most recent Yocto Project (up to https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?id=be665a79831c3dab7623e75d112bb35d72e39a82) following the steps in https://www.yoctoproject.org/docs/2.4.2/yocto-project-qs/yocto-project-qs.html#releases and https://www.yoctoproject.org/docs/2.4.2/dev-manual/dev-manual.html#setting-up-to-use-crops.

My steps are as below:

# In host
$ mkdir work_dir && cd work_dir
$ git clone git://git.yoctoproject.org/poky
$ docker run --rm -it --net=host -v `pwd`:/home/pokyuser/yocto -w /home/pokyuser/yocto crops/poky

# In docker
$ source poky/oe-init-build-env
$ bitbake core-image-minimal

I notice that the required tools lz4c, pzstd and zstd were added to HOSTTOOLS in change https://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/?id=1c51e6535bfead57c7913ae74d0d71af4dfe8195

I want to ask if there's a plan to add these unavailable tools.

Thanks
Robbie Cao
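Until the image ships the tools, a derived image can add them. A sketch, assuming an Ubuntu-based tag where the zstd package provides both zstd and pzstd, and the lz4 package provides lz4c (verify the package names for your tag):

```dockerfile
# Hypothetical derived image providing lz4c, pzstd and zstd.
FROM crops/poky:ubuntu-18.04
USER root
RUN apt-get update && apt-get install -y zstd lz4
USER usersetup
```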

Missing program: xxd

For the karo-nxp-bsp fork of poky, the program xxd is needed for various recipes. I've had to copy this over from my host machine to get builds to work. It would be nice if this were included in the docker image.

Unable to startup bitbake server

On my PC,

  • Docker for Windows installed,
  • a portable hard disk shared to docker, such as drive I
  • poky downloaded at drive I:/WORKSPACE/yocto3-ws

In a PowerShell console, running the command

docker run -it --name yocto-ws --mount type=bind,source=I:/WORKSPACE,target=/workdir crops/poky

the yocto-ws container can start up. Everything is OK until now.

But when running bitbake core-image-minimal in the poky build dir, something goes wrong:
server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset)
File "/workdir/yocto3_ws/poky/bitbake/lib/bb/server/process.py", line 392, in __init__
self.sock.bind(os.path.basename(sockname))
PermissionError: [Errno 1] Operation not permitted
......
NOTE: Reconnecting to bitbake server...
NOTE: Previous bitbake instance shutting down?, waiting to retry...
NOTE: Retrying server connection (#8)...
ERROR: Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).

And when I tried executing the ./x86_64-buildtools-nativesdk-standalone-3.0.1.sh script, an error also occurred:
Build tools installer version 3.0.1
Enter target directory for SDK (default: /opt/poky/3.0.1): /workdir/yocto3_ws/buildtools
You are about to install the SDK to "/workdir/yocto3_ws/buildtools". Proceed [Y/n]? Y
Extracting SDK...........done
Setting it up...Traceback (most recent call last):
File "/workdir/yocto3_ws/buildtools/relocate_sdk.py", line 219, in
perms = os.stat(e)[stat.ST_MODE]
OSError: [Errno 2] No such file or directory: '/workdir/yocto3_ws/buildtools/sysroots/x86_64-pokysdk-linux/usr/bin/find'
SDK could not be set up. Relocate script failed. Abort!

What's wrong with crops/poky?
Howto?

Dockerfile, lines 31 & 32

It seems this does not work on macOS Sierra:

Status: Downloaded newer image for crops/poky:latest
Refusing to use a uid less than 101
Traceback (most recent call last):
  File "/usr/bin/usersetup.py", line 66, in <module>
    subprocess.check_call(cmd.split(), stdout=sys.stdout, stderr=sys.stderr)
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', 'restrict_useradd.sh', '70', '70', 'pokyuser']' returned non-zero exit status 1

This article has a universal approach; one has to avoid using constants for the gid and uid:
https://denibertovic.com/posts/handling-permissions-with-docker-volumes/

rsync not found

When I tried to build core-image-base, which installs "linux-libc-headers", it turned out that rsync was not found.
Raw message below:

| /bin/sh: rsync: command not found
| Makefile:1186: recipe for target 'headers_install' failed
| make: *** [headers_install] Error 127

Question: Does it make any sense to enable the TUN network interface in a Docker container?

Hi

Does it make any sense to enable the TUN network interface in a Docker container?:

pokyuser@2bd6d55fa9fe:/yocto/poky/build$ bitbake core-image-sato
Loading cache: 100% |##################################################################################| Time: 0:00:00
Loaded 1438 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "1.50.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "x86_64-poky-linux"
MACHINE              = "qemux86-64"
DISTRO               = "poky"
DISTRO_VERSION       = "3.3.1"
TUNE_FEATURES        = "m64 core2"
TARGET_FPU           = ""
meta                 
meta-poky            
meta-yocto-bsp       = "my-yocto-3.3.1:05a8aad57ce250b124db16705acec557819905ae"

Initialising tasks: 100% |#############################################################################| Time: 0:00:06
Sstate summary: Wanted 0 Local 0 Network 0 Missed 0 Current 2725 (0% match, 100% complete)
NOTE: Executing Tasks
NOTE: Tasks Summary: Attempted 6851 tasks of which 6851 didn't need to be rerun and all succeeded.
pokyuser@2bd6d55fa9fe:/yocto/poky/build$ runqemu qemux86-64
runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
runqemu - ERROR - TUN control device /dev/net/tun is unavailable; you may need to enable TUN (e.g. sudo modprobe tun)
runqemu - INFO - Cleaning up
pokyuser@2bd6d55fa9fe:/yocto/poky/build$ 

Thank you.
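For what it's worth, Docker can pass the host TUN device through to a container, assuming the tun module is loaded on the host. An untested sketch:

```shell
# load the module on the host, then expose /dev/net/tun to the container
sudo modprobe tun
docker run --rm -it \
    --device /dev/net/tun --cap-add NET_ADMIN \
    -v $PWD:/workdir crops/poky --workdir=/workdir
```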

Unable to forward the taskexp UI

Hi everybody,

I'm trying to run the taskexp UI to check all dependencies of an image; however, doing this in the crops container has been a little painful. I've created a new Dockerfile to add to your image some dependencies that the taskexp UI apparently needs:

FROM crops/poky:latest

USER root
RUN apt-get update && apt-get install -y vim trash-cli curl tree htop python3-gi gobject-introspection gir1.2-gtk-3.0 libcanberra-gtk3-0

# We need to add the pokyuser because in crops/poky container
# it's probably created at runtime with ENTRYPOINT instruction
# see https://github.com/crops/poky-container/blob/master/Dockerfile#L55
RUN useradd -ms /bin/bash pokyuser

RUN chown pokyuser:pokyuser -R /home/pokyuser

So I can build the image

docker build . -t crops/poky/latest:extra_deps

and then run it with

docker run \
-e DISPLAY=$DISPLAY \
--network host \
--name warrior \
--volume=/home/jfernandz/Projects/displays/yocto/Ops/building:/home/pokyuser/building/ \
-it crops/poky/latest:extra_deps \
--workdir=/home/pokyuser/building/

but now, when I try to run bitbake to have a look at the dependencies of an image, I get a strange error, "Command execution failed:", with no further details, as you can see in the gif:

(gif: Peek 2021-05-26 14-45)

What do you think this might be due to? Thank you all!

Error installing openssl / building toolchain

Hi all,
Bitbake fails when populating the target toolchain into the container (bitbake -c populate_sdk core-image-base). I'm using the docker image crops/poky:ubuntu-18.04, and the repo yocto warrior from nxp 4.19.35.
Dnf cannot install openssl-1.1.1b for the sdk (see logs).
If I remove it from my local.conf (TOOLCHAIN_HOST_TASK_append = " openssl"), then the task ends well. But I really do need openssl to build my application. My current Yocto project used to run fine without Docker on another computer running Ubuntu 14.04.

bitbake logs :

ERROR: core-image-base-1.0-r0 do_populate_sdk: Could not invoke dnf. Command '/workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/recipe-sysroot-native/usr/bin/dnf -v --rpmverbosity=info -y -c /workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/sdk/image/etc/dnf/dnf.conf --setopt=reposdir=/workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/sdk/image/etc/yum.repos.d --installroot=/workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/sdk/image --setopt=logdir=/workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/temp --repofrompath=oe-repo,/workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/oe-sdk-repo --nogpgcheck install nativesdk-packagegroup-sdk-host openssl packagegroup-cross-canadian-imx8mm-tac2' returned 1:
DNF version: 4.1.0
cachedir: /workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/sdk/image/var/cache/dnf
Added oe-repo repo from /workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/oe-sdk-repo
repo: using cache for: oe-repo
not found other for:
not found modules for:
not found deltainfo for:
not found updateinfo for:
oe-repo: using metadata from Wed 08 Sep 2021 10:11:44 AM UTC.
Last metadata expiration check: 0:00:01 ago on Wed 08 Sep 2021 10:11:44 AM UTC.
No module defaults found
--> Starting dependency resolution
--> Finished dependency resolution
Error:
Problem: conflicting requests

  • package openssl-1.1.1b-r0.aarch64 does not have a compatible architecture
  • nothing provides openssl-conf needed by openssl-1.1.1b-r0.aarch64
    (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

ERROR: core-image-base-1.0-r0 do_populate_sdk:
ERROR: core-image-base-1.0-r0 do_populate_sdk: Function failed: do_populate_sdk
ERROR: Logfile of failure stored in: /workdir/build/tmp/work/imx8mm_tac2-poky-linux/core-image-base/1.0-r0/temp/log.do_populate_sdk.23262
ERROR: Task (/workdir/sources/poky/meta/recipes-core/images/core-image-base.bb:do_populate_sdk) failed with exit code '1'

/usr/lib/sudo/sudoers.so must be only be writable by owner

Hi,

I'm trying to use/build crops/poky on Fedora 28 (Docker version 18.05.0-ce, build f150324) and it fails with following error:

Running Test run-build.sh
sudo: error in /etc/sudo.conf, line 0 while loading plugin `sudoers_policy'
sudo: /usr/lib/sudo/sudoers.so must be only be writable by owner
sudo: fatal error, unable to load plugins
Traceback (most recent call last):
  File "/usr/bin/usersetup.py", line 62, in <module>
    subprocess.check_call(cmd.split(), stdout=sys.stdout, stderr=sys.stderr)
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', 'restrict_groupadd.sh', '0', 'pokyuser']' returned non-zero exit status 1
Test run-build.sh failed

I added RUN chmod 755 /usr/lib/sudo/sudoers.so /etc/sudoers /etc/sudoers.d /etc/sudoers.d/README to get it to run. I'm not sure if it is the right solution.

Regards,
Vincent

unknown user: pokyuser

Just following the instructions on a MacBook Pro 2018 with macOS Mojave. I believe I have followed the steps word for word.

docker run --rm -it -v yocto-workder:/workdir crops/poky --workdir=/workdir
Unable to find image 'crops/poky:latest' locally
latest: Pulling from crops/poky
16c48d79e9cc: Pull complete 
3c654ad3ed7d: Pull complete 
6276f4f9c29d: Pull complete 
a4bd43ad48ce: Pull complete 
c3d94dbdeb70: Pull complete 
52df6c0c42bb: Pull complete 
751ac3ce3f9d: Pull complete 
db8ada821869: Pull complete 
dff7c078f160: Pull complete 
7a0fa8e2a2a4: Pull complete 
a8971d003903: Pull complete 
ea78bfaaa2fa: Pull complete 
5ec634062ee8: Pull complete 
Digest: sha256:8eefa1cef8ebc987188cada7e83a315285d099be05d7fb55451c5e2a481f2e9b
Status: Downloaded newer image for crops/poky:latest
sudo: unknown user: pokyuser
sudo: unable to initialize policy plugin

Container doesn't start correctly in Jenkins pipeline

I run Jenkins in Docker on Ubuntu 16.04 machine. I added a new pipeline item in Jenkins. The declarative pipeline is shown below.

Jenkinsfile (Declarative Pipeline)

pipeline {
    agent {
        docker {
            image 'crops/poky'
        }
    }
    stages {
        stage('Stage 1') {
            steps {
                sh 'uname -a'
            }
        }
    }
}

I got the following error when I run the build.

ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option --entrypoint=''.

What's wrong with the ENTRYPOINT?

Is this the correct way to use crops/poky in a Jenkins pipeline?
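
Some background on the error: the Jenkins Docker agent expects the image to execute whatever command is passed as the docker run argument, while the crops/poky entrypoint instead performs its own pokyuser setup. A sketch of the workaround the error message itself suggests, disabling the entrypoint (note this also skips the pokyuser setup, so workspace file ownership may differ):

```shell
# Sketch: with the entrypoint disabled, the image runs arbitrary commands
# directly, which is what the Jenkins docker agent checks for. Note this
# bypasses the pokyuser setup the entrypoint normally performs.
docker run --rm --entrypoint='' crops/poky uname -a
```

In the declarative pipeline, this corresponds to adding an `args "--entrypoint=''"` line inside the `docker { ... }` agent block.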

All Docker images are currently Ubuntu 18.04

Analysis

Recent commit 7389f73 changed the base image from ubuntu 16.04 to 18.04.

However, the sed command in build-and-test.sh here:

sed -e "s#FROM crops/yocto:ubuntu-16.04#FROM crops/yocto:${BASE_DISTRO}#" Dockerfile > $DOCKERFILE

is still trying to pattern-match against 16.04. Since the base image is never rewritten to opensuse, fedora, etc., this results in all images on Docker Hub becoming Ubuntu 18.04 images.

.travis.yml probably also needs updating, if this previous commit is an indication: a7959cc
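
A sketch of a possible fix, assuming the intent is to substitute whatever crops/yocto base tag the Dockerfile currently uses rather than a hard-coded one:

```shell
# Hypothetical fix: match any crops/yocto base tag instead of the
# hard-coded ubuntu-16.04, so the substitution keeps working after
# base-image bumps. BASE_DISTRO and DOCKERFILE as in build-and-test.sh.
sed -e "s#FROM crops/yocto:.*#FROM crops/yocto:${BASE_DISTRO}#" Dockerfile > "$DOCKERFILE"
```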

How to reproduce:

  1. Run any of these commands:

    a. docker run -it crops/poky:fedora-30
    b. docker run -it crops/poky:opensuse-15.1
    c. docker run -it crops/poky:ubuntu-19.04

  2. Inside the container, run cat /etc/os-release

Expected result:

Different information on each distro, none of it reporting Ubuntu 18.04

Actual result:

On all distro tags:

NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Need a way to get .ssh/config and .ssh/known_hosts to work well with the poky container

The closed issue #29 is biting me as well. Currently there does not seem to be a viable way to share the host's .ssh/config and/or .ssh/known_hosts with the poky container, since the pokyuser account appears to be created on the fly, when the container is run, based on the uid/gid of the workdir. I desperately need a mechanism to share these files in order for my container to build my Yocto image correctly. Do you have any suggestions? It was mentioned in issue #29 that this remained an outstanding issue that needed to be addressed. I'm hoping a solution has been discovered but just not documented.
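
A sketch of one possible interim workaround, assuming OpenSSH's system-wide file locations; this is untested against the on-the-fly pokyuser setup, and mounting over /etc/ssh/ssh_config replaces the distro's default client config:

```shell
# Sketch: mount known_hosts and config into system-wide OpenSSH locations,
# which apply to any user, including the dynamically created pokyuser.
docker run --rm -it \
  -v "$HOME/yocto-workdir:/workdir" \
  -v "$HOME/.ssh/known_hosts:/etc/ssh/ssh_known_hosts:ro" \
  -v "$HOME/.ssh/config:/etc/ssh/ssh_config:ro" \
  crops/poky --workdir=/workdir
```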

Consider keeping EOL distros for build reproducibility

Would it be possible to keep the EOL distro builds, rather than removing them from Docker Hub when the distro goes EOL? Is the reasoning/policy of removing the EOL distro builds documented anywhere?

I am using the images built from this repo in a CI environment to build firmware images for a product. Having the ability to reproduce old builds for maintenance and support purposes is critical. Unfortunately, reproducibility depends slightly on the host distro; BitBake of course tries to keep this dependency to an absolute minimum, but it does mean that it is not always possible to build a given Poky release from an arbitrarily new host distro. It is therefore very important that older host distro images remain available so that images for older products can still be built.

I appreciate that an EOL distro means that it is no longer going to receive the normal level of support and maintenance upstream, however that doesn't mean that it is made unavailable. One can still download a very old version of a distro and run it if desired/necessary. For my particular use case I see the images built from this repo akin to the ability to download an old version of a distro out of necessity. Only keeping images for the very latest/supported distro releases makes them too much of a moving target and unfortunately not useful for my purpose.

Would you consider keeping EOL distro image builds, rather than removing them when adding newer releases? It would be a shame if I had to duplicate/extend the effort put into creating these images all for the sake of keeping a few images hanging around.

P.S. Right now I have a build that requires the fedora-30 image, which was removed 6 days ago. Due to a perfect storm of changes (GCC 10, glibc 2.31, python2 removal), using fedora-31/32 requires a significant amount of work to rebase to a newer Poky release and perform product verification.
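
Until the removal policy changes, one defensive sketch is to mirror the images you depend on to a registry you control while they are still published; `registry.example.com` is a hypothetical placeholder for your own registry:

```shell
# Hedged workaround: mirror the image to a registry you control while it
# is still available, so an upstream EOL removal cannot break old builds.
docker pull crops/poky:fedora-30
docker tag crops/poky:fedora-30 registry.example.com/mirror/poky:fedora-30
docker push registry.example.com/mirror/poky:fedora-30
```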

Request: offer arm64v8 image

At the time of writing, the image offered on Docker Hub supports only amd64; as of Q2 2022, Arm is estimated to hold 9.5% consumer and 7.1% server CPU market share [1], which indicates the usefulness of adding arm64v8 support.

bitbake not found

Hi, I've just tried to work with your image, following what you've written in the readme. After starting the container, bitbake cannot be found. I've tried to find oe-init-build-env to source it, but I can't find it.

no license file

Can you add a license file to indicate the license for this software?

Thanks!

Add pip3

Can you add pip3 to the containers? The use case is installing further Python modules, specifically the bitbake build tool kas.
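
In the meantime, a sketch of a derived image that adds pip3 and kas; the apt package name and the image's default user (`usersetup`, inferred from the SUDO_USER seen in the container environment) are assumptions:

```dockerfile
# Sketch of a derived image with pip3 and kas; package name and the
# image's default user are assumptions based on the Ubuntu 18.04 base.
FROM crops/poky:ubuntu-18.04
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    pip3 install kas && \
    rm -rf /var/lib/apt/lists/*
USER usersetup
```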

python-setuptools-scm in bionic is too outdated for some workflows

Hi guys, I've noticed that the setuptools-scm package in bionic is pretty outdated, so the version cannot be determined at build time from SCM (git) tags. I've tested my software with a newer setuptools-scm version and it works as expected, but when I install python3-setuptools-scm and run python3 setup.py sdist inside the crops/poky container, setuptools sets the version to 0.0.0.

I'm not sure whether a fix is possible for this issue, or whether I should wait until the crops/poky image is based on a newer Ubuntu version.

host uid contamination detected at do_package

Hi all,

I have this error message when I exit the poky container between the do_install and do_package tasks:

Path ./package is owned by uid 1001, gid 1001, which doesn't match any user/group on target. This may be due to host contamination.

I can reproduce the issue with different recipes for instance:

Run this command in the container:
bitbake -c cleansstate connman && bitbake -c install connman && exit

Then, run again this command in the container:
bitbake -c package connman

The command to run the container is:
docker run --rm -it -v="$HOME:$HOME" --workdir="$PWD" crops/poky:debian-10

The weird thing is that if I add a sleep 20 before exiting the container, the problem is gone!
bitbake -c cleansstate connman && bitbake -c install connman && sleep 20 && exit

It also works fine if I never exit the container, or when building without the container.

I would really appreciate some help to understand what I am missing and if I could avoid the long delay before exiting the container.

Thanks
Seb

Getting graphics under qemu?

I've created a core-image-sato ARM build using the container. I'd now like to emulate it, but I'm getting a "Could not initialize SDL (No available video device)" error.
Running with the nographic option lets me boot to a login prompt, but ideally I need to get graphics working.
I tried using the publicvnc option, but the server never starts and the system never boots.

Is this possible under the container?
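
A hedged sketch for the publicvnc route, on the assumption that the missing piece is publishing QEMU's VNC port (5900 by default) out of the container; SDL cannot work inside the container since it has no display:

```shell
# Sketch: publish QEMU's default VNC port so a host-side VNC viewer can
# connect; the container itself has no display, so SDL cannot work there.
docker run --rm -it -p 5900:5900 -v "$PWD:/workdir" crops/poky --workdir=/workdir
# then, inside the container:
#   runqemu qemuarm publicvnc
# and connect a VNC client to localhost:5900 on the host
```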

runqemu and devtool inside poky-container

Hi!

I'm trying to use devtool inside the container which is already running the qemu image. Would this be possible?

docker run --rm -it -p 2222:2222 -v $PWD:/workdir crops/poky:debian-9 --workdir=/workdir
source poky/oe-init-build-env
MACHINE=qemux86-64 bitbake core-image-minimal
runqemu qemux86-64 slirp nographic

Now, how can I connect a new container to use devtool in there and send the devtool results to the running image?

Thanks in advance!
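
One approach (a sketch, not verified against this exact setup) is to open a second shell in the same running container with docker exec, rather than starting a new container, so devtool shares the build directory and the running QEMU instance:

```shell
# Sketch: attach a second shell to the already-running crops/poky container
# instead of starting a new one, so devtool sees the same build directory.
CONTAINER=$(docker ps --filter ancestor=crops/poky:debian-9 --format '{{.ID}}' | head -n1)
docker exec -it -u pokyuser "$CONTAINER" bash
# inside the new shell: source poky/oe-init-build-env, then run devtool
```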

Target file system contaminated with wrong permissions

When I execute the commands below to build my Linux image inside the container, the resulting file system inside the image is contaminated with wrong (non-root) permissions:

docker run -d --name="crops-cont" -v "/home/user/workdir:/workdir" --workdir="/workdir" crops/poky:ubuntu-18.04

sleep 1

docker exec -u pokyuser "crops-cont" bash -c "source src/poky/oe-init-build-env build-dir/ && bitbake my-image"

When building the image, BitBake runs smoothly with no errors/warnings. Below is a capture of the obtained file system:
[screenshot]

I eliminated all the suspects when analyzing the origin of the problem, and I found out that the container is what is causing the issue. When I build my image without it, the problem disappears.

Does anyone have a clue as to what may cause such a problem when using the container?

Also, if I don't add the sleep 1 between the two docker calls in my script, I get:
unable to find user pokyuser: no matching entries in passwd file
Is this normal behavior?

any help is greatly appreciated :)
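
On the sleep 1 question: the container's entrypoint creates pokyuser shortly after start, so a fixed sleep is a race. A sketch of a more robust wait, polling until the user exists (this only addresses the second symptom, not the permission contamination itself):

```shell
# Sketch: poll for pokyuser instead of a fixed sleep; the entrypoint
# creates the user shortly after start, so this avoids the race.
docker run -d --name crops-cont -v "/home/user/workdir:/workdir" --workdir=/workdir crops/poky:ubuntu-18.04
until docker exec crops-cont id pokyuser >/dev/null 2>&1; do
    sleep 0.2
done
docker exec -u pokyuser crops-cont bash -c "source src/poky/oe-init-build-env build-dir/ && bitbake my-image"
```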

PATH environment variable cannot be modified

I tried to add a directory to the PATH, but it is not possible in the poky containers (tried with crops/poky and crops/poky:ubuntu-18.04).

dockerfile

FROM crops/poky

ENV SOME_PATH=/SOME/PATH
ENV PATH="$SOME_PATH:$PATH"

CMD bash

Result: PATH is not updated

pokyuser@7eab69c2b7c5:~$ env
HOSTNAME=7eab69c2b7c5
SHELL=/bin/sh
TERM=xterm
USER=pokyuser
LS_COLORS=… (long value omitted)
SUDO_USER=usersetup
SUDO_UID=70
USERNAME=pokyuser
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
PWD=/home/pokyuser
LANG=en_US.UTF-8
SHLVL=2
SUDO_COMMAND=/usr/bin/poky-launch.sh /home/pokyuser /bin/sh -c bash
HOME=/home/pokyuser
SOME_PATH=/SOME/PATH
LOGNAME=pokyuser
LESSOPEN=| /usr/bin/lesspipe %s
SUDO_GID=70
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
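
The env dump offers a hint: the session is re-entered through sudo (SUDO_COMMAND=/usr/bin/poky-launch.sh ...), and sudo typically resets PATH to its configured secure_path, discarding the ENV PATH from the Dockerfile, while SOME_PATH survives untouched. A sketch of a workaround, assuming the interactive shell sources /etc/bash.bashrc (untested against every crops/poky tag):

```dockerfile
# Sketch: persist the PATH change in a shell startup file, which is read
# after sudo has reset the environment. Assumes the interactive shell
# sources /etc/bash.bashrc; the default user "usersetup" is an assumption.
FROM crops/poky
USER root
RUN echo 'export PATH="/SOME/PATH:$PATH"' >> /etc/bash.bashrc
USER usersetup
```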
