pypa / manylinux
Python wheels that work on any linux (almost)
License: MIT License
Are there one or more official maintainers for this image? I ask because I am getting the impression maintenance is starting to drift, and it would be good to establish whether there is anyone who feels able to take responsibility for maintenance.
https://github.com/pypa/manylinux/blob/master/docker/build_scripts/build.sh doesn't seem to have any support for pypy, it'd be awesome if it did :-)
We package our Python software and all its dependencies as a Debian package for easy installation and distribution.
We run lintian on the .deb package files, and recently it began issuing the following warnings:
W: clusterhq-python-flocker: hardening-no-relro opt/flocker/lib/python2.7/site-packages/msgpack/_packer.so
W: clusterhq-python-flocker: hardening-no-relro opt/flocker/lib/python2.7/site-packages/msgpack/_unpacker.so
(Our ref https://clusterhq.atlassian.net/browse/FLOC-4383)
We think this is because we recently updated to pip==8.1.1, which installs manylinux binary wheels. The binaries in these wheels are not compiled with the hardening features required of binaries in Debian packages:
(venv)root@6dcaee731129:/# hardening-check /tmp/venv/lib/python2.7/site-packages/msgpack/*.so
/tmp/venv/lib/python2.7/site-packages/msgpack/_packer.so:
Position Independent Executable: no, regular shared library (ignored)
Stack protected: no, not found!
Fortify Source functions: no, only unprotected functions found!
Read-only relocations: no, not found!
Immediate binding: no, not found!
/tmp/venv/lib/python2.7/site-packages/msgpack/_unpacker.so:
Position Independent Executable: no, regular shared library (ignored)
Stack protected: no, not found!
Fortify Source functions: no, only unprotected functions found!
Read-only relocations: no, not found!
Immediate binding: no, not found!
Perhaps the manylinux build environment could set the necessary environment variables, e.g.:
dpkg-buildflags --export
export CFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export CPPFLAGS="-D_FORTIFY_SOURCE=2"
export CXXFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security"
export FFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4"
export GCJFLAGS="-g -O2 -fstack-protector --param=ssp-buffer-size=4"
export LDFLAGS="-Wl,-Bsymbolic-functions -Wl,-z,relro"
Or use http://manpages.ubuntu.com/manpages/wily/man1/hardening-wrapper.1.html
Maybe related to #46
CentOS 5 just went end-of-life - see #96.
As a consequence (I guess), the repositories that the docker image expects have disappeared, and any manylinux build trying to access these repositories breaks, e.g.:
It looks like we need to update the docker image to use still-existing repos.
See: https://mail.python.org/pipermail/distutils-sig/2017-April/030362.html
First of all, apologies in advance -- I'm new to building wheels, so this might be a very basic question but...
I'm trying to build a manylinux wheel for OpenCV, but the OpenCV build process for Python requires PYTHON_LIBRARY to be set to your libpython<version>.so.
As far as I understand, it is not recommended to link to libpython when building wheels. But what would be the alternative?
Is it the case that, as long as OpenCV links their library to libpython, it will not be possible to build a manylinux wheel for it?
Should we add /usr/local/lib:/usr/local/lib64 to LD_LIBRARY_PATH by default? I notice openblas, for example, installs to /usr/local/lib.
Hello,
I was trying to use the quay.io/pypa/manylinux1_i686 container to build 32-bit Python binaries. Unfortunately, the wheels created were tagged cp*-cp*m-linux_x86_64 (with * being the version being compiled against). I would have expected linux_i686. Is that supposed to happen?
thanks
Frank
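For context, a short sketch (not the container's actual tagging code, and the helper name here is illustrative) of where the platform half of the tag comes from: it is derived from sysconfig/distutils platform detection, which reflects the kernel's uname, so a 32-bit userland on a 64-bit kernel still reports x86_64.

```python
# Sketch (assumption: distutils-style tag naming, not pip/wheel internals):
# the platform half of a wheel tag is derived from sysconfig.get_platform(),
# which reflects the kernel's uname -- so an i686 container running on an
# x86_64 kernel reports x86_64 unless the process runs under `linux32`.
import platform
import sysconfig

def guessed_plat_tag():
    """Platform tag a build here would likely get, e.g. 'linux_x86_64'."""
    return sysconfig.get_platform().replace("-", "_").replace(".", "_")

print(guessed_plat_tag())   # e.g. linux_x86_64 or linux_i686
print(platform.machine())   # what uname reports for this process
```

Wrapping the build in the `linux32` (setarch) personality tool changes what uname reports to the process, which is the usual fix for this mismatch.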
There are manylinux1 packages on PyPI that contain unstripped shared libraries that cause strip to fail.
dev@devenv:~$ virtualenv manylinux1_relocatable_test
New python executable in /home/dev/manylinux1_relocatable_test/bin/python
Installing setuptools, pip, wheel...done.
dev@devenv:~$ . manylinux1_relocatable_test/bin/activate
(manylinux1_relocatable_test) dev@devenv:~$ pip install cffi numpy
Collecting cffi
Using cached cffi-1.7.0-cp27-cp27mu-manylinux1_x86_64.whl
Collecting numpy
Using cached numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl
Collecting pycparser (from cffi)
Installing collected packages: pycparser, cffi, numpy
Successfully installed cffi-1.7.0 numpy-1.11.1 pycparser-2.14
(manylinux1_relocatable_test) dev@devenv:~$ file manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so
manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=d272756d64640cc3e6584d1a3db4ae0ae2990b48, not stripped
manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=85e6780ba62dd077bd2fbe1e765763d30ae25f10, not stripped
(manylinux1_relocatable_test) dev@devenv:~$ strip manylinux1_relocatable_test/lib/python2.7/site-packages/_cffi_backend.so manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/libopenblasp-r0-39a31c03.2.18.so
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj[.note.gnu.build-id]: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/stG8EFkj: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C[.note.gnu.build-id]: Bad value
BFD: manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Not enough room for program headers, try linking with -N
strip:manylinux1_relocatable_test/lib/python2.7/site-packages/numpy/.libs/stZ7PY7C: Bad value
This behavior has been observed on Debian 8, Ubuntu 14.04, and Ubuntu 16.04.
If this is not a general artifact of the manylinux build process (and therefore not applicable to this project specifically), but rather a common (mis)configuration affecting multiple packages' manylinux1 build settings (e.g. cffi, numpy), then please let me know and I'll close this issue (and reopen it against the relevant projects).
Due to the EOL-ing of Centos 5, #102 recently patched the repo config. However, this still doesn't work:
$ docker run -it quay.io/pypa/manylinux1_x86_64 bash
[root@dac4bb86fa5b /]# yum update
http://107.158.252.35/centos/5/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again
[root@dac4bb86fa5b /]#
There are two issues I can see with the vault.centos.org repo configuration. Patching coming shortly.
I got the python-manylinux-demo repo fixed and running with the current docker layout. It might be a good idea to link to it in the README as a demonstration.
I saw https://github.com/pypa/python-manylinux-demo which is neat.
But it would also be nice to have instructions that I can simply run on my computer to create the wheels.
Something like:
Step 1: Install docker from the instructions at https://docs.docker.com/engine/installation/
Step 2: Pull the docker image: docker pull quay.io/pypa/manylinux1_x86_64
(For 64bit)
And so on ...
The Travis build example at https://github.com/pypa/python-manylinux-demo isn't as helpful when I want to do it locally.
Sorry if this is obvious, but I tried for fun to manually create a manylinux wheel for h5py from a shell inside the provided x64 Docker container. However, it seems the rpath fixup didn't go too well for libz:
After the fixed up wheel was installed:
[root@a6722ca968b0 h5py-master]# ldd /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/h5a.cpython-34m.so
linux-vdso.so.1 => (0x00007ffe2095d000)
libhdf5-eb619e28.so.100.0.0 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/libhdf5-eb619e28.so.100.0.0 (0x00007f3e80eb2000)
libhdf5_hl-250248ee.so.100.0.0 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/libhdf5_hl-250248ee.so.100.0.0 (0x00007f3e80c89000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f3e80a67000)
libc.so.6 => /lib64/libc.so.6 (0x00007f3e8070e000)
librt.so.1 => /lib64/librt.so.1 (0x00007f3e80504000)
libz-a147dcb0.so.1.2.3 => not found
libdl.so.2 => /lib64/libdl.so.2 (0x00007f3e80300000)
libm.so.6 => /lib64/libm.so.6 (0x00007f3e8007c000)
libz-a147dcb0.so.1.2.3 => /opt/_internal/cpython-3.4.4/lib/python3.4/site-packages/h5py/.libs/./libz-a147dcb0.so.1.2.3 (0x00007f3e7fe67000)
/lib64/ld-linux-x86-64.so.2 (0x00005606a1cff000)
Notice the two references to libz-a147dcb0.so.1.2.3, one of them found, the other not. Any idea what could have gone wrong, or how I can debug further? The bundling of libhdf5 seems to have worked fine.
Just as a thought, one could actually combine the functionality of these two images. See this comment and this Dockerfile.
I am getting the following linking error when trying to build with the latest Docker image:
/opt/rh/devtoolset-2/root/usr/libexec/gcc/x86_64-CentOS-linux/4.8.2/ld: /opt/2.7mu/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making a shared object; recompile with -fPIC
/opt/2.7mu/lib/libpython2.7.a: could not read symbols: Bad value
I am testing manylinux numpy wheels.
In particular, I am testing this guy: http://nipy.bic.berkeley.edu/manylinux/numpy-1.10.4-cp27-none-linux_x86_64.whl
With a default install and test of Python, starting from either Wheezy or Jessie:
docker run -ti --rm -v $PWD:/io debian:latest /bin/bash
docker run -ti --rm -v $PWD:/io tianon/debian:wheezy /bin/bash
and running this script:
apt-get update
apt-get install -y python curl
curl -sLO https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install -f https://nipy.bic.berkeley.edu/manylinux numpy nose
python -c "import numpy; numpy.test()"
I get this:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 180, in <module>
from . import add_newdocs
File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 14, in <module>
from . import multiarray
ImportError: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
Sure enough, in order for the wheel to work, I need:
apt-get install libpython2.7
Maybe we need to add a check / informative error message for the presence of libpython?
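A hypothetical check along those lines (the function name and message are illustrative, not numpy's or pip's code), using ctypes.util to look for a shared libpython before the opaque ImportError would otherwise occur:

```python
# Hypothetical import-time check (illustrative only; not part of any project):
# warn early when no shared libpython can be located, instead of failing
# later with "libpython2.7.so.1.0: cannot open shared object file".
import ctypes.util

def libpython_available(version="2.7"):
    """True if a shared libpython for `version` can be located."""
    return ctypes.util.find_library("python" + version) is not None

if not libpython_available("2.7"):
    print("Hint: this wheel needs libpython2.7 "
          "(e.g. `apt-get install libpython2.7`).")
```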
Travis-CI just announced a new feature for build stages:
"For example, you can configure a deployment stage to run only after several test jobs have all completed successfully."
Maybe we can/should use it?
Currently:
~$ sudo docker run --rm quay.io/manylinux/manylinux /opt/2.7/bin/python -c 'import sys; print(hex(sys.maxunicode))'
0xffff
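A small helper makes the distinction explicit: 0xffff indicates a narrow (UCS-2) build, while the Python 2 manylinux interpreters are expected to be wide (UCS-4, the cp27mu ABI).

```python
# Distinguish narrow (UCS-2) from wide (UCS-4) CPython builds; manylinux1
# wheels for Python 2 target the wide (cp27mu) ABI.
import sys

def unicode_width():
    return "UCS-4 (wide)" if sys.maxunicode > 0xFFFF else "UCS-2 (narrow)"

print(hex(sys.maxunicode), unicode_width())
```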
Hi there,
While using the suggested Docker containers I ran into a problem: I couldn't upload some of the wheels they had generated. It turns out the problem was a broken rename by auditwheel 1.3.0, the version installed in the container. Updating to 1.4.0 solves it (looks like this commit), so it might be a good idea to update the container image.
FTR, the broken renamed wheels looked like this:
$ twine upload -r pypi wheelhouse/*.whl
Uploading distributions to https://pypi.python.org/pypi
Uploading pyuv-1.3.0-cp27-cp27m-linux_i686.manylinux1__i686.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform for url: https://pypi.python.org/pypi
I guess it would be good at some point to provide a 32-bit manylinux1 image.
Actually arranging this will be a pain in the butt, because we'll definitely want to share build scripts between the 32-bit and 64-bit builds, but docker has a strong idea that every docker "build context" must be totally self-contained. Like, you're not even allowed to symlink out of it, because that would be bad. Maybe this will get better in the future, but AFAICT right now the only way to make this work reasonably well given docker + quay.io's limitations is:
In theory the git-64 and git-32 repositories could be the same, with the Dockerfiles in different subdirectories. But we'd still need git-common to be separate (so that it could be checked out into both subdirectories), and we'd still need quay-64 and quay-32 to be separate so that users can specify which docker image they want to pull.
[Hopefully in the future quay.io will start allowing different Dockerfiles to share an overlapping build context (currently impossible - hopefully they'll update this doc page if that changes :-)), and then we could combine all the git repositories into one git repo that feeds multiple quay.io repos.]
This is all doable enough, I guess. The biggest hassle is that getting CI working will be difficult with everything spread out over multiple repos.
Given discussion here and here, it sounds like we can/should drop the UCS4-only requirement from PEP 513 after all.
If that's right, then I guess there are a few things to do, including updating the check-manylinux.py script. For the docker images, I propose the following layout:
/opt/2.7.11-ucs2
/opt/2.7.11-ucs4
/opt/2.7-ucs2 -> 2.7.11-ucs2
/opt/2.7-ucs4 -> 2.7.11-ucs4
/opt/2.7 -> 2.7-ucs4
What are the plans, if any, for musl-based distributions? Musl-based distributions are popular in the container world because they are much lighter. Docker announced that they will be redoing all their official images to be Alpine Linux based. Will there be a separate tag?
I am using the manylinux1_x86_64 image to create Python wheels with this Dockerfile:
FROM quay.io/pypa/manylinux1_x86_64
ENV GLPK_VER="4.60"
RUN wget http://ftp.gnu.org/gnu/glpk/glpk-${GLPK_VER}.tar.gz -O - | tar xz
WORKDIR glpk-${GLPK_VER}
RUN ./configure && make install
and last week I could, with this test.sh:
#!/bin/bash
auditwheel show /io/wheelhouse/cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl
run
docker run --rm -v `pwd`:/io cobrapy_builder /io/test.sh
to get the expected output:
cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl is consistent with the
following platform tag: "linux_x86_64".
The wheel references the following external versioned symbols in
system-provided shared libraries: GLIBC_2.4.
The following external shared libraries are required by the wheel:
{
"libc.so.6": "/lib64/libc-2.5.so",
"libglpk.so.40": "/usr/local/lib/libglpk.so.40.1.0",
"libm.so.6": "/lib64/libm-2.5.so",
"libpthread.so.0": "/lib64/libpthread-2.5.so"
}
In order to achieve the tag platform tag "manylinux1_x86_64" the
following shared library dependencies will need to be eliminated:
libglpk.so.40
Now, however, I only get:
nothing to do
cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl is consistent with the
following platform tag: "manylinux1_x86_64".
The wheel references no external versioned symbols from system-
provided shared libraries.
The wheel requires no external shared libraries! :)
I can circumvent that by installing pyelftools==0.23 (replacing the 0.24 version); that seems to fix the problem for all Pythons, i.e.
#!/bin/bash
/opt/python/cp35-cp35m/bin/pip install pyelftools==0.23
auditwheel show /io/wheelhouse/cobra-0.5.1b1-cp35-cp35m-linux_x86_64.whl
Any ideas? The wheel file I'm testing with is here:
https://drive.google.com/open?id=0B9Jk-Vpwjhb0ME1HNEx0OFNNbVU
I understand from #69 that it is recommended that extension modules be compiled with references to the CPython ABI left undefined, and that libpythonX.Y.so is therefore intentionally omitted from the docker image.
However, the python-config scripts in the Python installations still contain -lpythonX.Y options. It would be great if this could be removed so that build scripts that use python-config to detect Python installations work correctly.
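As a quick way to see what a given installation advertises (a diagnostic sketch, not a fix), the same link information python-config emits is exposed through sysconfig config vars:

```python
# Diagnostic sketch: the link flags python-config reports are exposed via
# sysconfig config vars, so one can check whether -lpythonX.Y would be added.
# Some vars may be None on non-CPython or non-POSIX builds.
import sysconfig

print("Py_ENABLE_SHARED:", sysconfig.get_config_var("Py_ENABLE_SHARED"))
print("LIBS:", sysconfig.get_config_var("LIBS"))
print("BLDLIBRARY:", sysconfig.get_config_var("BLDLIBRARY"))
```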
There are repositories, and tags within those repositories, and it's all a bit confusing.
I'm currently inclined towards making multiple repositories: pypa/manylinux1_x86_64, pypa/manylinux1_i686 (and then later maybe pypa/manylinux2_x86_64, etc.), and avoiding routine use of any tags except latest.
But, as previously stated, I have no idea what I'm doing with docker, so would welcome any thoughts :-)
I need cmake to build my manylinux library, and yumming it takes quite a bit of time. Any chance of installing it in the main image?
Hey Nathaniel. Thanks for inviting me.
Do you want to do discussion here or on the google group?
Our wheel building process contacts an https server, but fails with an [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed error.
Looking into it in a bit more detail, I found that the system has a CA file at /etc/pki/tls/certs/ca-bundle.crt, but it's not loaded by default, probably because the Pythons expect it at a different location? As a workaround I call export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt, but it would be great to wire things up so that the system CA file is used by default.
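For anyone debugging the same thing, the paths the interpreter actually consults can be inspected directly (a diagnostic sketch):

```python
# Diagnostic sketch: show where this Python looks for CA certificates and
# which environment variable (usually SSL_CERT_FILE) overrides the location.
import os
import ssl

paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile,
      "(exists)" if paths.cafile and os.path.exists(paths.cafile) else "(missing)")
print("capath:", paths.capath)
print("override via:", paths.openssl_cafile_env)
```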
I'm curious whether there are any plans to have a Python 3.6 manylinux image out soon after the final release next week.
Greetings,
I'm going to be putting in a proposal to teach a tutorial at SciPy this year on package building. I'm hoping to cover everything, including setup.py, wheels, auditwheel/manylinux, binary compatibility, conda-build, and conda-forge. Would anyone like to also join in on that proposal?
I would like to submit this proposal by Friday, Feb 24.
Cross-link similar issue at conda-forge: conda-forge/conda-forge.github.io#338
I'm trying to build cryptography using the docker image. The openssl version that is installed there is 0.9.8u, which generates the following error in cryptography:
$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from cryptography.hazmat.backends import default_backend
>>> default_backend()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
_default_backend = MultiBackend(_available_backends())
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
"cryptography.backends"
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
from cryptography.hazmat.backends.openssl.backend import backend
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 47, in <module>
from cryptography.hazmat.bindings.openssl import binding
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 250, in <module>
_verify_openssl_version(Binding.lib.SSLeay())
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/bindings/openssl/binding.py", line 230, in _verify_openssl_version
"You are linking against OpenSSL 0.9.8, which is no longer "
RuntimeError: You are linking against OpenSSL 0.9.8, which is no longer support by the OpenSSL project. You need to upgrade to a newer version of OpenSSL.
Running yum install openssl is unhelpful, as 0.9.8e is the latest that's available for CentOS 5.
I noticed that the image build script already builds a newer openssl (for curl), but then deletes it. Maybe this can stay in the image and be available for modules like cryptography?
The version of autoconf shipping with CentOS 5 is sufficiently out of date to prevent building various projects (among them libffi). It's easy to install it yourself, but since this seems likely to be a common problem for users of the manylinux1 image I'd be happy to send a PR to install the latest autoconf (2.69, released 4 years ago) as part of the build process if people think it belongs here.
(It really just looks like RUN wget http://ftp.gnu.org/gnu/autoconf/autoconf-latest.tar.gz && tar zxvf autoconf-latest.tar.gz && cd autoconf* && ./configure && make install)
Patch has been submitted here: pypa/pip#3446
The application I'm trying to build has a particular need to embed the interpreter into an executable, which requires static linking with libpython.
Would it perhaps be possible not to delete libpython.a from the image, but keep it in the _internal directory for those who are absolutely certain they need to use it?
I am opening this issue to keep track of the open questions raised in the discussion at #44 (comment).
An attacker might find ways to silently install a rootkit in the binaries (especially the gcc and patchelf commands) of our quay.io hosted docker images. The attack could happen on quay.io, on github.com, on the travis build machine or on one of the third party resources we fetch software from in our build scripts (centos repositories, patchelf source repository and others). At the moment we have no easy way to detect such attacks.
One way we could at least detect that something is wrong would be to compute the sha256sum of all the binaries in our docker images and store that list of hashes offline; a hash of the hash list could also be pushed to an independent append-only time-stamped public log (for instance a dedicated twitter account).
We should also probably setup some automated CI bot to periodically recompute the sha256sum list of all the files in the public quay.io hosted images and compare them to the matching entry of the append-only time-stamped public log.
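A minimal sketch of the hashing step (demonstrated on a throwaway directory so it is self-contained; in practice the root would be the image's filesystem):

```python
# Minimal sketch of the proposed audit: emit a "sha256  path" line per
# regular file under a directory tree, suitable for storing offline and
# re-checking later. Demonstrated on a throwaway temp directory.
import hashlib
import os
import tempfile

def sha256_of(path, bufsize=1 << 16):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_tree(root):
    """Yield 'digest  path' lines for every regular file under root, sorted."""
    for dirpath, _, names in sorted(os.walk(root)):
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            if os.path.isfile(path) and not os.path.islink(path):
                yield "%s  %s" % (sha256_of(path), path)

# Demo on a throwaway directory.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "example.bin"), "wb") as f:
    f.write(b"hello")
for line in hash_tree(demo):
    print(line)
```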
Builds are failing; last two commits to master haven't actually been built: https://travis-ci.org/pypa/manylinux/builds
We may also need to refactor .travis.yml to split the two dockerfiles into two parallel travis jobs before master will build again.
Possibly related to #29 (?), gevent got a report in gevent/gevent#789 about bad unicode symbols from the wheel downloaded from PyPI. The 1.1.0 wheels worked, but the 1.1.1 wheels, built on an image that supported both ABIs, didn't. I'm not enough of an ABI expert to know exactly what's going on at first glance.
Since this wheel was just built and uploaded with the default manylinux docker image, I'm hoping somebody here might have an idea. Thanks!
CC @woozyking
Some projects quite reasonably assume that C99 inline semantics are used when compiling in -std=c99 or -std=c11 modes, which are well supported by GCC 4.8.
However, the compiler's default seems to be -fgnu89-inline, which forces incompatible pre-C99 inline semantics even in C99/C11 mode; this can cause quite confusing "multiple definition" errors in libraries that rely on the correct semantics.
Unfortunately, adding -fno-gnu89-inline isn't quite enough to fix this, because the libc headers in /usr/include are apparently not compatible with C99 semantics. More recent versions of the headers explicitly force gnu89 inline semantics by using the __gnu_inline__ compiler attribute, which was introduced around the time that C99 support was.
So, besides adding -fno-gnu89-inline to my CFLAGS, I have to add this monkey patch to my dockerfile to make libraries like OpenAL compile:
find /usr/include/ -type f -exec sed -i 's/\bextern _*inline_*\b/extern __inline __attribute__ ((__gnu_inline__))/g' {} +
Assuming we no longer need to support GCC 4.2, perhaps we could add this command to the manylinux dockerfile? It would still only be half of the solution, since I believe it is still rather odd for GCC to apply gnu89 inline semantics in C99/C11 modes, but the latter is much easier to work around by adding said flag to CFLAGS.
Alternatively, would it be possible to update the libc headers without breaking compatibility?
I've found a situation where manylinux wheels don't work. I think it might be a bug. To reproduce:
(On cpython source directory cloned from github)
$ git checkout v3.5.3
$ ./configure --with-pydebug
$ make -j 7
$ ./python -m venv test_env
$ source test_env/bin/activate
Then install some package that has a manylinux wheel, for example sip and pillow:
(test_env) $ pip install sip
Collecting sip
Could not find a version that satisfies the requirement sip (from versions: )
No matching distribution found for sip
(test_env) $ pip install pillow
Collecting pillow
Downloading Pillow-4.0.0.tar.gz (11.1MB)
(the source package is being downloaded, not the manylinux wheel)
The same packages can be installed via manylinux wheels when using the system's Python on the same system; installation only fails with a self-compiled Python. (My system is Arch Linux x86_64.)
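For background on how pip decides: a --with-pydebug interpreter gets a different ABI tag (e.g. cp35dm), which no published wheel matches, and PEP 513 additionally specifies a compatibility check that probes glibc via ctypes. A sketch of that glibc probe, along the lines the PEP describes (names adapted, not pip's exact code):

```python
# Sketch of the glibc probe PEP 513 describes: if the running interpreter
# cannot report a suitable glibc, manylinux1 wheels are considered
# incompatible and pip falls back to the source distribution.
import ctypes

def glibc_version_string():
    """Return e.g. '2.17', or None when not running against glibc."""
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return None  # not glibc (e.g. musl), or symbol unavailable
    gnu_get_libc_version.restype = ctypes.c_char_p
    version = gnu_get_libc_version()
    if isinstance(version, bytes):
        version = version.decode("ascii")
    return version

print(glibc_version_string())
```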
The manylinux/auditwheel repo was copied to pypa/auditwheel, but the build script in this pypa/manylinux repo still clones from manylinux/auditwheel. Right now they are synchronized, but it would feel more official to clone from the pypa repo, and doing so would allow removing the other repo later.
Here is the relevant snippet from the build log:
*** WARNING: renaming "_sqlite3" since importing it failed: build/lib.linux-x86_64-3.6/_sqlite3.cpython-36m-x86_64-linux-gnu.so: undefined symbol: sqlite3_stmt_readonly
The following modules found by detect_modules() in setup.py, have been
built by the Makefile instead, as configured by the Setup files:
atexit pwd time
Following modules built successfully but were removed because they could not be imported:
_sqlite3
This issue is being discussed here: ghaering/pysqlite#85
The problem is that apparently Python 3.6 requires a more recent version of libsqlite3 than the one available on old CentOS. We could build our own sqlite3 from source in the docker image, but the configure script currently does not make it possible to pass a path to a custom sqlite3 install.
CentOS 5 support ends in about a month and a half:
CentOS-5 updates until March 31, 2017
https://wiki.centos.org/FAQ/General#head-fe8a0be91ee3e7dea812e8694491e1dde5b75e6d
Is there a plan to migrate PEP-513 to a new minimum subset of libraries+versions eventually?
Please direct me to an existing discussion or a more appropriate forum if one exists.
wget and I both seem to be getting a 404 here: http://www.openssl.org/source/openssl-1.0.2e.tar.gz
(Also, should probably use https I guess, even if we are checking the sha256 -- belt and suspenders.)
It would be nice if the docker images contained a function similar to travis_retry. See jolicode/docker-images#8 for an identical request in another docker image.
distlib has its own implementation of the platform tag stuff, see: https://bitbucket.org/pypa/distlib/src/66e48052714fc7060999c9b5bea12cf59edeb3b7/distlib/wheel.py?at=default&fileviewer=file-view-default#wheel.py-918
It looks like pip doesn't actually use this, but it would still be good to fix.
It looks like #72 was merged a month ago, but still hasn't landed on the quay.io image, because the build timed out. I restarted it last night, and it timed out again. I just restarted it a second time -- who knows, maybe it'll work.
Offending build: https://travis-ci.org/pypa/manylinux/builds/138611073
The limit appears to be just under 50 minutes, and all of our recent master builds (including the successful ones) have been right up against this limit: https://travis-ci.org/pypa/manylinux/builds
I'm not sure what the right solution is but we need to do something about this :-(
This is a very speculative issue. I'm aware that PEP 513 says manylinux1 is only defined for i686 and x86_64. But it would be nice if instructions telling people to pip install compiled modules worked easily and quickly on things like the Raspberry Pi too. It might also be valuable to the few brave souls trying to make Python work on mobile platforms.
So this issue is a place to work out what would be needed.
ARM has a confusing mix of architecture versions (e.g. ARMv7) and cores (e.g. ARM11). There may also be optional modules. Debian supports three buckets of ARM architectures: armhf requires ARMv7 and a floating point unit (hf = hardware float); armel supports older architectures and cores without an FPU; arm64 is for the newer 64-bit processors. In terms of raspis:
Minimum library versions: These probably don't need to be all that old, because the systems I think this would target are likely to be running a relatively recent distro. But we'd need to work out what they are.
Distro for building: What distro would be the equivalent of the Centos 5.11 we use for building x86 and x64 manylinux wheels?
Build platform: Are there any CI services which offer ARM builds? Is virtualisation fast enough? Can we make it easy for people to build wheels on their own Raspberry Pi or similar device?
It would be good to have images with a version rather than always overwriting the latest tag. Thus, one could use explicit versions. My proposal would be to use the date as a version, e.g. pypa/manylinux1_x86_64:20170502.
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp26-cp26m/bin/python -c "import sys; print(sys.platform)"
linux4
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp26-cp26mu/bin/python -c "import sys; print(sys.platform)"
linux4
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp27-cp27m/bin/python -c "import sys; print(sys.platform)"
linux2
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp27-cp27mu/bin/python -c "import sys; print(sys.platform)"
linux2
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp33-cp33m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp34-cp34m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp35-cp35m/bin/python -c "import sys; print(sys.platform)"
linux
$ docker run quay.io/pypa/manylinux1_x86_64 /opt/python/cp36-cp36m/bin/python -c "import sys; print(sys.platform)"
linux
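Because of values like these, comparing sys.platform for equality breaks across interpreter versions; old configure scripts even baked the build kernel's major version into the string. A prefix check is the robust idiom:

```python
# sys.platform may be 'linux', 'linux2', or even 'linux4' (older builds
# embedded the build kernel's major version), so test the prefix rather
# than compare for equality.
import sys

def is_linux(platform_string):
    return platform_string.startswith("linux")

assert is_linux("linux")
assert is_linux("linux2")
assert is_linux("linux4")
assert not is_linux("darwin")
print(is_linux(sys.platform))
```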
I am attempting to build a manylinux wheel from the nupic.core project in github. I am using the docker image quay.io/pypa/manylinux1_x86_64. nupic.core builds and statically links against the capnproto library, which relies on signalfd.h by default. Unfortunately, the docker image quay.io/pypa/manylinux1_x86_64 does not provide signalfd.h, so my build fails like this:
Linking CXX static library libkj.a
[ 27%] Built target kj
[ 29%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async.c++.o
[ 30%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async-unix.c++.o
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-unix.c++:36:26: fatal error: sys/signalfd.h: No such file or directory
#include <sys/signalfd.h>
I even tried a capnproto-specific workaround to steer capnproto away from signalfd.h by setting -DKJ_USE_EPOLL=0. However, in that case the build failed due to the lack of other functions and constants (in glibc?):
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++: In member function ‘int kj::{anonymous}::SocketAddress::socket(int) const’:
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:348:13: error: ‘SOCK_NONBLOCK’ was not declared in this scope
type |= SOCK_NONBLOCK | SOCK_CLOEXEC;
^
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:348:29: error: ‘SOCK_CLOEXEC’ was not declared in this scope
type |= SOCK_NONBLOCK | SOCK_CLOEXEC;
^
In file included from /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:24:0:
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++: In lambda function:
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:648:38: error: ‘O_CLOEXEC’ was not declared in this scope
KJ_SYSCALL(pipe2(fds, O_NONBLOCK | O_CLOEXEC));
^
/nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-io.c++:648:47: error: ‘pipe2’ was not declared in this scope
KJ_SYSCALL(pipe2(fds, O_NONBLOCK | O_CLOEXEC));
I recursively grep'ed for these symbols in the docker image's /usr/include/ directory, and they were all absent.
Regarding pipe2, the Linux man page has this to say:
pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9.
Regarding signalfd:
signalfd() is available on Linux since kernel 2.6.22. Working support is provided in glibc since version 2.8. The signalfd4() system call (see NOTES) is available on Linux since kernel 2.6.27
A newer version of autoconf was added through #72; unfortunately, that is insufficient to build some libraries, leading to errors such as error: Libtool library used but 'LIBTOOL' is undefined.
Installing newer versions of automake and libtool fixes this.
I'm currently using the following as a workaround in my build script:
wget -q https://ftp.gnu.org/gnu/autoconf/autoconf-latest.tar.gz && tar zxf autoconf-latest.tar.gz && cd autoconf* && ./configure > /dev/null && make install > /dev/null && cd ..
wget -q https://ftp.gnu.org/gnu/automake/automake-1.15.tar.gz && tar zxf automake-*.tar.gz && cd automake* && ./configure > /dev/null && make install > /dev/null && cd ..
wget -q https://ftp.gnu.org/gnu/libtool/libtool-2.4.5.tar.gz && tar zxf libtool-*.tar.gz && cd libtool* && ./configure > /dev/null && make install > /dev/null && cd ..