dotrun's Issues
Commands' exit status is ignored
When running this command:

```shell
dotrun exec cp
echo $?
# prints 0 instead of 1
```

The command's exit code is ignored. The same goes for `package.json` scripts: e.g. if `dotrun lint` fails, the exit code of the script is 1, but dotrun returns 0.
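A sketch of the expected behaviour, with a hypothetical helper (not dotrun's actual code): the wrapper should capture the child's return code and exit with it, rather than discarding it.

```python
import subprocess

def run_and_propagate(command):
    """Run a command and return its exit status instead of discarding it.

    A dotrun-style entry point would then finish with
    sys.exit(run_and_propagate([...])) so that `echo $?` sees the real status.
    """
    completed = subprocess.run(command)
    return completed.returncode
```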
Cannot run site locally on macOS using Multipass and dotrun snap
- I followed @hatched's instructions and have set up Multipass on my macOS machine.
- I shell into my Multipass instance and clone this repo.
- I cd into the project root and run `docker-compose up -d`.

I hit this error:

```
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
```

I then run that suggested command, and see this message:

```
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
```

I then run the suggested `docker-machine create default` and get this error:

```
Error with pre-create check: "VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path"
```

Now I be like ¯\_(ツ)_/¯
Disks do not mount on Beth’s machine
We may need to rerun Jeff's steps to set up Multipass.
Dotrun errors on Mac M1
The install script didn't install the dotrun snap, so I had to install it manually.

Running `dotrun` throws the following errors:

```
ERROR: ld.so: object '/snap/dotrun/54/lib/semwraplib.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/snap/dotrun/54/lib/semwraplib.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
ERROR: ld.so: object '/snap/dotrun/54/lib/semwraplib.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
/snap/dotrun/54/usr/bin/python3: 1: /snap/dotrun/54/usr/bin/python3: �ELF: not found
/snap/dotrun/54/usr/bin/python3: 2: /snap/dotrun/54/usr/bin/python3: Syntax error: ";" unexpected
```
Specifying version of node in dotrun?
dotrun uses a large amount of disk space
In my Multipass VM, dotrun is currently using:

```
919M /snap/dotrun
4.2G /home/multipass/snap/dotrun
```
This is a lot of space for a multipass VM. Is there any way I can reduce the amount of disk space it uses?
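One likely contributor (an assumption; worth verifying with `snap list --all`) is that snapd keeps old snap revisions around, and each revision keeps its own data directory under `~/snap/dotrun`. If that is the case, the usage can be reduced with roughly:

```
# Keep only 2 revisions of each snap instead of the default 3
sudo snap set system refresh.retain=2

# Remove a specific stale revision (numbers come from `snap list --all`)
sudo snap remove dotrun --revision=<old-revision>
```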
`dotrun exec` shows "Python-dotenv could not parse statement starting at line 4"
Running dotrun exec
shows an error.
Although everything seems to work as expected. Shell starts, command provided runs.
ubuntu@ubuntu ~/vanilla-framework (fix-release-event) $ dotrun exec yarn
Python-dotenv could not parse statement starting at line 4
[ $ yarn ] ( virtualenv `.venv` )
yarn install v1.22.0
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.56s.
Replace pip with Poetry, to introduce lockfiles
We are seeing a bunch of issues because we don't have lockfiles:

- With fuzzy versions (e.g. the loose versioning inside flask-base), different instances of the site can end up running different versions of a module.
- The order in which you put dependencies in `requirements.txt` matters, when it shouldn't, as you'll end up with different versions of sub-dependencies depending on what was installed first.

On the other hand:

- We need to check it works in dotrun and Docker.
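For reference, a minimal sketch of the Poetry setup (hypothetical project metadata and version constraints; the real ones would come from the existing `requirements.txt` files):

```
# pyproject.toml (sketch)
[tool.poetry]
name = "example-site"
version = "0.1.0"
description = "Hypothetical flask-base project"

[tool.poetry.dependencies]
python = "^3.8"
flask = "^1.1"
```

Running `poetry lock` then writes `poetry.lock`, which pins every transitive dependency independently of declaration order, and `poetry install` reproduces exactly that set.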
Error when setting up dotrun on Mac
I'm currently helping Min with trying to get dotrun working on her Mac. We've followed these steps so far:

- Followed the macOS instructions to install:
  `curl -s https://raw.githubusercontent.com/canonical-web-and-design/dotrun/main/scripts/install-with-multipass.sh | bash`
- cd into the `dotrun-projects` folder
- Download dotrun on the VM (`multipass shell dotrun`, then `sudo snap install dotrun`)
- Exit the VM, clone the ubuntu.com repo into the `dotrun-projects` folder
- Try to run `dotrun` on ubuntu.com

The error we are seeing is: `package.json` not found in `/home/ubuntu/dotrun-projects/ubuntu.com`. When we `ls` in the ubuntu.com repo, the `package.json` is there.
Action Required: Fix Renovate Configuration
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.
Error type: undefined
Display available commands
It might be nice for users who are new to a project to be able to see what commands are available for that project. Most of the time we open the `package.json` to see this, but it's probably not obvious to new users that this is where to look.

Fortunately, Yarn has a convenient way of outputting the scripts:

```shell
dotrun exec yarn run --non-interactive
```

It would be nice if this was available via something like `dotrun help`.
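A `dotrun help` command could also read the scripts directly. A minimal sketch (hypothetical helper names, stdlib only):

```python
import json

def list_scripts(package_json_path="package.json"):
    """Return the scripts mapping from a project's package.json."""
    with open(package_json_path) as f:
        return json.load(f).get("scripts", {})

def format_help(scripts):
    """Render the scripts as a `name - command` listing."""
    return "\n".join(
        f"  {name} - {command}" for name, command in sorted(scripts.items())
    )
```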
Change DOTRUN_DOCKER_COMPOSE_ACTIONS so it can be set in .env
The `DOTRUN_DOCKER_COMPOSE_ACTIONS` setting would be super helpful to me, as I'd like to run the database on `test-python` in ubuntu-com-security-api.

However, at present this setting has to be set on the local computer, rather than in the `.env` file. This isn't so useful, as this setting should really be per project, not per user.

Let's change it so it can be set within `.env`.
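A sketch of the lookup order this implies (hypothetical helper with a naive `.env` parser; real dotrun would presumably keep using python-dotenv): a value in the process environment still wins, but the project's `.env` supplies the default.

```python
import os

def read_setting(name, env_file=".env"):
    """Return a setting, preferring the environment over the project's .env."""
    if name in os.environ:
        return os.environ[name]
    try:
        with open(env_file) as f:
            for line in f:
                line = line.strip()
                # Skip blanks, comments, and lines without an assignment
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    if key.strip() == name:
                        return value.strip()
    except FileNotFoundError:
        pass
    return None
```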
Add host mode option
I understand that the network host option isn't available on macOS. In some cases, though, a project needs to connect to other local Docker containers, and the simplest solution for that is using host mode and leaving the env variable pointing at localhost.

Is it possible to provide an option (e.g. `--host-mode`) which would start the Docker container in host mode?
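A sketch of how such a flag could be wired up (hypothetical CLI code, not dotrun's; `network_mode` is docker-py's keyword for `docker run --network`):

```python
import argparse

def parse_args(argv):
    """Parse a hypothetical --host-mode flag, ignoring other dotrun args."""
    parser = argparse.ArgumentParser(prog="dotrun")
    parser.add_argument(
        "--host-mode", action="store_true",
        help="start the container with host networking (Linux hosts only)",
    )
    args, _rest = parser.parse_known_args(argv)
    return args

def container_options(args):
    """Translate the flag into docker-py create() keyword options."""
    options = {}
    if args.host_mode:
        options["network_mode"] = "host"  # like `docker run --network host`
    return options
```

As the issue notes, host networking only takes effect on Linux hosts; on macOS the container runs inside a VM, so the flag would effectively be a no-op there.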
Install script doesn't seem to install the dotrun snap
I've seen this issue before when setting up with Beth, and today it happened to me as well when I was setting up dotrun from scratch.

When running the install script (on Mac), everything works as expected with no errors in the console: the Multipass instance is created, the folder is shared, and the alias is added. But when you try to run `dotrun`, you get `bash: dotrun: command not found`.

Using `multipass shell` and installing the snap manually in the instance fixes the issue.
Expected
The install script should install the dotrun snap in the Multipass instance.
The snap is huge
Error when passing specific params to `dotrun exec` command
I noticed a bug with the `dotrun` command, maybe something around how parameters are passed. It's quite weird.

I tried to run `dotrun exec yarn add @canonical/react-components --dev` and it failed with a strange error:
```
$ dotrun exec yarn add @canonical/react-components --dev
Checking for dotrun image updates...
Traceback (most recent call last):
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 268, in _raise_for_status
    response.raise_for_status()
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/containers/c953dbf3c84bafedddc1c76d3812e7d902aef94a4e14848c3c574e101888d015/start

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/dotrun", line 8, in <module>
    sys.exit(cli())
  File "/Library/Python/3.8/site-packages/dotrun.py", line 132, in cli
    dockerpty.start(dotrun.docker_client.api, container.id)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/dockerpty/__init__.py", line 30, in start
    PseudoTerminal(client, operation).start()
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/dockerpty/pty.py", line 328, in start
    pumps = self.operation.start(sockets=sockets)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/dockerpty/pty.py", line 149, in start
    self.client.start(self.container, **kwargs)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/container.py", line 1109, in start
    self._raise_for_status(res)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 270, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.41/containers/c953dbf3c84bafedddc1c76d3812e7d902aef94a4e14848c3c574e101888d015/start: Internal Server Error ("failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: sethostname: invalid argument: unknown")
```
If I do the same without `--dev`, it works:

```shell
dotrun exec yarn add @canonical/react-components
```

It also works if the package name is not namespaced:

```shell
dotrun exec yarn add vanilla-framework --dev
```

But when there is a namespace and the `--dev` flag, it fails:

```shell
dotrun exec yarn add @canonical/react-components --dev
```
Add update checks
The dotrun-image is kept up to date with dotrun, but the dotrun Python package must be updated manually.

As @nottrobin suggested, we should implement something like:

```
$ dotrun
Dotrun version 2.0.1 is available. Would you like to update? (waiting 10 seconds for response) (y/N): Y
password for fran: _
```
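The version check itself could be a simple numeric comparison of dotted version strings (a sketch with a hypothetical helper; fetching the latest version from PyPI and the timed prompt are left out):

```python
def is_newer(available, installed):
    """True if `available` is a strictly newer dotted version than `installed`.

    Compares components numerically, so "2.0.10" correctly beats "2.0.2".
    """
    def parse(version):
        return [int(part) for part in version.split(".")]
    return parse(available) > parse(installed)
```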
Fix `dotrun clean-cache` command
Currently, the only way to get rid of old cache is to uninstall and reinstall the snap. I had two folders, 32 and 33, which I assume are some sort of versions, full of 1GB of (mostly repeating) caches.
Cypress run fails
When trying to run Cypress with dotrun in canonical/vanilla-framework#4540, the following error occurs:

```
Your system is missing the dependency: Xvfb
Install Xvfb and run Cypress again.
Read our documentation on dependencies for more information:
https://on.cypress.io/required-dependencies
If you are using Docker, we provide containers with all required dependencies installed.
----------
Error: spawn Xvfb ENOENT
----------
Platform: linux-arm64 (Ubuntu - 20.04)
Cypress Version: 10.6.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
= `yarn run cypress:run` errored =
```

Cypress has an official Docker image which might be helpful in resolving this issue: https://docs.cypress.io/guides/continuous-integration/introduction#Official-Cypress-Docker-Images
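Alternatively, the dotrun image could install the dependencies Cypress documents for Ubuntu/Debian (package list taken from the Cypress system-requirements docs; the Dockerfile context is an assumption):

```
# Dockerfile fragment (sketch)
RUN apt-get update && apt-get install -y \
    libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev \
    libnss3 libxss1 libasound2 libxtst6 xauth xvfb
```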
Add more library dependencies
python-launchpadlib, libsodium-dev, psycopg, libjpeg, libpq-dev, libjpeg-dev, zlib1g-dev, libpng-dev, libmagickwand-dev, libjpeg-progs, optipng
Install scripts
The steps to install dotrun are now likely to be fairly complex.
I think for macOS, people will need to run roughly these steps. And then for both macOS and Ubuntu, they'll need to run further commands to set up the connection with Docker for database support, something like:

```shell
snap install dotrun
apt install docker-engine  # (probably not the right name)
snap install docker
# something to tell the docker snap to talk to the system docker engine
snap connect dotrun docker  # also not the right command
```

I think this means that we should write bash scripts for both macOS and Ubuntu, so that devs can simply run these scripts and end up with `dotrun` and `docker` installed in such a way that they can simply run them on our projects with no further work. So the install step would be e.g.:

```shell
curl -s https://raw.githubusercontent.com/canonical-web-and-design/dotrun/master/scripts/install-macos.sh | bash
```
Snap does not support arm64 architecture
Multipass is now available for M1 Macs (canonical/multipass#1857), but the dotrun snap still fails to install:

```
This snap cannot install because it is not available for the arm64 architecture
```
cchardet build error (M1)
Attempting to run `dotrun` in maas.io on an M1 results in fatal errors:
- Yarn dependencies up to date
- Python dependenices have changed, reinstalling
[ $ pip3 install --requirement requirements.txt ] ( virtualenv `.venv` )
Collecting canonicalwebteam.flask_base==0.9.1
Using cached canonicalwebteam.flask_base-0.9.1-py3-none-any.whl (9.6 kB)
Collecting canonicalwebteam.blog==6.4.0
Using cached canonicalwebteam.blog-6.4.0-py3-none-any.whl (14 kB)
Collecting canonicalwebteam.discourse-docs==1.0.1
Using cached canonicalwebteam.discourse_docs-1.0.1-py3-none-any.whl (18 kB)
Collecting canonicalwebteam.http==1.0.3
Using cached canonicalwebteam.http-1.0.3-py3-none-any.whl (8.4 kB)
Collecting canonicalwebteam.templatefinder==1.0.0
Using cached canonicalwebteam.templatefinder-1.0.0-py3-none-any.whl (8.5 kB)
Collecting canonicalwebteam.search==1.0.0
Using cached canonicalwebteam.search-1.0.0-py3-none-any.whl
Requirement already satisfied: canonicalwebteam.image-template==1.3.1 in ./.venv/lib/python3.8/site-packages (from -r requirements.txt (line 7)) (1.3.1)
Requirement already satisfied: feedparser==6.0.2 in ./.venv/lib/python3.8/site-packages (from -r requirements.txt (line 8)) (6.0.2)
Requirement already satisfied: requests-cache==0.7.5 in ./.venv/lib/python3.8/site-packages (from -r requirements.txt (line 9)) (0.7.5)
Requirement already satisfied: lxml==4.6.4 in ./.venv/lib/python3.8/site-packages (from -r requirements.txt (line 10)) (4.6.4)
Collecting cchardet==2.1.7
Using cached cchardet-2.1.7.tar.gz (653 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: canonicalwebteam.yaml-responses[flask]<2,>=1 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (1.2.0)
Requirement already satisfied: Werkzeug<0.16,>=0.15 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (0.15.6)
Requirement already satisfied: gevent==21.8.0 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (21.8.0)
Requirement already satisfied: flask<2,>=1 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (1.1.2)
Requirement already satisfied: talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (0.19.0)
Requirement already satisfied: greenlet==1.1.2 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (1.1.2)
Requirement already satisfied: requests in /snap/dotrun/59/lib/python3.8/site-packages (from canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (2.26.0)
Requirement already satisfied: feedgen in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (0.9.0)
Requirement already satisfied: beautifulsoup4 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (4.10.0)
Requirement already satisfied: humanize in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.discourse-docs==1.0.1->-r requirements.txt (line 3)) (3.13.1)
Requirement already satisfied: validators in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.discourse-docs==1.0.1->-r requirements.txt (line 3)) (0.18.2)
Requirement already satisfied: python-dateutil in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.discourse-docs==1.0.1->-r requirements.txt (line 3)) (2.8.2)
Requirement already satisfied: lockfile>=0.12.2 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (0.12.2)
Requirement already satisfied: mockredispy>=2.9.3 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (2.9.3)
Requirement already satisfied: freezegun>=0.3.11 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (1.1.0)
Requirement already satisfied: redis>=3.0.1 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (4.0.2)
Requirement already satisfied: CacheControl>=0.12.5 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (0.12.10)
Requirement already satisfied: HTTPretty>=1.0.2 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (1.1.4)
Requirement already satisfied: python-frontmatter>=0.4.5 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (1.0.0)
Requirement already satisfied: bleach>=3.1 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (4.1.0)
Requirement already satisfied: mistune==0.8.4 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (0.8.4)
Requirement already satisfied: jinja2>=2 in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.image-template==1.3.1->-r requirements.txt (line 7)) (3.0.3)
Requirement already satisfied: sgmllib3k in ./.venv/lib/python3.8/site-packages (from feedparser==6.0.2->-r requirements.txt (line 8)) (1.0.0)
Requirement already satisfied: pyyaml>=5.4 in /snap/dotrun/59/lib/python3.8/site-packages (from requests-cache==0.7.5->-r requirements.txt (line 9)) (5.4.1)
Requirement already satisfied: attrs<22.0,>=21.2 in /snap/dotrun/59/lib/python3.8/site-packages (from requests-cache==0.7.5->-r requirements.txt (line 9)) (21.2.0)
Requirement already satisfied: itsdangerous>=2.0.1 in ./.venv/lib/python3.8/site-packages (from requests-cache==0.7.5->-r requirements.txt (line 9)) (2.0.1)
Requirement already satisfied: url-normalize<2.0,>=1.4 in ./.venv/lib/python3.8/site-packages (from requests-cache==0.7.5->-r requirements.txt (line 9)) (1.4.3)
Requirement already satisfied: zope.event in ./.venv/lib/python3.8/site-packages (from gevent==21.8.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (4.5.0)
Requirement already satisfied: setuptools in ./.venv/lib/python3.8/site-packages (from gevent==21.8.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (58.3.0)
Requirement already satisfied: zope.interface in ./.venv/lib/python3.8/site-packages (from gevent==21.8.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (5.4.0)
Requirement already satisfied: future<0.17,>=0.15.2 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (0.16.0)
Requirement already satisfied: statsd<4.0,>=3.2.1 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (3.3.0)
Requirement already satisfied: gunicorn<21.0,>=19.7.0 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (20.1.0)
Requirement already satisfied: raven<7.0,>=6.4.0 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (6.10.0)
Requirement already satisfied: prometheus-client<0.8.0,>=0.5.0 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (0.7.1)
Requirement already satisfied: blinker<2.0,>=1.4 in ./.venv/lib/python3.8/site-packages (from talisker[flask,gevent,gunicorn,prometheus,raven]==0.19.0->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (1.4)
Requirement already satisfied: six>=1.9.0 in /snap/dotrun/59/lib/python3.8/site-packages (from bleach>=3.1->canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (1.16.0)
Requirement already satisfied: webencodings in ./.venv/lib/python3.8/site-packages (from bleach>=3.1->canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (0.5.1)
Requirement already satisfied: packaging in ./.venv/lib/python3.8/site-packages (from bleach>=3.1->canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (21.3)
Requirement already satisfied: msgpack>=0.5.2 in ./.venv/lib/python3.8/site-packages (from CacheControl>=0.12.5->canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (1.0.3)
Requirement already satisfied: yamlloader in ./.venv/lib/python3.8/site-packages (from canonicalwebteam.yaml-responses[flask]<2,>=1->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (1.1.0)
Requirement already satisfied: click>=5.1 in /snap/dotrun/59/lib/python3.8/site-packages (from flask<2,>=1->canonicalwebteam.flask_base==0.9.1->-r requirements.txt (line 1)) (8.0.3)
Requirement already satisfied: MarkupSafe>=2.0 in ./.venv/lib/python3.8/site-packages (from jinja2>=2->canonicalwebteam.image-template==1.3.1->-r requirements.txt (line 7)) (2.0.1)
Requirement already satisfied: deprecated in ./.venv/lib/python3.8/site-packages (from redis>=3.0.1->canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (1.2.13)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /snap/dotrun/59/lib/python3.8/site-packages (from requests->canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (1.26.7)
Requirement already satisfied: certifi>=2017.4.17 in /snap/dotrun/59/lib/python3.8/site-packages (from requests->canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (2021.10.8)
Requirement already satisfied: charset-normalizer~=2.0.0 in /snap/dotrun/59/lib/python3.8/site-packages (from requests->canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (2.0.8)
Requirement already satisfied: idna<4,>=2.5 in /snap/dotrun/59/lib/python3.8/site-packages (from requests->canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (3.3)
Requirement already satisfied: soupsieve>1.2 in ./.venv/lib/python3.8/site-packages (from beautifulsoup4->canonicalwebteam.blog==6.4.0->-r requirements.txt (line 2)) (2.3.1)
Requirement already satisfied: decorator>=3.4.0 in /snap/dotrun/59/lib/python3.8/site-packages (from validators->canonicalwebteam.discourse-docs==1.0.1->-r requirements.txt (line 3)) (5.1.0)
Requirement already satisfied: wrapt<2,>=1.10 in ./.venv/lib/python3.8/site-packages (from deprecated->redis>=3.0.1->canonicalwebteam.http==1.0.3->-r requirements.txt (line 4)) (1.13.3)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.venv/lib/python3.8/site-packages (from packaging->bleach>=3.1->canonicalwebteam.templatefinder==1.0.0->-r requirements.txt (line 5)) (3.0.6)
Building wheels for collected packages: cchardet
Building wheel for cchardet (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /home/parallels/parallels-code/maas.io/.venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-1r4ekiop
cwd: /tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/
Complete output (23 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/cchardet
copying src/cchardet/__init__.py -> build/lib.linux-aarch64-3.8/cchardet
copying src/cchardet/version.py -> build/lib.linux-aarch64-3.8/cchardet
running build_ext
building 'cchardet._cchardet' extension
creating build/temp.linux-aarch64-3.8
creating build/temp.linux-aarch64-3.8/src
creating build/temp.linux-aarch64-3.8/src/cchardet
creating build/temp.linux-aarch64-3.8/src/ext
creating build/temp.linux-aarch64-3.8/src/ext/uchardet
creating build/temp.linux-aarch64-3.8/src/ext/uchardet/src
creating build/temp.linux-aarch64-3.8/src/ext/uchardet/src/LangModels
aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Isrc/ext/uchardet/src -I/home/parallels/parallels-code/maas.io/.venv/include -I/usr/include/python3.8 -c src/cchardet/_cchardet.cpp -o build/temp.linux-aarch64-3.8/src/cchardet/_cchardet.o
src/cchardet/_cchardet.cpp:4:10: fatal error: Python.h: No such file or directory
4 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for cchardet
Running setup.py clean for cchardet
Failed to build cchardet
Installing collected packages: cchardet, canonicalwebteam.templatefinder, canonicalwebteam.search, canonicalwebteam.http, canonicalwebteam.flask-base, canonicalwebteam.discourse-docs, canonicalwebteam.blog
Running setup.py install for cchardet ... error
ERROR: Command errored out with exit status 1:
command: /home/parallels/parallels-code/maas.io/.venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-aw1c1vga/install-record.txt --single-version-externally-managed --compile --install-headers /home/parallels/parallels-code/maas.io/.venv/include/site/python3.8/cchardet
cwd: /tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/
Complete output (25 lines):
running install
/home/parallels/parallels-code/maas.io/.venv/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/cchardet
copying src/cchardet/__init__.py -> build/lib.linux-aarch64-3.8/cchardet
copying src/cchardet/version.py -> build/lib.linux-aarch64-3.8/cchardet
running build_ext
building 'cchardet._cchardet' extension
creating build/temp.linux-aarch64-3.8
creating build/temp.linux-aarch64-3.8/src
creating build/temp.linux-aarch64-3.8/src/cchardet
creating build/temp.linux-aarch64-3.8/src/ext
creating build/temp.linux-aarch64-3.8/src/ext/uchardet
creating build/temp.linux-aarch64-3.8/src/ext/uchardet/src
creating build/temp.linux-aarch64-3.8/src/ext/uchardet/src/LangModels
aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Isrc/ext/uchardet/src -I/home/parallels/parallels-code/maas.io/.venv/include -I/usr/include/python3.8 -c src/cchardet/_cchardet.cpp -o build/temp.linux-aarch64-3.8/src/cchardet/_cchardet.o
src/cchardet/_cchardet.cpp:4:10: fatal error: Python.h: No such file or directory
4 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/parallels/parallels-code/maas.io/.venv/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-c3d3syop/cchardet_4c21db0d28f9480385d455c20509056d/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-aw1c1vga/install-record.txt --single-version-externally-managed --compile --install-headers /home/parallels/parallels-code/maas.io/.venv/include/site/python3.8/cchardet Check the logs for full command output.
= `pip3 install --requirement requirements.txt` errored =
`./run` fails as well, with:

```
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error
```
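Two separate problems appear to be in play. The `Python.h: No such file or directory` failure usually means the Python development headers are missing from the build environment (an assumption about this image; cchardet ships no prebuilt arm64 wheel, so pip has to compile it from source). On Ubuntu that would be roughly:

```
apt-get install -y python3-dev build-essential
```

The second failure (`exec format error`) is the amd64-only Docker image being run on an arm64 host, which would need an arm64 build of the image to fix.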
Fix Cypress dependencies
It's not possible to run Cypress with dotrun; some libraries are missing:

https://docs.cypress.io/guides/getting-started/installing-cypress#System-requirements
M1 Mac - multipass error when installing via script
Running the install script on a fresh M1 MacBook Pro fails with:

```
Configuring multipass
--------
launch failed: The following errors occurred:
qemu-system-aarch64: Number of SMP CPUs requested (10) exceeds max CPUs supported by machine 'mach-virt' (8)
dotrun: shutdown called while starting
```
Support for databases / docker-compose.yaml
We have some sites, now crucially including ubuntu.com, that need to spin up a PostgreSQL database to run. There are other sites that might need other background services to integrate with - e.g. assets.ubuntu.com needs OpenStack Swift.

A project should be able to ask dotrun to spin up these services automatically, such that developers can still get up and running with a single `dotrun` command.

I propose we do this by looking to see if a project has a `docker-compose.yaml` file in its root (as ubuntu.com now does), and if it does, simply running `docker-compose up` before running the site. This would solve the problem for ubuntu.com.

Unfortunately, this means that the dotrun snap will need the ability to control Docker, which is no easy task. How to do this is explored a little in this forum post. Basically, devs will need to have the docker snap installed.

However, this snap isn't official and isn't well maintained, so it actually doesn't work properly to build some of our images. I think this means we will need to run the docker snap purely as the interface into Docker, where the CLI binary sits, but actually connect it to a system version of Docker, which should hopefully build our images properly. Some experimentation is needed here.
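The proposed detection step itself is small. A sketch (hypothetical function names; the `docker-compose.yml` spelling is checked too, since both are common):

```python
import os
import subprocess

def compose_file(project_dir):
    """Return the project's compose file path, or None if it has none."""
    for name in ("docker-compose.yaml", "docker-compose.yml"):
        path = os.path.join(project_dir, name)
        if os.path.isfile(path):
            return path
    return None

def start_services(project_dir):
    """Run `docker-compose up -d` in the project root before serving, if needed."""
    if compose_file(project_dir) is None:
        return False
    subprocess.run(["docker-compose", "up", "-d"], cwd=project_dir, check=True)
    return True
```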
Re-enable "test snap" GitHub PR tests
I've disabled tests in #34 because I don't know why the error is happening. Tests pass locally for me, so it's something to do with GitHub actions, I'll look into it later. Maybe the action just needs more resources.
Can't run multiple instances of the same project with dotrun v2
My common workflow is to have one terminal session running `dotrun serve` and another for `dotrun watch` or `dotrun build`.

This seems impossible now with dotrun v2. When trying to run another instance of dotrun in a project that is already running, you get a Docker error about the container name already being in use:
~/Canonical/ubuntu.com [main] $ dotrun watch
Checking for dotrun image updates...
Traceback (most recent call last):
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.41/containers/create?name=dotrun-ubuntu.com
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/dotrun", line 8, in <module>
sys.exit(cli())
File "/Library/Python/3.8/site-packages/dotrun.py", line 116, in cli
container = dotrun.create_container(command)
File "/Library/Python/3.8/site-packages/dotrun.py", line 94, in create_container
return self.docker_client.containers.create(
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/models/containers.py", line 878, in create
resp = self.client.api.create_container(**create_kwargs)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/container.py", line 428, in create_container
return self.create_container_from_config(config, name)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/container.py", line 439, in create_container_from_config
return self._result(res, True)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.41/containers/create?name=dotrun-ubuntu.com: Conflict ("Conflict. The container name "/dotrun-ubuntu.com" is already in use by container "ad85d91c8b91004b6b43bb10b7b80465ad57603b4d79f5f78dd19032fa4d7957". You have to remove (or rename) that container to be able to reuse that name.")
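One possible fix, assuming dotrun keeps naming containers after the project (the helper name is illustrative, not dotrun's actual code): append a short random suffix so a second `dotrun watch` can start alongside a running `dotrun serve`.

```python
import uuid

def container_name(project):
    """Return a unique container name like 'dotrun-ubuntu.com-3fa2b1'."""
    return f"dotrun-{project}-{uuid.uuid4().hex[:6]}"

# The docker-py call would then become (client assumed to be docker.from_env()):
# client.containers.create(image, command, name=container_name("ubuntu.com"))
```

This keeps the name recognisable in `docker ps` while avoiding the 409 Conflict.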
Shared folder does not mount automatically on Mac
Running the `dotrun` snap on a Mac requires sharing a `dotrun-projects` folder between the Mac host and the Multipass instance.
The folder is created and shared by the install script, but after you reboot or shut down your machine the shared folder is no longer mounted and has to be mounted manually. There is no documented way to do this: you have to dig into the install script for the `mount` command and copy it into your CLI.
The mounted folder can be added to "login items" on the Mac, so macOS will attempt to mount it on login, but this relies on the Multipass instance running first.
When you restart the Mac (with Multipass running), Multipass will in most cases automatically start the instances that were running before, so the Mac will usually be able to mount the shared folder (via "login items"). But if you shut the Mac down and start it again, Multipass instances don't start automatically, and the shared folder is not available when the Mac attempts to mount it.
This also has a weird side effect: the Mac blocks starting any other login items until you click on the error message saying the folder can't be mounted.
We should find a solution that automatically starts the `dotrun` instance and mounts the shared folder whenever the Mac is started (or the user logs in?).
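The manual re-mount after a cold boot might look like this. A sketch only: the instance name and folder layout follow the install script's defaults and are assumptions here.

```shell
INSTANCE="dotrun"                     # Multipass instance name (assumption)
SHARE="$HOME/dotrun-projects"         # shared folder created by the install script
# multipass start "$INSTANCE"
# multipass mount "$SHARE" "$INSTANCE:/home/multipass/dotrun-projects"
```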
Write tests
Chaining yarn commands fails with permission errors
When chaining some yarn commands there are permission errors.
As a minimal reproduction, create a new dir and a package.json with:
{
"scripts": {
"test": "yarn install && yarn help"
}
}
and run `dotrun test`. I get:
- Yarn dependencies up to date
- No requirements.txt found
[ $ yarn run test ]
yarn run v1.22.0
warning package.json: No license field
$ yarn install && yarn help
warning package.json: No license field
warning No license field
[1/4] Resolving packages...
success Already up-to-date.
warning package.json: No license field
error An unexpected error occurred: "EACCES: permission denied, open '/home/multipass/.npmrc'".
info If you think this is a bug, please open a bug report with the information provided in "/home/multipass/code/test/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/help for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
= `yarn run test` errored =
Note the error EACCES: permission denied, open '/home/multipass/.npmrc'. I suspect this is not the real problem, but @squidsoup also gets the same error (the real script we are trying to run is https://github.com/canonical-web-and-design/maas-ui/blob/master/package.json#L6, which runs fine when using yarn directly).
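One workaround sketch (the root cause being file ownership is an assumption): pre-create `~/.npmrc` owned by the current user before chaining yarn commands, so the chained `yarn help` can open it instead of failing on a file created with the wrong ownership by an earlier step.

```shell
# Create an empty, user-readable .npmrc up front:
touch "$HOME/.npmrc"
chmod 644 "$HOME/.npmrc"
```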
`$(hostname -I)` returns 2 IP addresses
The README suggests using `$(hostname -I)` to get an IP address to use instead of 0.0.0.0 when running a server.
In Multipass (at least in my installation) `$(hostname -I)` returns 2 IP addresses:
ubuntu@ubuntu ~ $ hostname -I
192.168.64.4 172.17.0.1
leading to running the server without the expected $PORT:
./entrypoint $(hostname -I):${PORT}
The second IP address seems to be the one for Docker:
System information as of Wed Feb 26 09:50:16 CET 2020
System load: 0.0 Processes: 141
Usage of /: 72.6% of 9.52GB Users logged in: 1
Memory usage: 28% IP address for enp0s2: 192.168.64.4
Swap usage: 0% IP address for docker0: 172.17.0.1
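A small sketch of a README fix: take only the first address from `hostname -I`. That the first entry is the primary interface is an assumption, though it holds here, where the second address belongs to docker0. Sample output is used as a stand-in so the snippet runs anywhere.

```shell
addrs="192.168.64.4 172.17.0.1"        # stand-in for: addrs="$(hostname -I)"
ip="$(echo "$addrs" | awk '{print $1}')"
# ./entrypoint "${ip}:${PORT}"
```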
utf-8-validate: Command failed (Apple M1)
`dotrun` fails on Apple M1 in https://github.com/canonical-web-and-design/jaas-dashboard:
- Yarn dependencies have changed, reinstalling
[ $ yarn install ]
yarn install v1.22.0
[1/4] Resolving packages...
[2/4] Fetching packages...
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "@canonical/react-components > [email protected]" has incorrect peer dependency "react@^16.8.6".
warning "@canonical/react-components > [email protected]" has incorrect peer dependency "react-dom@^16.8.6".
warning " > [email protected]" has unmet peer dependency "eslint@>=7.0.0".
warning " > [email protected]" has unmet peer dependency "eslint@>=5.0.0".
warning "react-hot-toast > [email protected]" has unmet peer dependency "csstype@^2.6.2".
warning " > @testing-library/[email protected]" has unmet peer dependency "@testing-library/dom@>=7.21.4".
warning "@wojtekmaj/enzyme-adapter-react-17 > [email protected]" has incorrect peer dependency "[email protected]".
warning " > [email protected]" has unmet peer dependency "eslint@^2 || ^3 || ^4 || ^5 || ^6 || ^7.2.0 || ^8".
warning " > [email protected]" has unmet peer dependency "eslint@^6.0.0 || ^7.0.0 || ^8.0.0".
warning "eslint-plugin-jest > @typescript-eslint/[email protected]" has unmet peer dependency "eslint@*".
warning "eslint-plugin-jest > @typescript-eslint/experimental-utils > [email protected]" has unmet peer dependency "eslint@>=5".
warning " > [email protected]" has unmet peer dependency "eslint@^3 || ^4 || ^5 || ^6 || ^7 || ^8".
warning " > [email protected]" has unmet peer dependency "eslint@^3 || ^4 || ^5 || ^6 || ^7 || ^8".
warning " > [email protected]" has incorrect peer dependency "react@^0.14.9 || ^15.3.0 || ^16.0.0".
warning " > [email protected]" has incorrect peer dependency "stylelint@^14.0.0".
warning " > [email protected]" has unmet peer dependency "jest@^27.0.0".
[4/4] Building fresh packages...
[1/6] ⡀ bufferutil
[2/6] ⡀ utf-8-validate
[3/6] ⡀ core-js-pure
[6/6] ⡀ core-js
error /home/parallels/parallels-code/jaas-dashboard/node_modules/utf-8-validate: Command failed.
Exit code: 1
Command: node-gyp-build
Arguments:
Directory: /home/parallels/parallels-code/jaas-dashboard/node_modules/utf-8-validate
Output:
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | linux | arm64
gyp info find Python using Python version 3.8.10 found at "/snap/dotrun/67/bin/python3"
(node:339075) [DEP0150] DeprecationWarning: Setting process.config is deprecated. In the future the property will be read-only.
(Use `node --trace-deprecation ...` to show where the warning was created)
gyp info spawn /snap/dotrun/67/bin/python3
gyp info spawn args [
gyp info spawn args '/snap/dotrun/67/usr/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/home/parallels/parallels-code/jaas-dashboard/node_modules/utf-8-validate/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/snap/dotrun/67/usr/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/home/parallels/snap/dotrun/67/.cache/node-gyp/16.8.0/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/home/parallels/snap/dotrun/67/.cache/node-gyp/16.8.0',
gyp info spawn args '-Dnode_gyp_dir=/snap/dotrun/67/usr/lib/node_modules/npm/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/home/parallels/snap/dotrun/67/.cache/node-gyp/16.8.0/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/home/parallels/parallels-code/jaas-dashboard/node_modules/utf-8-validate',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
make: cc: Command not found
make: *** [validation.target.mk:111: Release/obj.target/validation/src/validation.o] Error 127
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/snap/dotrun/67/usr/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack at ChildProcess.emit (node:events:394:28)
gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
gyp ERR! System Linux 5.4.0-100-generic
gyp ERR! command "/snap/dotrun/67/usr/bin/node" "/snap/dotrun/67/usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/parallels/parallels-code/jaas-dashboard/node_modules/utf-8-validate
gyp ERR! node -v v16.8.0
gyp ERR! node-gyp -v v7.1.2
gyp ERR! not ok
= `yarn install` errored =
Error with the package django-openid-auth on M1
The command `dotrun install` fails on projects with the Python dependency `django-openid-auth==0.16` (required by our SSO). The cause is a dependency of this package, `backports.zoneinfo` 0.2.1. Steps to reproduce:
mkdir test-project && cd test-project
dotrun exec yarn init -y
echo "backports.zoneinfo==0.2.1" > requirements.txt
dotrun
Here is the output:
$ dotrun install
- Installing yarn dependencies (forced)
[ $ yarn install ]
yarn install v1.22.0
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 0.02s.
- Creating python environment: .venv
[ $ virtualenv --always-copy --python /snap/dotrun/current/bin/python3 /home/ubuntu/test/.venv ]
created virtual environment CPython3.8.10.final.0-64 in 68ms
creator CPython3Posix(dest=/home/ubuntu/test/.venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/ubuntu/snap/dotrun/69/.local/share/virtualenv)
added seed packages: pip==22.0.4, setuptools==60.10.0, wheel==0.37.1
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
- Installing python dependencies (forced)
[ $ pip3 install --requirement requirements.txt ] ( virtualenv `.venv` )
Collecting backports.zoneinfo==0.2.1
Using cached backports.zoneinfo-0.2.1.tar.gz (74 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: backports.zoneinfo
Building wheel for backports.zoneinfo (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for backports.zoneinfo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [39 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-cpython-38
creating build/lib.linux-aarch64-cpython-38/backports
copying src/backports/__init__.py -> build/lib.linux-aarch64-cpython-38/backports
creating build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/_common.py -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/_zoneinfo.py -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/__init__.py -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/_version.py -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/_tzpath.py -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
running egg_info
writing src/backports.zoneinfo.egg-info/PKG-INFO
writing dependency_links to src/backports.zoneinfo.egg-info/dependency_links.txt
writing requirements to src/backports.zoneinfo.egg-info/requires.txt
writing top-level names to src/backports.zoneinfo.egg-info/top_level.txt
reading manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.png' under directory 'docs'
warning: no files found matching '*.svg' under directory 'docs'
no previously-included directories found matching 'docs/_build'
no previously-included directories found matching 'docs/_output'
adding license file 'LICENSE'
adding license file 'licenses/LICENSE_APACHE'
writing manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt'
copying src/backports/zoneinfo/__init__.pyi -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
copying src/backports/zoneinfo/py.typed -> build/lib.linux-aarch64-cpython-38/backports/zoneinfo
running build_ext
building 'backports.zoneinfo._czoneinfo' extension
creating build/temp.linux-aarch64-cpython-38
creating build/temp.linux-aarch64-cpython-38/lib
aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/home/ubuntu/test/.venv/include -I/usr/include/python3.8 -c lib/zoneinfo_module.c -o build/temp.linux-aarch64-cpython-38/lib/zoneinfo_module.o -std=c99
lib/zoneinfo_module.c:1:10: fatal error: Python.h: No such file or directory
1 | #include "Python.h"
| ^~~~~~~~~~
compilation terminated.
error: command '/snap/dotrun/69/usr/bin/aarch64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for backports.zoneinfo
Failed to build backports.zoneinfo
ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects
= `pip3 install --requirement requirements.txt` errored =
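A possible long-term mitigation, assuming the project can move to Python 3.9 or newer (where `zoneinfo` is in the standard library): guard the dependency with an environment marker so pip only builds it on older interpreters. This does not help while the dotrun image is on Python 3.8; there the real fix would be shipping the Python dev headers so Python.h is available.

```
backports.zoneinfo==0.2.1; python_version < "3.9"
```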
Dockerpy is not pulling the latest image
The docker SDK behaves differently from the CLI: it does not pull the latest image when one is already present locally. This is not what we want for dotrun, so we should find a workaround.
Lines 54 to 58 in 53a97a2
I have seen this issue reported here: docker/docker-py#2422.
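A workaround sketch (names are illustrative, not dotrun's actual code): pull explicitly before creating the container, since docker-py's high-level `create()` reuses whatever image is already local.

```python
def split_image(image):
    """Split 'repo:tag' into (repo, tag), defaulting the tag to 'latest'.
    (Naive: doesn't handle registries with a port in the host name.)"""
    repo, _, tag = image.partition(":")
    return repo, tag or "latest"

def create_fresh(client, image, command):
    """`client` is assumed to be a docker.from_env() DockerClient."""
    repo, tag = split_image(image)
    client.images.pull(repo, tag=tag)   # the CLI does this implicitly; the SDK doesn't
    return client.containers.create(f"{repo}:{tag}", command)
```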
Cache/snap cleanup command
The snap directory in the user's home directory (~/snap/dotrun) contains caches for yarn etc., so over time this dir can get quite large (see: #28).
It might be nice to have a dotrun command that cleans up the yarn/Python caches. Something like `dotrun purge` or `dotrun prune`.
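A hypothetical `dotrun purge` could be as simple as removing the known cache directories under the snap data root. The cache paths below are assumptions based on what yarn and pip create by default.

```shell
purge_caches() {
    root="$1"
    for d in .cache/yarn .cache/pip .npm; do
        rm -rf "${root:?}/${d:?}"   # :? guards against an empty root wiping /
    done
}

# e.g. purge_caches "$HOME/snap/dotrun/common"
```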
Docker error about name conflict
For some reason I was getting an error about a name conflict even though no other `dotrun` instance was running at the time:
~/Canonical/anbox-cloud.io [main] $ dotrun
Checking for dotrun image updates...
Traceback (most recent call last):
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 409 Client Error: Conflict for url: http+docker://localhost/v1.41/containers/create?name=dotrun-anbox-cloud.io
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/dotrun", line 8, in <module>
sys.exit(cli())
File "/Library/Python/3.8/site-packages/dotrun.py", line 129, in cli
container = dotrun.create_container(command)
File "/Library/Python/3.8/site-packages/dotrun.py", line 107, in create_container
return self.docker_client.containers.create(
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/models/containers.py", line 878, in create
resp = self.client.api.create_container(**create_kwargs)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/container.py", line 428, in create_container
return self.create_container_from_config(config, name)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/container.py", line 439, in create_container_from_config
return self._result(res, True)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/Users/bartaz/Library/Python/3.8/lib/python/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 409 Client Error for http+docker://localhost/v1.41/containers/create?name=dotrun-anbox-cloud.io: Conflict ("Conflict. The container name "/dotrun-anbox-cloud.io" is already in use by container "931abce5c5396d440b64f0d0a654b76d58ca2648bfc9d06533e1763c7692f360". You have to remove (or rename) that container to be able to reuse that name.")
I fixed it by manually deleting the container with "anbox-cloud" in the name.
Not sure what could cause that issue.
specs.canonical.com gunicorn.error on M1
Attempting to run `dotrun` on specs.canonical.com on an M1 machine results in an error. Details below.
2022-10-31 15:39:17.783Z ERROR gunicorn.error "Exception in worker process"
Traceback (most recent call last):
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
    worker.init_process()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/workers/ggevent.py", line 146, in init_process
    super().init_process()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/workers/base.py", line 134, in init_process
    self.load_wsgi()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
    return self.load_wsgiapp()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/talisker/gunicorn.py", line 163, in load_wsgiapp
    app = super(TaliskerApplication, self).load_wsgiapp()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/util.py", line 359, in import_app
    mod = importlib.import_module(module)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/ubuntu/specs-canonical-com/webapp/app.py", line 19, in <module>
    sheet = get_sheet()
  File "/home/ubuntu/specs-canonical-com/webapp/spreadsheet.py", line 43, in get_sheet
    creds = service_account.Credentials.from_service_account_info(
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/google/oauth2/service_account.py", line 224, in from_service_account_info
    signer = _service_account_info.from_dict(
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/google/auth/_service_account_info.py", line 58, in from_dict
    signer = crypt.RSASigner.from_service_account_info(data)
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/google/auth/crypt/base.py", line 113, in from_service_account_info
    return cls.from_string(
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/google/auth/crypt/_python_rsa.py", line 153, in from_string
    key = _helpers.from_bytes(key)  # PEM expects str in Python 3
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/google/auth/_helpers.py", line 130, in from_bytes
    raise ValueError("{0!r} could not be converted to unicode".format(value))
ValueError: None could not be converted to unicode
2022-10-31 15:39:17.793Z INFO gunicorn.error "Worker exiting (pid: 88)"
Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
    self.reap_workers()
  File "/home/ubuntu/specs-canonical-com/.venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
2022-10-31T15:39:17Z <Greenlet at 0xffff917a6260: <bound method Arbiter.handle_chld of <gunicorn.arbiter.Arbiter object at 0xffff9175fc40>>(17, None)> failed with HaltServer
Use direnv and `.envrc` for processing environment variables
Instead of `dotrun.toml`.
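A hypothetical `.envrc` standing in for `dotrun.toml`; direnv would load it automatically on `cd` into the project. The variable names are illustrative only, not dotrun's actual schema.

```shell
export PORT=8101
export FLASK_APP=webapp.app
```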
Unable to find remote helper for 'https' error
Summary
The error fatal: unable to find remote helper for 'https' comes up when trying to run `dotrun build` on a site that uses `documentation.builder`, e.g. docs.ubuntu.com. This usually means the git binary available in the build environment was compiled without the HTTPS transport helper (git-remote-https).
docker-compose doesn't run when running `dotrun -C {directory}`
Because when we check for the `docker-compose.yaml` file, we don't take the `-C` directory setting into account.
This means no one trying to run dotrun in Multipass will have `docker-compose` provisioning the database automatically for them.
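The fix could be as small as resolving the compose file relative to the `-C` directory. A sketch with illustrative names, not dotrun's actual internals:

```python
import os

def compose_file(cwd, c_flag=None):
    """Return the docker-compose.yaml path, honouring `dotrun -C {directory}`."""
    base = os.path.abspath(os.path.join(cwd, c_flag)) if c_flag else cwd
    return os.path.join(base, "docker-compose.yaml")
```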
Chromium error when using dotrun on ubuntu.com
Dependency Dashboard
This issue provides visibility into Renovate updates and their statuses. Learn more
This repository currently has no open or pending branches.
- Check this box to trigger a request for Renovate to run again on this repository
Ability to use a local node module
Expose all the container ports
At the moment, dotrun exposes the port in the env file, but a project might need to expose more ports for different purposes.
It would be great if dotrun could detect which ports a container exposes and map them automatically, or at least let the user specify the ports to expose.
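Detecting the ports automatically could lean on the image's `EXPOSE` declarations, which docker-py surfaces in the image's attrs dict. A sketch (mapping each exposed port to the same host port is an assumption about the desired behaviour):

```python
def ports_from_image(attrs):
    """Map every port the image declares via EXPOSE to the same host port.
    `attrs` is an image's attrs dict as returned by docker-py."""
    exposed = attrs.get("Config", {}).get("ExposedPorts", {}) or {}
    return {spec: int(spec.split("/")[0]) for spec in exposed}

# Usage sketch: client.containers.create(image, command,
#                                        ports=ports_from_image(image_obj.attrs))
```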
black fails with "permission denied" because of /dev/shm access
When running Black within my strictly confined snap inside a Multipass VM I get a permission denied error:
ubuntu@dotrun:/home/robin/Projects/ubuntu.com$ black webapp
Traceback (most recent call last):
File "/home/robin/Projects/ubuntu.com/.venv/bin/black", line 8, in <module>
sys.exit(patched_main())
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/black.py", line 4135, in patched_main
main()
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/black.py", line 464, in main
sources=sources, fast=fast, write_back=write_back, mode=mode, report=report
File "/home/robin/Projects/ubuntu.com/.venv/lib/python3.6/site-packages/black.py", line 529, in reformat_many
executor = ProcessPoolExecutor(max_workers=worker_count)
File "/snap/dotrun/15/usr/lib/python3.6/concurrent/futures/process.py", line 402, in __init__
EXTRA_QUEUED_CALLS)
File "/snap/dotrun/15/usr/lib/python3.6/multiprocessing/context.py", line 102, in Queue
return Queue(maxsize, ctx=self.get_context())
File "/snap/dotrun/15/usr/lib/python3.6/multiprocessing/queues.py", line 42, in __init__
self._rlock = ctx.Lock()
File "/snap/dotrun/15/usr/lib/python3.6/multiprocessing/context.py", line 67, in Lock
return Lock(ctx=self.get_context())
File "/snap/dotrun/15/usr/lib/python3.6/multiprocessing/synchronize.py", line 162, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/snap/dotrun/15/usr/lib/python3.6/multiprocessing/synchronize.py", line 59, in __init__
unlink_now)
PermissionError: [Errno 13] Permission denied
This works if I run it directly in the multipass VM itself.
I suspect this error is similar to the one mentioned here, to do with access to `/dev/shm`:
https://stackoverflow.com/questions/2009278/python-multiprocessing-permission-denied
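A minimal check of the primitive Black trips over: every multiprocessing lock is backed by a POSIX semaphore, which needs writable shared memory (`/dev/shm` on Linux). If this sketch reports failure inside the snap, the confinement is the problem, not Black.

```python
import multiprocessing

def shm_semaphores_work():
    """Return True if a POSIX semaphore (and hence /dev/shm) is usable."""
    try:
        multiprocessing.Lock()
        return True
    except (PermissionError, OSError):
        return False
```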
Store data inside a .dotrun.json file inside the project directory
`dotrun {command}` should run `yarn run {command}`
Which means we can remove the explicit commands for `serve`, `build`, `test` etc. We should also remove the native running of `yarn run watch` in the background; the project's package.json should now handle this itself.
At the same time we should make `dotrun` by itself run `dotrun start`.
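The passthrough described above is a one-liner. A sketch with illustrative names, not dotrun's actual code:

```python
import subprocess
import sys

def build_command(argv):
    """Translate `dotrun {command}` into `yarn run {command}`;
    a bare `dotrun` becomes `yarn run start`."""
    if not argv:
        return ["yarn", "run", "start"]
    return ["yarn", "run", *argv]

# Usage sketch: sys.exit(subprocess.call(build_command(sys.argv[1:])))
```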
Support for detecting and installing changes from Gemfiles
Error about port conflict (when docker container is already running in background)
Occasionally (not sure when: maybe when the computer goes to sleep, or when the shell is closed by VS Code, or something), the dotrun process ends, but the Docker container keeps running in the background.
This causes a port-number conflict when I try to run dotrun again.
I have to manually stop the container in Docker to be able to use dotrun again.
Dotrun containers have unique names now, but a container listening on the same port still blocks running another one.
Would it be possible for dotrun to detect the container running on a given port and close it, or something?
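A manual workaround sketch in the meantime: before starting dotrun, stop any leftover container still publishing the project's port, using Docker's `publish` filter. The port value is an assumption; the docker lines are commented because they need a running daemon.

```shell
PORT="${PORT:-8101}"
# stale="$(docker ps --quiet --filter "publish=${PORT}")"
# if [ -n "$stale" ]; then docker stop "$stale"; fi
```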