docker-archive / docker-registry
This is **DEPRECATED**! Please go to https://github.com/docker/distribution
License: Apache License 2.0
If storage_path is set to /srv/docker/ (with a trailing slash), the following exception is thrown:
2013-09-25 20:06:16,545 ERROR: Exception on /v1/repositories/library/busybox/tags [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1687, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1360, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1358, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1344, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/var/dockmaster/registry/registry/toolkit.py", line 195, in wrapper
return f(namespace, repository, *args, **kwargs)
File "/var/dockmaster/registry/registry/toolkit.py", line 169, in wrapper
return f(*args, **kwargs)
File "/var/dockmaster/registry/registry/tags.py", line 32, in get_tags
data[tag_name[4:]] = store.get_content(fname)
File "/var/dockmaster/registry/lib/storage/local.py", line 27, in get_content
with open(path, mode='r') as f:
IOError: [Errno 2] No such file or directory: '/srv/docker/epositories/library/busybox/tag_latest'
With storage_path set to /srv/docker (no trailing slash) everything works fine. The leading r in repositories is simply cut away ...
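The symptom (`/srv/docker/epositories/...`) is the classic off-by-one you get when code strips a storage root from a full path and assumes the root never ends in a slash. A minimal sketch of that failure mode, with hypothetical helper names rather than the registry's actual code:

```python
# Hypothetical repro of the trailing-slash off-by-one; `strip_root` is a
# stand-in, not the actual docker-registry function.
def strip_root(full_path, root):
    # Buggy: always skips one extra character for the separator,
    # which is wrong when `root` already ends with '/'.
    return full_path[len(root) + 1:]

def strip_root_fixed(full_path, root):
    # Normalize the root first so a trailing slash is harmless.
    root = root.rstrip("/")
    return full_path[len(root) + 1:]

full = "/srv/docker/repositories/library/busybox"

print(strip_root(full, "/srv/docker"))        # repositories/library/busybox
print(strip_root(full, "/srv/docker/"))       # epositories/library/busybox  <- 'r' lost
print(strip_root_fixed(full, "/srv/docker/")) # repositories/library/busybox
```

Normalizing `storage_path` once at config-load time would make both forms of the setting behave identically.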
my config:
vagrant@precise64:~$ sudo docker version
Client version: 0.6.3
Go version (client): go1.1.2
Git commit (client): b0a49a3
Server version: 0.6.3
Git commit (server): b0a49a3
Go version (server): go1.1.2
Last stable version: 0.6.3
For the docker-registry I used the latest master ...
I am confused about why my image size doubles after pushing to and pulling from the private registry on the server.
For example, on my local machine the size of the image is
REPOSITORY TAG ID CREATED SIZE
tools:11000/machinelearning ubuntu_sshd df5f2d7b1bf7 30 minutes ago 24.23 MB (virtual 155.7 MB)
but the image size pulling from private registry on the server is
REPOSITORY TAG ID CREATED SIZE
tools:11000/machinelearning ubuntu_sshd df5f2d7b1bf7 39 minutes ago 48.44 MB (virtual 311.4 MB)
Does anyone know the reason for that?
Thanks.
When I try to push to my private registry using the samalba/docker-registry released on Aug 23, I receive the following error:
$ sudo docker push my.docker.registry.com:443/private/testing123
The push refers to a repository [my.docker.registry.com:443/private/testing123] (len: 1)
Sending image list
Pushing repository my.docker.registry.com:443/private/testing123 (1 tags)
Pushing 27cf784147099545
2013/08/30 16:06:54 Failed to upload metadata: Put https://docker-registry/v1/images/27cf784147099545/json: dial tcp 199.101.28.20:443: connection timed out
I replaced my actual repository url with "my.docker.registry.com".
There is a good chance I'm just doing something silly, but wanted to document it here just in case I'm hitting a bug. I'll make sure to follow-up on whether I was able to find a solution in case others also have the same problem.
When trying to push an image to the registry, it returns HTTP 400 with 'already exists' in the body.
As this is not an error, it would be better to return a specific HTTP code: 302, 304, or maybe even 208.
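A sketch of the suggested behavior, using a stand-in response-picking helper rather than the registry's actual Flask handler:

```python
# Hypothetical helper, not the registry's code: decide the response for a
# layer PUT. 304 Not Modified signals "already stored" without implying failure.
def layer_put_response(image_id, existing_layers):
    """Return (status_code, body) for a layer PUT."""
    if image_id in existing_layers:
        # The layer is already stored; report success-with-no-change, not 400.
        return 304, {}
    existing_layers.add(image_id)
    return 201, {"status": "created"}

layers = {"27cf784147099545"}
print(layer_put_response("27cf784147099545", layers))  # (304, {})
print(layer_put_response("deadbeef", layers))          # (201, {'status': 'created'})
```

With 304 the docker client could keep its "already uploaded; skipping" behavior without treating the response as an error path.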
FYI
docker-registry is now included in the Alfred Package Managers Workflow. Hope you find it useful.
Please let me know if you'd like the logo changed. I'm currently using the dotcloud logo.
It would be nice to have a call that takes an image ID and returns the JSON from all children, regardless of the repository (very useful for the builder).
I'm not sure if this is a bug or not, but when I run the registry in the foreground via docker like this:
% docker run samalba/docker-registry
Then when I press Ctrl-C to kill it, it still appears to be running, according to docker ps.
% docker ps
ID IMAGE COMMAND CREATED STATUS PORTS
2458b528bdda samalba/docker-registry:latest /bin/sh -c cd /docke 5 seconds ago Up 4 seconds 49169->5000
Is there a way to kill the registry without having to use 'docker kill'?
I have a 2 GB docker image stored in a private docker registry. When I run sudo docker pull docker.foo.com/bar,
the image starts downloading, but docker pull loads everything into RAM, which is an issue because I have an EC2 instance with 1.6 GB of RAM. When the amount of available RAM hits 0, docker pull crashes. It also sometimes fails before all of the RAM is used; sometimes it crashes with 200 MB still available, which makes me think it's not even a RAM buffer issue.
The error that happens upon exit is as follows:
Downloading 341 MB/1.499 GB (23%)
Error while retrieving image for tag: (exit status 2: tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
); checking next endpoint
I'm using the lxc-docker package from https://launchpad.net/~dotcloud/+archive/lxc-docker
sudo docker version
Client version: 0.5.3
Server version: 0.5.3
Go version: go1.1
Also worth noting that I tried this with a larger EC2 instance with 3.75 GB of RAM (m1.medium), and the pull worked as expected.
The current index.docker.io registry has a web interface, but docker-registry seems to lack that web interface.
Will a web interface be added to docker-registry at some point or will it just stay as it is?
S3 spits back a 400 with:
<Error>
<Code>MalformedXML</Code>
<Message>The XML you provided was not well-formed or did not validate against our published schema</Message>
<RequestId>...someRequestId...</RequestId>
<HostId>...someHostId...</HostId>
</Error>
Any ideas? I've tried upgrading boto without any luck.
When using the default configuration with multiple workers, sessions don't work as expected and checksum matching fails when uploading an image. This is due to Flask.secret_key being generated independently by each worker.
For example, consider running using this command line, straight from the README:
gunicorn -k gevent --max-requests 100 --graceful-timeout 3600 -t 3600 -b localhost:5000 -w 8 wsgi:application
This will spawn eight workers, each of which will generate a different secret key using gen_random_string(64).
When a client attempts to push an image, the session cookie that is set by one worker will not be valid for another worker. This manifests as the following error:
api_error: Checksum not found in cookie
The obvious workaround is to set secret_key in config.yml. However, since there's no documentation about this being needed, nor any useful error messages, it's not obvious what the issue is.
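The workaround amounts to generating one value and sharing it across all workers. A sketch of doing that once and pasting the result into config.yml (the generator below mirrors the shape of gen_random_string(64) but is my own stand-in):

```python
# Stand-in for gen_random_string(64): run ONCE, paste the output into
# config.yml so every gunicorn worker shares the same secret key.
import random
import string

def gen_random_string(length=64):
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

key = gen_random_string()
print("secret_key: " + key)  # line to paste into config.yml
```

Once every worker loads the same secret_key from config, session cookies set by one worker validate on the others and the checksum lookup succeeds.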
Here are some ideas, but I'm not sure what the best way to proceed is:
Any reason why in s3.py, is_secure isn't defaulted to True?
self._s3_conn = \
boto.s3.connection.S3Connection(self._config.s3_access_key,
self._config.s3_secret_key,
is_secure=False)
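A minimal sketch of the change being asked for: resolve the flag from config with a secure default instead of hard-coding False. The `s3_secure` config key is hypothetical, not an existing option:

```python
# Hypothetical: read an `s3_secure` option from config, defaulting to True
# (HTTPS) unless the operator explicitly opts out.
def resolve_is_secure(config):
    return getattr(config, "s3_secure", True)

class _Cfg(object):
    pass

cfg = _Cfg()
print(resolve_is_secure(cfg))   # True: nothing set -> secure by default
cfg.s3_secure = False
print(resolve_is_secure(cfg))   # False: explicit opt-out only

# At the call site in s3.py this would become something like:
#   boto.s3.connection.S3Connection(self._config.s3_access_key,
#                                   self._config.s3_secret_key,
#                                   is_secure=resolve_is_secure(self._config))
```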
As of commit 0ddc686, Docker compresses layer tarballs using lzma compression. This registry code uses the tarfile module, which only supports lzma/xz compression as of Python 3.3. Running the registry on Python 2.7 results in an IOError when calling checksums.compute_tarsum.
I don't know how compatible the registry code is with Python 3, but the boto dependency seems pretty pinned to 2.7. Any plans to address this?
I've set up a private registry and I pushed my container using docker push host.ip.address/image_name, but now when (on a different machine) I try to pull the container (docker pull host.ip.address/image_name) it says: Internal server error: 404 trying to fetch remote history for image_name.
Any ideas why? I can see the image uploaded on the registry.
It would be fantastic to add a Dockerfile to the registry so it can be easily deployed using docker. Seems like the right thing to do :)
Hey there,
I've been trying to get basic authentication working as suggested in the README, but reading through the code, the only Authentication headers that are either read or set are Token based Authentication headers.
I've set authentication up on Apache2 (this definitely works). What happens is:
The first issue is solved by allowing /v1/ping to the world (would need an update if ever v1 is abandoned).
The second issue is not easily solved, because it involves fixing the client.
Please either document how to use basic auth or remove the reference :)
I'm trying to host this registry on dotcloud using Nginx basic auth, but can't seem to get it working properly.
I'm not exactly sure if this is the right place to post this, but I have a hunch it has something to do with the build script and how the nginx.conf is loaded for the service.
Here's my nginx.conf
location ^~ / {
auth_basic "Restricted";
auth_basic_user_file /home/dotcloud/password;
}
I've tried playing around with the location and then just removing the location altogether like:
auth_basic "Restricted";
auth_basic_user_file /home/dotcloud/password;
Has anyone gotten this up and running on dotcloud?
When a repository has a forbidden character within its name, the registry returns a 404 with a full HTML body.
It should be a specific error with a JSON body.
See moby/moby#663
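A sketch of validating a repository name component up front and answering with a JSON error instead of an HTML 404. The allowed-character pattern below is an assumption for illustration, not the index's actual rule:

```python
# Hypothetical validator for a single name component (namespace and repo
# name would each be checked); the regex is illustrative, not canonical.
import json
import re

VALID_REPO = re.compile(r"^[a-z0-9]+(?:[._-][a-z0-9]+)*$")

def check_repo_name(name):
    """Return (status, json_body); 400 with a JSON error for bad names."""
    if not VALID_REPO.match(name):
        return 400, json.dumps({"error": "Invalid repository name: %s" % name})
    return 200, json.dumps({"name": name})

print(check_repo_name("busybox")[0])    # 200
print(check_repo_name("Bad Name!")[0])  # 400, with a machine-readable body
```

A JSON body lets the docker client surface the real reason instead of choking on HTML.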
Hi guys,
Any news on the standalone (index-independent) mode? I noticed that such a mode is in the documentation (http://docs.docker.io/en/latest/api/registry_api.html#without-an-index). However, I looked through the source and couldn't find such an option. Are there any plans to implement this any time soon?
Thanks,
Michael and the NEMALOAD team
This IP check is good for security but is a constraint for some people using a pool of proxies to connect to the internet. Some proxies don't respect X-Forwarded-For and make this check fail. If the connection is full HTTPS, the cookie is impossible to steal (as long as the certificate authority is checked and valid).
The idea is to disable the check in HTTPS and keep it in HTTP.
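The proposed rule is small enough to sketch: keep the client-IP check for plain-HTTP sessions, skip it when the request arrived over HTTPS. The header name below is the conventional reverse-proxy header, not confirmed registry code:

```python
# Hypothetical predicate: only enforce the session IP check when the
# request did NOT arrive over HTTPS (per X-Forwarded-Proto from the proxy).
def should_check_ip(headers):
    proto = headers.get("X-Forwarded-Proto", "http").lower()
    return proto != "https"

print(should_check_ip({"X-Forwarded-Proto": "https"}))  # False: cookie can't be sniffed
print(should_check_ip({"X-Forwarded-Proto": "http"}))   # True
print(should_check_ip({}))                              # True: assume the worst
```

Defaulting to "check" when the header is absent keeps the HTTP behavior unchanged for deployments without a proxy.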
Some people have run into all sorts of trouble with images they've uploaded: they couldn't delete those images in any way.
I've seen a few people complain about this.
It should be possible for a user to delete their own images. If an image isn't referenced by any other image, the user should be able to delete it.
This will become an issue as more people start uploading images to the registry.
It should also be possible to DELETE a whole repository plus the whole namespace (if empty).
The REST API implemented by the registry needs official documentation.
This repo has 3 tags, btw: base, preclone, and latest. Latest is the only one that needed to be updated on index.docker.io.
Hope this helps.
vagrant@precise64:~/strider-dockerfile$ docker push jaredly/strider
2013/07/16 20:03:07 POST /v1.3/images/jaredly/strider/push?registry=
The push refers to a repository [jaredly/strider] (len: 3)
Processing checksums
Sending image list
Pushing repository jaredly/strider to https://registry-1.docker.io (3 tags)
Pushing 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375
Image 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375 already uploaded ; skipping
Pushing tags for rev [5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375] on {https://registry-1.docker.io/repositories/jaredly/strider/tags/latest}
Pushing 27cf784147099545
2013/07/16 20:03:11 HTTP code 401 while uploading metadata: {
"error": "Requires authorization"
}
vagrant@precise64:~/strider-dockerfile$ docker login
Username (jaredly):
2013/07/16 20:03:18 POST /v1.3/auth
Login Succeeded
vagrant@precise64:~/strider-dockerfile$ docker push jaredly/strider
2013/07/16 20:03:21 POST /v1.3/images/jaredly/strider/push?registry=
The push refers to a repository [jaredly/strider] (len: 3)
Processing checksums
Sending image list
Pushing repository jaredly/strider to https://registry-1.docker.io (3 tags)
Pushing 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375
Image 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375 already uploaded ; skipping
Pushing tags for rev [5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375] on {https://registry-1.docker.io/repositories/jaredly/strider/tags/latest}
Pushing 27cf784147099545
Image 27cf784147099545 already uploaded ; skipping
... [skipping lots of already uploaded images] ...
Pushing tags for rev [0c93269bfbdfc718cfbf04eda73e9f65de8e1927d72ebeeb7f4a9713a58962a9] on {https://registry-1.docker.io/repositories/jaredly/strider/tags/base}
Pushing 36ecb5c86a9d1113372ea444ff1a464d71738d5532669f25dfd76d3b6f25e4f3
Image 36ecb5c86a9d1113372ea444ff1a464d71738d5532669f25dfd76d3b6f25e4f3 already uploaded ; skipping
Pushing tags for rev [36ecb5c86a9d1113372ea444ff1a464d71738d5532669f25dfd76d3b6f25e4f3] on {https://registry-1.docker.io/repositories/jaredly/strider/tags/base}
Pushing d03bbd8dcfe6ed3016affde8b727fac726ac1bdf38603d38c2fd39d64cceb6fa
2013/07/16 20:03:39 HTTP code 401 while uploading metadata: {
"error": "Requires authorization"
}
vagrant@precise64:~/strider-dockerfile$ docker login
Username (jaredly):
2013/07/16 20:03:48 POST /v1.3/auth
Login Succeeded
vagrant@precise64:~/strider-dockerfile$ docker push jaredly/strider
2013/07/16 20:03:50 POST /v1.3/images/jaredly/strider/push?registry=
The push refers to a repository [jaredly/strider] (len: 3)
Processing checksums
Sending image list
Pushing repository jaredly/strider to https://registry-1.docker.io (3 tags)
Pushing 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375
Image 5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375 already uploaded ; skipping
Pushing tags for rev [5bf03cb1b9d59087912f251f54db485ad89c27121c8d66d00c73756f3409c375] on {https://registry-1.docker.io/repositories/jaredly/strider/tags/latest}
2013/07/16 20:03:52 Internal server error: 401 trying to push tag latest on jaredly/strider
I would happily pay for a private hosted registry.
+1 if you want to see this.
I was wondering how to delete a repository from my private registry. From searching online, there is no apparent way to do it. Thanks for your help.
After removing /var/lib/docker/graph/checksums as listed in #15, the registry (or docker?) is asking for authentication with the registry server:
$ docker version
Client version: 0.5.0
Server version: 0.5.0
Go version: go1.1
docker-registry$ git log
commit 3c783545febeedd3723a7009325fc42cf65df082
...
vagrant@precise64:~$ docker run -d base apt-get install -y cowsay
1be449ad1e69
vagrant@precise64:~$ docker commit 1be449ad1e69 bacongobbler/cowsay
bf1653c49ad5
vagrant@precise64:~$ docker tag bacongobbler/cowsay registry.domain.com:5000/bacongobbler/cowsay
vagrant@precise64:~$ docker push registry.domain.com:5000/bacongobbler/cowsay
Username (): ^C
2013/07/19 21:07:48 Error: Registration: "Wrong username format (it has to match \"^[a-z0-9]{4,30}$\")"
This is with a completely new image of vagrant. From IRC:
arothfusz: it might not be trying to push to the wrong place -- I suspect it is just *always* checking for locally cached credentials
vagrant@precise64:~$ docker push dpaola2/buildpack
The push refers to a repository [dpaola2/buildpack] (len: 1)
Processing checksums
Sending image list
Pushing repository dpaola2/buildpack to registry-1.docker.io (1 tags)
Pushing 27cf784147099545
Image 27cf784147099545 already uploaded ; skipping
Pushing tags for rev [27cf784147099545] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc
Image b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc already uploaded ; skipping
Pushing tags for rev [b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing 08964efe9750fef9788b7d7d45e6f2daaa48700474c78cd7939258a628caeedb
Image 08964efe9750fef9788b7d7d45e6f2daaa48700474c78cd7939258a628caeedb already uploaded ; skipping
Pushing tags for rev [08964efe9750fef9788b7d7d45e6f2daaa48700474c78cd7939258a628caeedb] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing a2d966cafe8be1de91abfebd1ddf3ab840e0788d7b35fcb238a21106b9973054
Image a2d966cafe8be1de91abfebd1ddf3ab840e0788d7b35fcb238a21106b9973054 already uploaded ; skipping
Pushing tags for rev [a2d966cafe8be1de91abfebd1ddf3ab840e0788d7b35fcb238a21106b9973054] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing d8b55e36b3afbee665af6cbf5727e0492aeec986d66584dbee3d0b2df7892bf7
Image d8b55e36b3afbee665af6cbf5727e0492aeec986d66584dbee3d0b2df7892bf7 already uploaded ; skipping
Pushing tags for rev [d8b55e36b3afbee665af6cbf5727e0492aeec986d66584dbee3d0b2df7892bf7] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing e5f4281fe2d9d210385cb4356b602948ae13fff586f192e8e61d3881e6b1b3ae
Image e5f4281fe2d9d210385cb4356b602948ae13fff586f192e8e61d3881e6b1b3ae already uploaded ; skipping
Pushing tags for rev [e5f4281fe2d9d210385cb4356b602948ae13fff586f192e8e61d3881e6b1b3ae] on {registry-1.docker.io/users/dpaola2/buildpack/latest}
Pushing 18873a4fb6120460d80344eeb7624b3c7f7d2d5e03bab6892285524b7679e360
Buffering to disk 87336960/? (n/a)
87336960/87336960 (100%)
Received HTTP code 400 while uploading layer: {
"error": "Checksum mismatch, ignoring the layer"
}
It would be nice to have the image size as part of the headers so we can have a progress bar.
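A sketch of the idea: the registry already has the layer on disk (or knows its stored size), so it can emit the byte count as a response header for the client to drive a progress bar. The helper below is a stand-in, not registry code:

```python
# Hypothetical: build download headers from the stored layer file so the
# client knows the total size up front.
import os
import tempfile

def layer_response_headers(layer_path):
    size = os.path.getsize(layer_path)
    return {
        "Content-Length": str(size),      # lets clients render a progress bar
        "Content-Type": "application/x-tar",
    }

# quick demonstration with a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1024)
    tmp = f.name
print(layer_response_headers(tmp)["Content-Length"])  # 1024
os.unlink(tmp)
```

For S3-backed storage the size would come from the object metadata instead of os.path.getsize, but the header contract is the same.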
197dc32 appears to break the repository.
ImportError: No module named checksums
Where is this module supposed to come from? It's not available from pip install -r requirements.txt.
Right now, sys.path is modified for the code to be run locally.
ASAP, it should be a Python package (setuptools) in order to avoid messing with sys.path and to make deployment easier.
Implement simple checksum image validation in workflow.py functional test.
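A sketch of what the checksum step in such a functional test could look like. `compute_checksum` here is a stand-in helper, not workflow.py's actual API:

```python
# Hypothetical round-trip check: the checksum computed before pushing a
# layer must match the checksum of the bytes pulled back.
import hashlib

def compute_checksum(layer_bytes):
    return "sha256:" + hashlib.sha256(layer_bytes).hexdigest()

def test_simple_checksum():
    layer = b"fake layer contents"
    expected = compute_checksum(layer)
    pulled = layer  # stand-in for the actual push-then-pull of the layer
    assert compute_checksum(pulled) == expected

test_simple_checksum()
print("checksum round-trip ok")
```

The real test would push through the registry API and pull the layer back before comparing; the assertion shape stays the same.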
I think it would be useful to allow environment variables to be used instead of (or in addition to) editing the config file. Something that follows part III of the 12-factor app methodology (http://12factor.net/config). It would be awesome if I could just take the samalba/docker-registry and run it passing in my storage and notification settings. The way it is now I think I need to commit a new container with my config file installed.
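A sketch of layering environment variables over the YAML config, 12-factor style. The `DOCKER_REGISTRY_` prefix is my assumption for illustration, not an existing convention in this project:

```python
# Hypothetical: any DOCKER_REGISTRY_FOO variable overrides config key "foo",
# so the stock container can be run with storage settings from the environment.
import os

def apply_env_overrides(config, environ=None, prefix="DOCKER_REGISTRY_"):
    environ = os.environ if environ is None else environ
    for key, value in environ.items():
        if key.startswith(prefix):
            # e.g. DOCKER_REGISTRY_S3_BUCKET -> s3_bucket
            config[key[len(prefix):].lower()] = value
    return config

cfg = {"storage": "local", "storage_path": "/tmp/registry"}
env = {"DOCKER_REGISTRY_STORAGE": "s3", "UNRELATED": "ignored"}
print(apply_env_overrides(cfg, env)["storage"])  # s3
```

With something like this, `docker run -e DOCKER_REGISTRY_STORAGE=s3 ...` could replace committing a new container just to change config.yml.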
Implement removal of test images in workflow.py functional test, so we can integrate it in docker-ci.
Hi, sorry for asking many questions recently. We have some projects using docker containers, so I want to set up a private registry ASAP.
I encounter an error while pushing repository to private registry.
On the client it doesn't show any error.
On the server it says:
put_image_layer: Error when computing checksum file could not be opened successfully
The version of the registry I am using is:
HTTP/1.1 200 OK
Server: gunicorn/0.17.4
Date: Mon, 12 Aug 2013 21:26:04 GMT
Connection: keep-alive
Expires: -1
Content-Type: application/json
Pragma: no-cache
Cache-Control: no-cache
Content-Length: 4
X-Docker-Registry-Version: 0.5.5
X-Docker-Registry-Config: dev
I tried to run push with sudo but still get the same error.
Thanks for your help.
There doesn't seem to be a way to list the images stored in a private registry.
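For local storage, a listing can be derived straight from the on-disk layout (the `repositories/<namespace>/<repo>` structure visible in the tracebacks above). Treat the layout and helper as an assumption, not a supported API:

```python
# Hypothetical: walk <storage_path>/repositories to enumerate namespace/repo
# pairs; this mirrors the local-storage layout, not an official endpoint.
import os
import tempfile

def list_repositories(storage_path):
    repos = []
    base = os.path.join(storage_path, "repositories")
    if not os.path.isdir(base):
        return repos
    for namespace in sorted(os.listdir(base)):
        ns_dir = os.path.join(base, namespace)
        for repo in sorted(os.listdir(ns_dir)):
            repos.append("%s/%s" % (namespace, repo))
    return repos

# demonstration against a throwaway layout
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "repositories", "library", "busybox"))
print(list_repositories(root))  # ['library/busybox']
```

Exposing the same walk through an HTTP endpoint (or a small admin script run next to the registry) would answer the listing question without touching the push/pull paths.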
I am trying to run docker pull from a private registry and end up with unexpected EOF errors. I am using the latest registry code at f8723f8 and Docker version 0.6.1, build 5105263.
My nginx configuration looks like this:
upstream docker_registry {
server 127.0.0.1:5000;
}
server {
listen 80;
server_name registry.baremetal.io;
rewrite ^(.*)$ https://registry.baremetal.io$1 permanent;
}
server {
listen 443;
root /dev/null;
index index.html index.htm;
server_name registry.baremetal.io;
ssl on;
ssl_certificate /etc/ssl/baremetal/registry.baremetal.io.crt.pem;
ssl_certificate_key /etc/ssl/baremetal/registry.baremetal.io.key.pem;
client_max_body_size 800M;
chunked_transfer_encoding on;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 900;
location / {
proxy_pass http://docker_registry;
}
}
The nginx logs look like this:
==> /var/log/nginx/error.log <==
2013/09/15 08:15:33 [warn] 22516#0: *128 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/6/00/0000000006 while reading upstream, client: 10.41.142.200, server: registry.baremetal.io, request: "GET /v1/images/e0f7c47bd69aaa0850c35b6204df077cf9ef0fae3e08ce3c44276f4a1af7760c/layer HTTP/1.1", upstream: "http://127.0.0.1:5000/v1/images/e0f7c47bd69aaa0850c35b6204df077cf9ef0fae3e08ce3c44276f4a1af7760c/layer", host: "registry.baremetal.io"
==> /var/log/nginx/access.log <==
10.41.142.200 - - [15/Sep/2013:08:15:33 +0000] "GET /v1/images/8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c/json HTTP/1.1" 200 437 "-" "docker/0.6.1 go/go1.1.2 git-commit/5105263 kernel/3.8.0-30-generic" "-"
10.41.142.200 - - [15/Sep/2013:08:15:33 +0000] "GET /v1/images/afe0c306cb73c6fd8c47d6c8d7aedc935641cfb01af37ec646d44debdbaa4adb/json HTTP/1.1" 200 1427 "-" "docker/0.6.1 go/go1.1.2 git-commit/5105263 kernel/3.8.0-30-generic" "-"
10.41.142.200 - - [15/Sep/2013:08:15:36 +0000] "GET /v1/images/e0f7c47bd69aaa0850c35b6204df077cf9ef0fae3e08ce3c44276f4a1af7760c/layer HTTP/1.1" 200 2244247 "-" "docker/0.6.1 go/go1.1.2 git-commit/5105263 kernel/3.8.0-30-generic" "-"
And the registry logs show:
"172.17.42.1 - - [15/Sep/2013:08:20:58] "GET /v1/images/afe0c306cb73c6fd8c47d6c8d7aedc935641cfb01af37ec646d44debdbaa4adb/layer HTTP/1.0" 200 - "-" "docker/0.6.1 go/go1.1.2 git-commit/5105263 kernel/3.8.0-30-generic"
2013-09-15 08:20:58,531 INFO: "172.17.42.1 - - [15/Sep/2013:08:20:58] "GET /v1/images/afe0c306cb73c6fd8c47d6c8d7aedc935641cfb01af37ec646d44debdbaa4adb/layer HTTP/1.0" 200 - "-" "docker/0.6.1 go/go1.1.2 git-commit/5105263 kernel/3.8.0-30-generic"
Any ideas? Thanks!
Any chance of getting some kind of cryptographic security into docker? If this is going to become a major way of pushing linux images around, I really think that security should be baked in by default, early on.
If dotCloud is recommending that everybody use 'ubuntu', can we make sure that the version we get from the registry has been signed by dotCloud, and isn't some MITM'd version containing malware?
At some point, we could implement a limit on the total size of an uploaded image (and compute the size during the upload).
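Computing the size during the upload means counting bytes as chunks stream in, so an oversized layer can be rejected without buffering it first. A sketch under that assumption (names and the 2 GB cap are illustrative):

```python
# Hypothetical streaming size cap: count bytes per chunk and abort as soon
# as the running total passes the limit, instead of buffering the layer.
import io

MAX_LAYER_SIZE = 2 * 1024 ** 3  # 2 GB; in practice this would come from config

class LayerTooLarge(Exception):
    pass

def store_stream(chunks, sink, max_size=MAX_LAYER_SIZE):
    total = 0
    for chunk in chunks:
        total += len(chunk)
        if total > max_size:
            raise LayerTooLarge("upload exceeds %d bytes" % max_size)
        sink.write(chunk)
    return total

sink = io.BytesIO()
print(store_stream([b"a" * 10, b"b" * 5], sink, max_size=100))  # 15
try:
    store_stream([b"x" * 8], io.BytesIO(), max_size=4)
except LayerTooLarge:
    print("rejected oversized upload")
```

Aborting mid-stream also caps how much partial data ever hits the storage backend.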
Hello, so I thought I had my docker private registry up and running okay, but I keep getting 400 errors.
This is just a test I ran ...
# grab a ubuntu container
imageId=$(sudo docker ps -a | grep ubuntu | grep latest | awk '{ if (NR==2) print $1 }')
#now commit this container
sudo docker commit $imageId localhost:49153/ubuntu_test
#now push this container ...
sudo docker push localhost:49153/ubuntu_test
#now attempt to pull this repo back down to test ...
sudo docker pull localhost:49153/ubuntu_test
When I do this, it pushes okay to the server. I've tried building a local container using
docker run samalba/docker-registry
and also using it on dotcloud. When I ran on dotcloud I was able to verify that the image was uploaded to s3 properly. But either way, when I attempt to pull the image down, I get this error:
Pulling repository localhost:49153/ubuntu_test
Pulling image 8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c (latest) from localhost:49153/ubuntu_test
Error while retrieving image for tag: (Internal server error: 400 trying to fetch remote history for 8dbd9e392a964056420e5d58ca5cc376ef18e2de93b5cc90e868a1bbc8318c1c); checking next endpoint
I can't imagine this being an error on how I'm deploying the servers as two completely different deployments gave the error and everything else seemed to be fine. Has anyone had issues like this in the past?
It seems it would be trivial to reconstruct the dockerfile from image metadata, right? That would be really awesome.
This does not work:
docker run samalba/docker-registry
Tried on a Fedora 19 installation.
The lib/storage.py implements an abstracted storage library that supports the local filesystem and S3. When using S3, no writes are made to the local disk, which is good for scalability. However, accessing tiny files several times on S3 hurts performance a lot.
The idea of this ticket is to implement a caching layer using Redis only for small files access (the one fetched and written using get_content and put_content methods).
The cache should be optional (it'll work only if you configure a Redis in the config file) and completely transparent when using the storage api.
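A sketch of what a transparent wrapper around get_content/put_content could look like. The `cache` object only needs get/set, which matches redis-py's StrictRedis; the in-memory fake below just demonstrates that the storage API is preserved:

```python
# Hypothetical cache wrapper around the storage API; `cache` could be a
# redis.StrictRedis instance (get/set), here faked with a dict for the demo.
class CachedStorage(object):
    def __init__(self, storage, cache):
        self._storage = storage
        self._cache = cache

    def get_content(self, path):
        data = self._cache.get(path)
        if data is not None:
            return data                   # small-file hit: no S3 round-trip
        data = self._storage.get_content(path)
        self._cache.set(path, data)       # populate on miss
        return data

    def put_content(self, path, content):
        self._storage.put_content(path, content)
        self._cache.set(path, content)    # write-through keeps cache fresh

class DictCache(object):
    def __init__(self): self._d = {}
    def get(self, k): return self._d.get(k)
    def set(self, k, v): self._d[k] = v

class FakeStore(object):
    def __init__(self): self.reads, self._d = 0, {}
    def get_content(self, p):
        self.reads += 1
        return self._d[p]
    def put_content(self, p, c): self._d[p] = c

store = FakeStore()
cached = CachedStorage(store, DictCache())
cached.put_content("tags/latest", b"abc123")
cached.get_content("tags/latest")
cached.get_content("tags/latest")
print(store.reads)  # 0: both reads served from cache after the write-through
```

Because the wrapper exposes the same two methods, callers can be handed either the plain storage or the cached one depending on whether Redis is configured, which keeps the cache fully optional.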
Possibly related to #39
Or possibly just a red herring. But both had to do with checksums.
Anyway, I noticed new behavior with the latest Docker release. On any push, the registry spits out:
HTTP code 400 while uploading metadata: {
"error": "Checksum not found in Cookie"
}
So I MITM'd this and here are the headers:
proxy request spec { method: 'PUT',
host: '127.0.0.1',
port: 8300,
headers:
{ host: '127.0.0.1',
'x-forwarded-proto': 'http',
'x-forwarded-for': '127.0.0.1',
'x-forwarded-port': '80',
connection: 'close',
'content-length': '0',
'user-agent': 'docker/0.5.3-dev go/go1.1.2 git-commit/9ff4c96+CHANGES kernel/3.8.0-27-generic',
authorization: 'Token a2F6bWVyOno=',
cookie: 'session=U+nZglmTy++TSlCE6iKVu94HTZo=?checksum=KGxwMApTJ3NoYTI1NjozOWZlZmRiOTlhNjVlY2ZmNGJhM2EzOTJkYzkwM2RiODBiZjI0NDJhYjMwMGY2NjgxMzIyNWFjM2YwZGViYTFiJwpwMQphUyd0YXJzdW0rc2hhMjU2OjdlOTJhNGY5OGI1NmE1OTM3OTRjOTcwOWIyMTRkZWYxNTM1ZGFiNmRhZjMzZmI0MjdkZTZlMDI0Y2VkNjI5Y2YnCnAyCmEu; session=U+nZglmTy++TSlCE6iKVu94HTZo=?checksum=KGxwMApTJ3NoYTI1NjozOWZlZmRiOTlhNjVlY2ZmNGJhM2EzOTJkYzkwM2RiODBiZjI0NDJhYjMwMGY2NjgxMzIyNWFjM2YwZGViYTFiJwpwMQphUyd0YXJzdW0rc2hhMjU2OjdlOTJhNGY5OGI1NmE1OTM3OTRjOTcwOWIyMTRkZWYxNTM1ZGFiNmRhZjMzZmI0MjdkZTZlMDI0Y2VkNjI5Y2YnCnAyCmEu',
'x-docker-checksum': 'tarsum+sha256:7e92a4f98b56a593794c9709b214def1535dab6daf33fb427de6e024ced629cf',
'accept-encoding': 'gzip' },
path: '/v1/images/27cf784147099545/checksum' }
I see a checksum in the cookie but I'm no python/flask guy.
Repro is as simple as booting up the registry and trying to push base. Docker 0.5.3 and the latest docker-registry.
This is not an issue but more of a feature request. Would be nice to have my own db of users in the private registry and have the ability to pass in digest or basic auth for a particular user when pulling an image. This would enable me to have multiple users with their own images in my private registry that cannot pull each other's images.
Make sure you're logged out of the docker index for this.
Go to an arbitrary user's page - it appears both that you're logged in and logged in as the user. This looks like it's just a UI thing as going anywhere else will show a normal display up top:
Click rgbkrk/ipython, back to normal:
Note: It doesn't show any of the buttons a logged in user would see.
I'm trying to set up a private docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000 and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error: Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I change localhost to my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when localhost is being passed from nginx? Unless that error comes from the server, in which case why can't it find localhost?
Server is on EC2 if it helps.
I would like to be able to run my own registry simply by typing "docker run registry"
$ docker search registry
$
(docker-registry)voltagex@dream:/media/files/linux/docker-registry$ gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 wsgi:application
2013-08-10 00:09:01 [14916] [INFO] Starting gunicorn 0.17.4
2013-08-10 00:09:01 [14916] [INFO] Listening at: http://0.0.0.0:5000 (14916)
2013-08-10 00:09:01 [14916] [INFO] Using worker: gevent
2013-08-10 00:09:01 [14921] [INFO] Booting worker with pid: 14921
2013-08-10 00:09:03 [14921] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in spawn_worker
worker.init_process()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 131, in init_process
super(GeventWorker, self).init_process()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 100, in init_process
self.wsgi = self.app.wsgi()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 106, in wsgi
self.callable = self.load()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 27, in load
return util.import_app(self.app_uri)
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/util.py", line 353, in import_app
__import__(module)
File "/media/files/linux/docker-registry/wsgi.py", line 11, in <module>
import registry
File "/media/files/linux/docker-registry/registry/__init__.py", line 3, in <module>
from .app import app
File "/media/files/linux/docker-registry/registry/app.py", line 11, in <module>
cfg = config.load()
File "/media/files/linux/docker-registry/lib/config.py", line 31, in load
with open(config_path) as f:
IOError: [Errno 2] No such file or directory: '/media/files/linux/docker-registry/lib/../config.yml'
Traceback (most recent call last):
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 473, in spawn_worker
worker.init_process()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/workers/ggevent.py", line 131, in init_process
super(GeventWorker, self).init_process()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 100, in init_process
self.wsgi = self.app.wsgi()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 106, in wsgi
self.callable = self.load()
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 27, in load
return util.import_app(self.app_uri)
File "/media/files/linux/docker-registry/local/lib/python2.7/site-packages/gunicorn/util.py", line 353, in import_app
__import__(module)
File "/media/files/linux/docker-registry/wsgi.py", line 11, in <module>
import registry
File "/media/files/linux/docker-registry/registry/__init__.py", line 3, in <module>
from .app import app
File "/media/files/linux/docker-registry/registry/app.py", line 11, in <module>
cfg = config.load()
File "/media/files/linux/docker-registry/lib/config.py", line 31, in load
with open(config_path) as f:
IOError: [Errno 2] No such file or directory: '/media/files/linux/docker-registry/lib/../config.yml'
2013-08-10 00:09:03 [14921] [INFO] Worker exiting (pid: 14921)
2013-08-10 00:09:04 [14916] [INFO] Shutting down: Master
2013-08-10 00:09:04 [14916] [INFO] Reason: Worker failed to boot.