
kraken's Introduction

Kraken is a P2P-powered Docker registry that focuses on scalability and availability. It is designed for Docker image management, replication, and distribution in a hybrid cloud environment. With pluggable backend support, Kraken can easily integrate into existing Docker registry setups as the distribution layer.

Kraken has been in production at Uber since early 2018. In our busiest cluster, Kraken distributes more than 1 million blobs per day, including 100k 1G+ blobs. At its peak production load, Kraken distributes 20K 100MB-1G blobs in under 30 sec.

Below is a visualization of a small Kraken cluster at work (the animated demo is not reproduced here).

Features

Following are some highlights of Kraken:

  • Highly scalable. Kraken is capable of distributing Docker images at more than 50% of the maximum download speed limit on every host. Cluster size and image size do not have a significant impact on download speed.
    • Supports at least 15k hosts per cluster.
    • Supports arbitrarily large blobs/layers. We normally limit max size to 20G for the best performance.
  • Highly available. No component is a single point of failure.
  • Secure. Supports uploader authentication and data integrity protection through TLS.
  • Pluggable storage options. Instead of managing data, Kraken plugs into reliable blob storage options, like S3, GCS, ECR, HDFS or another registry. The storage interface is simple, and new options are easy to add (see the configuration sketch after this list).
  • Lossless cross-cluster replication. Kraken supports rule-based async replication between clusters.
  • Minimal dependencies. Other than pluggable storage, Kraken only has an optional dependency on DNS.
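
To make "pluggable storage" concrete, backends are typically selected per namespace in a component's configuration. Below is a minimal sketch assuming a testfs-style HTTP fileserver backend; the keys and backend names here are illustrative and should be checked against the documentation:

backends:
  - namespace: library/.*
    backend:
      testfs:                  # could instead be s3, gcs, hdfs, registry, ...
        addr: localhost:14000  # storage endpoint
        root: blobs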

Design

The high-level idea of Kraken is to have a small number of dedicated hosts seeding content to a network of agents running on each host in the cluster.

A central component, the tracker, will orchestrate all participants in the network to form a pseudo-random regular graph.

Such a graph has high connectivity and a small diameter. As a result, even with only one seeder and having thousands of peers joining in the same second, all participants can reach a minimum of 80% max upload/download speed in theory (60% with current implementation), and performance doesn't degrade much as the blob size and cluster size increase. For more details, see the team's tech talk at KubeCon + CloudNativeCon.
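
To make the graph idea concrete, here is a minimal, self-contained Go sketch (not Kraken's actual tracker code) of the core trick: hand every announcing peer a small random subset of known peers. In aggregate this forms an approximately random regular graph, whose diameter grows only logarithmically with the number of peers.

package main

import (
	"fmt"
	"math/rand"
)

// pickPeers returns up to k peers chosen uniformly at random from the peers
// the tracker knows about. If every peer connects to such a subset, the
// resulting graph is roughly k-regular with high connectivity.
func pickPeers(known []string, k int) []string {
	idx := rand.Perm(len(known))
	if k > len(known) {
		k = len(known)
	}
	out := make([]string, 0, k)
	for _, i := range idx[:k] {
		out = append(out, known[i])
	}
	return out
}

func main() {
	known := []string{"peer1:16000", "peer2:16000", "peer3:16000", "peer4:16000"}
	fmt.Println(pickPeers(known, 2)) // e.g. [peer3:16000 peer1:16000]
}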

Architecture

  • Agent
    • Deployed on every host
    • Implements Docker registry interface
    • Announces available content to tracker
    • Connects to peers returned by the tracker to download content
  • Origin
    • Dedicated seeders
    • Stores blobs as files on disk backed by pluggable storage (e.g. S3, GCS, ECR)
    • Forms a self-healing hash ring to distribute the load
  • Tracker
    • Tracks which peers have what content (both in-progress and completed)
    • Provides ordered lists of peers to connect to for any given blob
  • Proxy
    • Implements Docker registry interface
    • Uploads each image layer to the responsible origin (remember, origins form a hash ring)
    • Uploads tags to build-index
  • Build-Index
    • Mapping of the human-readable tag to blob digest
    • No consistency guarantees: the client should use unique tags
    • Powers image replication between clusters (simple duplicated queues with retry)
    • Stores tags as files on disk backed by pluggable storage (e.g. S3, GCS, ECR)
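
The "responsible origin" above comes from the hash ring: the codebase references rendezvous hashing (see the rendezvous_test issue later on this page), in which every client independently scores each origin against a blob digest and picks the winner. A minimal illustrative Go sketch, not Kraken's implementation:

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// owner picks the origin with the highest hash(origin, digest) score, so every
// proxy and origin independently agrees on which origin seeds a given blob,
// and removing a host only remaps the blobs that host owned (self-healing).
func owner(origins []string, digest string) string {
	var best string
	var bestScore uint64
	for _, o := range origins {
		h := sha256.Sum256([]byte(o + ":" + digest))
		if score := binary.BigEndian.Uint64(h[:8]); score >= bestScore {
			best, bestScore = o, score
		}
	}
	return best
}

func main() {
	origins := []string{"origin1:15002", "origin2:15002", "origin3:15002"}
	fmt.Println(owner(origins, "sha256:e30bae4a..."))
}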

Benchmark

The following data is from a test where a 3G Docker image with 2 layers is downloaded by 2600 hosts concurrently (5200 blob downloads), with 300MB/s speed limit on all agents (using 5 trackers and 5 origins):

  • p50 = 10s (at speed limit)
  • p99 = 18s
  • p99.9 = 22s

Usage

All Kraken components can be deployed as Docker containers. To build the Docker images:

$ make images

For information about how to configure and use Kraken, please refer to the documentation.

Kraken on Kubernetes

You can use our example Helm chart to deploy Kraken (with an example HTTP fileserver backend) on your k8s cluster:

$ helm install --name=kraken-demo ./helm

Once deployed, every node will have a Docker registry API exposed on localhost:30081. For an example pod spec that pulls images from the Kraken agent, see the example.
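
For illustration, a minimal hypothetical pod spec (not the repository's actual example) that pulls through the node-local agent:

# Hypothetical pod spec sketch: the image is pulled through the Kraken agent's
# registry API exposed on every node at localhost:30081.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: demo
      image: localhost:30081/library/busybox:latest
      command: ["sleep", "3600"]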

For more information on k8s setup, see README.

Devcluster

To start a herd container (which contains origin, tracker, build-index and proxy) and two agent containers with development configuration:

$ make devcluster

Docker for Mac is required to run the devcluster on your laptop. For more information on devcluster, please check out the devcluster README.
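
As a quick smoke test against the devcluster (ports as used in the examples later on this page: proxy on localhost:15000, agent on localhost:16000):

$ docker tag busybox:latest localhost:15000/library/busybox:latest
$ docker push localhost:15000/library/busybox:latest
$ docker pull localhost:16000/library/busybox:latest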

Comparison With Other Projects

Dragonfly from Alibaba

A Dragonfly cluster has one or a few "supernodes" that coordinate the transfer of every 4MB chunk of data in the cluster.

While the supernode would be able to make optimal decisions, the throughput of the whole cluster is limited by the processing power of one or a few hosts, and the performance would degrade linearly as either blob size or cluster size increases.

Kraken's tracker only helps orchestrate the connection graph and leaves the negotiation of actual data transfer to individual peers, so Kraken scales better with large blobs. On top of that, Kraken is HA and supports cross-cluster replication, both of which are required for a reliable hybrid cloud setup.

BitTorrent

Kraken was initially built with a BitTorrent driver; however, we ended up implementing our own P2P driver based on the BitTorrent protocol to allow for tighter integration with storage solutions and more control over performance optimizations.

Kraken's problem space is slightly different from what BitTorrent was designed for. Kraken's goal is to reduce global max download time and communication overhead in a stable environment, while BitTorrent was designed for an unpredictable and adversarial environment, so it needs to preserve more copies of scarce data and defend against malicious or badly behaving peers.

Despite the differences, we re-examine Kraken's protocol from time to time, and if it's feasible, we hope to make it compatible with BitTorrent again.

Limitations

  • If Docker registry throughput is not the bottleneck in your deployment workflow, switching to Kraken will not magically speed up your docker pull. To speed up docker pull, consider switching to Makisu to improve layer reusability at build time, or tweaking compression ratios, since docker pull spends most of its time on data decompression.
  • Mutating tags (e.g. updating a latest tag) is allowed; however, a few things will not work: tag lookups immediately afterwards will still return the old value due to Nginx caching, and replication probably won't trigger. We are working on better support for this functionality. If you need tag mutation support right now, reduce the cache interval of the build-index component. If you also need replication in a multi-cluster setup, consider setting up another Docker registry as Kraken's backend.
  • Theoretically, Kraken should distribute blobs of any size without significant performance degradation, but at Uber we enforce a 20G limit and cannot endorse the production use of ultra-large blobs (i.e. 100G+). Peers enforce connection limits on a per-blob basis, and new peers might be starved for connections if no peers become seeders relatively soon. If you have ultra-large blobs you'd like to distribute, we recommend breaking them into <10G chunks first (see the sketch just below).
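
For example, chunking could be done with GNU coreutils split before registering the pieces, and reassembly with cat on the consuming side (illustrative only; filenames are hypothetical):

# Split a 100G+ blob into 10G chunks, distribute the chunks via Kraken,
# then reassemble them on the consuming host.
$ split -b 10G huge.blob huge.blob.part-
$ cat huge.blob.part-* > huge.blob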

Contributing

Please check out our guide.

Contact

To contact us, please join our Slack channel.

kraken's People

Contributors

0xflotus, adtm, alexeldeib, ankit7201, apourchet, asankaran, banka-pranoy, bpaquet, c-buisson, codygibb, edoakes, eltonzhu, evelynl94, igmor, jimoosciuc, lionelnicolas, magnuschatt, mmpei, orirawlings, redmoses, rmalpani-uber, sootysec, squidwarrior, talaniz, tedgxt, thaotrann, xinlongz1, yashrajbharti, yiranwang52, zerosnake0


kraken's Issues

uuid.NewV4().string()

lib/backend/hdfsbackend/client.go:117:86: multiple-value uuid.NewV4() in single-value context

Make unit test less flaky

Flaky tests:

TestRingLocationsReturnsFirstHostWhenAllHostsUnhealthy

rendezvous_test

github.com/uber/kraken/lib/torrent/scheduler (sometimes stuck on travis CI)

--- FAIL: TestDownloadTorrentWhenPeersAllHaveDifferentPiece (0.00s)
panic: listen tcp :37896: bind: address already in use [recovered]
	panic: listen tcp :37896: bind: address already in use

Compile master failed

See the dependency in glide.yaml:

- package: github.com/docker/distribution
  version: ^2.6.2

This means the package can be >=2.6.2 && <3.0.0. In practice, compiling the master branch fails when v2.6.2 is selected, because the code requires registry v2.7.0, which is not compatible with v2.6.2.
The dependency should therefore be pinned more exactly.
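
For instance, pinning an exact version in glide.yaml avoids the incompatible-minor-version problem (illustrative; the exact version to pin depends on what master actually requires):

- package: github.com/docker/distribution
  version: 2.7.0   # pinned exactly, instead of the range ^2.6.2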

Tune down endgame mode

There is no need to send piece requests to all connections in endgame mode; 2 or 3 requests per piece should suffice.

Support multiple P2P networks

This is useful when cloud hosts are used as k8s nodes and the network is isolated between multiple k8s clusters.
Labeling agents and handling them in groups would make Kraken support more scenarios.

uuid.NewV4().string()

Compiling reports this error:
lib/backend/hdfsbackend/client.go:117:86: multiple-value uuid.NewV4() in single-value context
Change lib/backend/hdfsbackend/client.go:117 to:
uid, _ := uuid.NewV4() // newer satori/go.uuid returns (UUID, error)
uploadPath := path.Join(c.config.RootDirectory, c.config.UploadDirectory, uid.String())

k8s cluster push image internal server error

$ docker push localhost:30081/images:5.0
The push refers to repository [localhost:30081/images]
76c5d4259e40: Retrying in 1 second
4c8e44b308d0: Retrying in 1 second
dc5ef0c854b4: Retrying in 1 second
03eafa792876: Retrying in 1 second
f99f83132c0a: Retrying in 1 second
6270adb5794c: Waiting
received unexpected HTTP status: 500 Internal Server Error

Uploading & downloading blobs

I've got some questions on uploading & downloading blobs.

  1. For what reason must blobs be uploaded in chunks? How should a blob be divided into chunks?

  2. Is it possible to alter the content of an uploaded blob? Can we simply re-upload the relevant chunk?

  3. Is it possible to download only one chunk of an uploaded blob?

ImagePullBackOff when deploying Kraken on a k8s cluster

I have been following the instructions at https://github.com/uber/kraken#usage to deploy Kraken on a k8s cluster. I got an ImagePullBackOff error when I tried microk8s.kubectl apply -f demo.json:

NAME                                  READY   STATUS             RESTARTS   AGE
demo-cdb996bf9-5fvpp                  0/1     ImagePullBackOff   0          7m11s
kraken-agent-vkgrr                    1/1     Running            1          103m
kraken-build-index-54b5f7fb8c-k5gvr   1/1     Running            0          103m
kraken-build-index-54b5f7fb8c-nv8xk   1/1     Running            0          103m
kraken-build-index-54b5f7fb8c-rk4tg   1/1     Running            0          103m
kraken-origin-579cc65b46-2l8zn        1/1     Running            0          103m
kraken-origin-579cc65b46-gsgkh        1/1     Running            0          103m
kraken-origin-579cc65b46-n9vb6        1/1     Running            0          103m
kraken-proxy-594c8648c4-rhd7z         1/1     Running            1          103m
kraken-testfs-b5fdfdd98-j8nwj         1/1     Running            0          103m
kraken-tracker-76455fffbf-4887q       2/2     Running            0          103m
kraken-tracker-76455fffbf-bdbhk       2/2     Running            0          103m
kraken-tracker-76455fffbf-sdrf7       2/2     Running            0          103m
microbot-75bdc8d8bf-2v95w             1/1     Running            3          12d
microbot-75bdc8d8bf-d8qq6             1/1     Running            3          12d

Also, when I tried docker pull localhost:30081/library/busybox:latest on my local machine, I got:
Error response from daemon: manifest for localhost:30081/library/busybox:latest not found

fossa breaking travis-ci builds

Our fossa integration is constantly failing with a cryptic error message:

fossa analyze
⣟ Analyzing module (1/8): github.com/uber/kraken/agent FATAL Could not analyze: no revision found for package name

TestTLSClientBadAuth fails on go1.12

func TestTLSClientBadAuth(t *testing.T) {
	require := require.New(t)
	c, cleanup := genCerts(t)
	defer cleanup()
	addr, _, stop := startTLSServer(t, c.CAs)
	defer stop()
	badConfig := &TLSConfig{}
	badtls, err := badConfig.BuildClient()
	require.NoError(err)
	_, err = Get("https://"+addr+"/", SendTLS(badtls))
	require.True(IsNetworkError(err))
}

// IsNetworkError returns true if err is a NetworkError.
func IsNetworkError(err error) bool {
	_, ok := err.(NetworkError)
	return ok
}

The test expects the error to be a NetworkError, but on go1.12 it fails with:

2019/03/06 11:50:04 http: TLS handshake error from [::1]:51121: remote error: tls: bad certificate
(httputil.StatusError) GET http://[::]:51120/ 400: Client sent an HTTP request to an HTTPS server.

Checksum mismatch at github.com/apache/thrift

The following error is displayed after executing make images:

go: verifying github.com/apache/[email protected]: checksum mismatch
downloaded: h1:Fv9bK1Q+ly/ROk4aJsVMeuIwPel4bEnD8EPiI91nZMg=
go.sum: h1:CZI8h5fmYwCCvd2RMSsjLqHN6OqABlWJweFKxz4vdEs=
make: *** [agent/agent] Error 1

I would like to modify go.sum as follows:
-github.com/apache/thrift v0.0.0-20161221203622-b2a4d4ae21c7 h1:CZI8h5fmYwCCvd2RMSsjLqHN6OqABlWJweFKxz4vdEs=
+github.com/apache/thrift v0.0.0-20161221203622-b2a4d4ae21c7 h1:Fv9bK1Q+ly/ROk4aJsVMeuIwPel4bEnD8EPiI91nZMg=

Support for https proxy

I'm trying to proxy to a private https Nexus repository, but http seems to be hardcoded for the tag query.

Support preheat when using registry as a backend

Preheating is important when users push images to the registry directly. We have designed an architecture based on registry notifications; please see the diagram below (not reproduced here). I will raise a PR later.

What is the relationship with Harbor?

Kraken vs. docker-registry vs. Harbor.
For example: when I use Kubernetes, I use Harbor first. If I want to use Kraken, should I delete Harbor entirely?
Is kraken == docker-registry == harbor?

Image pull error when deploying Kraken on a k8s cluster with Helm

I used the example Helm chart to deploy Kraken; the results are as follows:

[root@master kraken]# kubectl get po
NAME                                  READY   STATUS             RESTARTS   AGE
demo-85c749f4cd-hhgvw                 0/1     ImagePullBackOff   0          2m48s
kraken-agent-kr52p                    1/1     Running            3          3m36s
kraken-build-index-6756d759f8-cgvls   1/1     Running            0          3m36s
kraken-build-index-6756d759f8-dw2z4   1/1     Running            0          3m36s
kraken-build-index-6756d759f8-qdtq4   1/1     Running            0          3m36s
kraken-origin-79f87f699f-cfhhj        1/1     Running            0          3m36s
kraken-origin-79f87f699f-j4lcj        1/1     Running            0          3m36s
kraken-origin-79f87f699f-qt5pv        1/1     Running            0          3m36s
kraken-proxy-84c5dc8cff-bgpjp         1/1     Running            3          3m36s
kraken-testfs-865c984867-zmknd        1/1     Running            0          3m36s
kraken-tracker-77d568d4bd-nf2zk       2/2     Running            0          3m36s
kraken-tracker-77d568d4bd-tssv6       2/2     Running            0          3m36s
kraken-tracker-77d568d4bd-vgrlw       2/2     Running            0          3m36s
[root@master kraken]# docker pull localhost:30081/library/busybox:latest
Error response from daemon: Get http://localhost:30081/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
[root@master kraken]# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
53071b97a884: Pull complete
Digest: sha256:4b6ad3a68d34da29bf7c8ccb5d355ba8b4babcad1f99798204e7abb43e54ee3d
Status: Downloaded newer image for busybox:latest

I can't pull an image like localhost:30081/library/busybox:latest, and there are no related records in the agent log. What should I do?

download blob timeout in kraken agent

We use Kraken to distribute Docker images, with about 400 nodes pulling the same image concurrently. Everything worked well except for two timeouts. Kraken version: v0.1.2.
See this log from the agent's nginx log:
2019/05/24 03:09:24 [error] 22#22: *6 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /v2/push-perform/harbortestimages/blobs/sha256:e30bae4a4f776956ec66d75e10437e2d39c1624d53b29e5050224238c3d33eb4 HTTP/1.1", upstream: "http://unix:/tmp/kraken-agent-registry.sock/v2/push-perform/harbortestimages/blobs/sha256:e30bae4a4f776956ec66d75e10437e2d39c1624d53b29e5050224238c3d33eb4", host: "localhost:13047"
See the attachments: agent47.log is from the agent that timed out, and agent46.log is from an agent that worked well.
Please do me a favor and take a look; I will do some deeper investigation too.

HOW TO USE

I see https://github.com/uber/kraken/blob/master/examples/devcluster/README.md

For example, I have an image "asd/qwe/busybox:0.0.1":
docker tag asd/qwe/busybox:0.0.1 localhost:15000/qwe/busybox:0.0.1
docker push localhost:15000/qwe/busybox:0.0.1
On another machine:
docker pull localhost:16000/qwe/busybox:0.0.1
In the end the image is named "localhost:16000/qwe/busybox:0.0.1", but my original image was named "asd/qwe/busybox:0.0.1"; the image name has changed.
Also, how do I configure "host.docker.internal"?
And how should all the components (agent, testfs, proxy, build-index, redis-server, tracker, origin) be deployed?
For example: I have 1 master and 3 agents.

Flaky unit test

This test failed on travis:

--- FAIL: TestAnnouncerAnnounceUpdatesInterval (0.76s)
    require.go:312: 
        	Error Trace:	announcer_test.go:45
        	            				announcer_test.go:111
        	Error:      	Tick timed out
        	Test:       	TestAnnouncerAnnounceUpdatesInterval
FAIL
coverage: 82.6% of statements
FAIL	github.com/uber/kraken/lib/torrent/scheduler/announcer	0.779s

Something wrong with make devcluster

I was interested in this project, so I gave it a try. After having a look at the documentation, the fastest way to try it seemed to be running make devcluster. Everything looked so simple that I thought I could get started quickly. Unfortunately, I failed to do that.
Then, I found some problems:

  • Building the kraken-herd image is very slow
  • We should use --config instead of -config here
  • The container kraken-herd exits with an exception
# docker ps -a
f32695e2870b        kraken-herd:dev                                                  "./herd_start_proces…"   About a minute ago   Exited (1) 57 seconds ago

I tried to find the reason for that. However, I am not familiar with the project, and I am afraid I cannot solve the problem quickly, so I need some help.

# docker logs f32695e2870b
7:C 26 Feb 2019 12:33:42.189 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7:C 26 Feb 2019 12:33:42.189 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=7, just started
7:C 26 Feb 2019 12:33:42.189 # Configuration loaded
7:M 26 Feb 2019 12:33:42.190 * Running mode=standalone, port=14001.
7:M 26 Feb 2019 12:33:42.190 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
7:M 26 Feb 2019 12:33:42.190 # Server initialized
7:M 26 Feb 2019 12:33:42.190 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7:M 26 Feb 2019 12:33:42.190 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
7:M 26 Feb 2019 12:33:42.190 * Ready to accept connections
kraken-origin exited unexpectedly. Logs:
2019-02-26T12:33:45.219Z	info	cmd/cmd.go:105	Configuring origin with hostname 'host.docker.internal'
candidate: /etc/kraken/config/origin/development.yaml
2019-02-26T12:33:45.229Z	WARN	metrics/metrics.go:62	Skipping version emitting: no GIT_DESCRIBE env variable found
2019-02-26T12:33:45.229Z	WARN	store/cleanup.go:81	TTL disabled for {/var/cache/kraken/kraken-origin/upload/}
2019-02-26T12:33:45.229Z	WARN	store/cleanup.go:81	TTL disabled for {/var/cache/kraken/kraken-origin/cache/}
OK    00001_tagreplication_init.go
OK    00002_writeback_init.go
goose: no migrations to run. current version: 2
2019-02-26T12:33:45.257Z	WARN	networkevent/producer.go:58	Kafka network events disabled
2019-02-26T12:33:45.257Z	WARN	bandwidth/limiter.go:77	Bandwidth limits disabled
2019-02-26T12:33:45.257Z	INFO	scheduler/scheduler.go:194	Scheduler starting as peer a952045ae4fb26449fbda9a97e9fe9c5f3430a84 on addr host.docker.internal:15001
2019-02-26T12:33:45.257Z	FATAL	cmd/cmd.go:181	Error creating cluster host list: invalid config: no dns record or static list supplied
2019-02-26T12:33:45.257Z	INFO	scheduler/scheduler.go:319	Listening on [::]:15001
nvalid config: no dns record or static list supplied

One more thing: another reason I can't solve the problem is that I cannot find a good description of the configuration file. Beyond that, there are no good comments on the struct fields, so it would be nice to add them.

Looking forward to your reply. 😄

cannot push image

time="2019-05-27T03:49:32.573187371Z" level=error msg="response completed with error" err.code=unknown err.detail="uploads are disabled" err.message="unknown error" go.version=go1.11.4 http.request.host=registry-backend http.request.id=adde9e4d-173e-4b0a-b664-876ed7620dac http.request.method=POST http.request.remoteaddr=172.17.0.1 http.request.uri="/v2/library/redis/blobs/uploads/" http.request.useragent="docker/18.09.4 go/go1.10.8 git-commit/d14af54 kernel/4.15.0-50-generic os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.4 (linux))" http.response.contenttype="application/json" http.response.duration=7.410565ms http.response.status=500 http.response.written=70 vars.name="library/redis"
@ - - [27/May/2019:03:49:32 +0000] "POST /v2/library/redis/blobs/uploads/ HTTP/1.0" 500 70 "" "docker/18.09.4 go/go1.10.8 git-commit/d14af54 kernel/4.15.0-50-generic os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.4 \(linux\))"

Failed to pull image when tracker pod is cycled

Issue: failed to pull an image when a tracker pod is cycled.
Steps to reproduce:
I have Kraken set up on a cluster which uses Consul for service discovery.
When a tracker pod is killed and brought back up (i.e. the tracker pod's IP address has changed), the agent still tries to connect to the dead pod's IP, causing the following error:

"transferer download: scheduler: create torrent: download metainfo: network error: Get <>/metainfo: dial tcp <deadpod>:80: connect: no route to host"

Thoughts: I looked at the code, and it appears the agent holds a PassiveRing of trackers, and

func (r *dnsResolver) resolve() (stringset.Set, error)

for refreshing hosts doesn't get called after the initialization step, so the issue persists as long as the agent pod lives. I added c.ring.Refresh() to https://github.com/uber/kraken/blob/master/tracker/metainfoclient/client.go#L54; it refreshes the tracker hash ring and fixes the issue. Should we add a Monitor to refresh periodically? (See the sketch below.)
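
A minimal Go sketch of that periodic refresh (the PassiveRing/Refresh names come from the issue above; the loop itself is illustrative, not Kraken's code):

package main

import "time"

// refresher is the minimal surface this sketch needs; per the issue, Kraken's
// hashring PassiveRing exposes a Refresh method that re-resolves hosts.
type refresher interface {
	Refresh()
}

// monitor periodically refreshes the ring so agents pick up new tracker IPs
// after a pod is cycled, instead of dialing dead addresses forever.
func monitor(r refresher, interval time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			r.Refresh()
		case <-stop:
			return
		}
	}
}

type noopRing struct{}

func (noopRing) Refresh() { /* would re-resolve DNS and rebuild the ring */ }

func main() {
	stop := make(chan struct{})
	go monitor(noopRing{}, 10*time.Second, stop)
	time.Sleep(50 * time.Millisecond)
	close(stop)
}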

Don't know how to download or upload files via port 14000

Hi, I am a student interested in this project. I have already built a simple cluster on my Mac with make devcluster and it runs well, but I don't know how to download or upload normal files via host.docker.internal:14000 as mentioned in the README. Should I use wget or curl? Every time I try to access this address it returns "404 page not found".
Looking forward to your reply, thanks 😄

make devcluster fails with go 1.11.4 or 1.12

I checked out master (d86f045), but it failed to build. There seem to be multiple code bugs.

chris:kraken chris$ rm -Rf vendor/
chris:kraken chris$ make devcluster
GO111MODULE=on go mod vendor
docker run --rm -it -v /Users/chris/go:/go -w /go/src/github.com/uber/kraken golang:1.11.4 go build -o ./agent/agent ./agent/
# github.com/uber/kraken/lib/middleware
lib/middleware/middleware.go:45:40: cannot use ctx.RoutePattern (type func() string) as type string in argument to strings.Split
# github.com/uber/kraken/lib/dockerregistry
lib/dockerregistry/blobs.go:55:26: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/blobs.go:86:28: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/blobs.go:90:32: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/blobs.go:100:6: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/blobs.go:129:20: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:157:46: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:178:42: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:196:46: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:219:42: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:244:40: undefined: "github.com/uber/kraken/vendor/github.com/docker/distribution/context".Context
lib/dockerregistry/storage_driver.go:244:40: too many errors
make: *** [agent/agent] Error 2
chris:kraken chris$ go version
go version go1.11.4 darwin/amd64

There seems to be a hang in the agent that lowers bandwidth utilization

As the title says. The test image is about 140M (compressed size) and has 14 layers; bandwidth is 50MB/s ingress and 40MB/s egress. One blob is about 50M. The agent tries to make connections until it reaches capacity, then spends about 30s completing the blob download.

The time is too long, and there seems to be a hang before the download starts. This needs deeper investigation.

Is there a "how to get started with kraken" guide?

I want to use kraken in my master thesis where I evaluate image distribution systems in a virtual network that I create with containernet.

However I am having difficulties setting up or rather even getting started with kraken.
I am not that experienced with image or file distribution systems and could really use a step-by-step guide for getting started with deploying kraken in a network.
I am already successfully using dragonfly and a "how to get started" guide for kraken like dragonfly's quick start and their user guide could be helpful for others too.

So far I did

  1. git clone https://github.com/uber/kraken.git

  2. cd kraken

  3. and then I am pretty much lost, since make devcluster does not work. I am working in a Linux environment and I already applied the change documented in #71.

I am thankful for any help I can get!
