
harbor-helm's People

Contributors

adibov, alex1989hu, amaari-stingray, chlins, cu12, danielpacak, darend, golgoth31, haminhcong, heww, jessehu, jkroepke, jsuchome, kariya-mitsuru, kimxogus, mcowger, mineryang, ninjadq, paulczar, paulfarver, peter-englmaier, ramrodo, reasonerjt, seyhbold, stefannica, vad1mo, wy65701436, ymstmsys, ywk253100, zyyw


harbor-helm's Issues

Harbor-UI failing on empty Clair DB connection string

I am trying to stand up Harbor's components for the first time in Kubernetes and am puzzled by an error harbor-ui is throwing. The logs essentially say the connection string is empty; I'm just not clear where I would change or add environment variables, and which ones. To make things a bit more unclear, I already have the admin server and Clair running in the cluster too.

I'm using the dev tag for all components except the UI and jobservice, for which I'm using v1.6.0.
The error:

2018-09-11T22:55:06Z [INFO] API controller for chart repository server is successfully initialized
2018-09-11T22:55:06Z [INFO] Policy scheduler start at 2018-09-11T22:55:06Z
[ORM]2018/09/11 22:55:06 register db Ping `clair-db`, pq: database "sslmode=disable" does not exist
2018-09-11T22:55:06Z [FATAL] [main.go:128]: failed to initialize clair database: register db Ping `clair-db`, pq: database "sslmode=disable" does not exist
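For reference, the UI builds the Clair connection string from the CLAIR_DB_* environment variables (the same ones that appear in the deployment manifests quoted further down this page). The "database \"sslmode=disable\" does not exist" message suggests the database-name variable was empty, so the sslmode fragment was parsed as the name. A hedged sketch of setting them explicitly; the deployment, service, and secret names below are assumptions for illustration:

# Sketch only: resource names are illustrative, not from this cluster.
kubectl set env deployment/harbor-ui \
  CLAIR_DB_HOST=harbor-clair-db \
  CLAIR_DB_PORT=5432 \
  CLAIR_DB_USERNAME=postgres \
  CLAIR_DB=clair \
  CLAIR_DB_SSLMODE=disable
# Pull the password out of a secret rather than hard-coding it:
kubectl set env deployment/harbor-ui \
  CLAIR_DB_PASSWORD="$(kubectl get secret harbor-config \
    -o jsonpath='{.data.harbor-clair-db-password}' | base64 -d)"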

http error: code 401, message Unauthorized

The harbor-ui pod reports the error:
2018-09-19T17:43:07Z [DEBUG] [init.go:37]: topic StartReplication is subscribed
2018-09-19T17:43:07Z [DEBUG] [init.go:37]: topic OnPush is subscribed
2018-09-19T17:43:07Z [DEBUG] [init.go:37]: topic OnDeletion is subscribed
2018-09-19T17:43:07Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: db_auth
2018-09-19T17:43:07Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: ldap_auth
2018-09-19T17:43:07Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: uaa_auth
2018-09-19T17:43:07Z [INFO] Config path: /etc/ui/app.conf
2018-09-19T17:43:07Z [INFO] initializing configurations...
2018-09-19T17:43:07Z [INFO] key path: /etc/ui/key
2018-09-19T17:43:07Z [INFO] initializing client for adminserver http://harbor-harbor-adminserver ...
2018-09-19T17:43:07Z [FATAL] [main.go:89]: failed to initialize configurations: http error: code 401, message Unauthorized

NAME READY STATUS RESTARTS AGE
harbor-harbor-adminserver-6c7bc494b-rfm2r 1/1 Running 1 1h
harbor-harbor-chartmuseum-7cb8894d7d-v7kdv 1/1 Running 1 1h
harbor-harbor-clair-7c7557f559-jwbkw 1/1 Running 3 1h
harbor-harbor-database-0 1/1 Running 1 1h
harbor-harbor-jobservice-5748658559-rwh75 1/1 Running 1 1h
harbor-harbor-notary-server-85c5c879b6-cdfcd 1/1 Running 1 1h
harbor-harbor-notary-signer-567ddfffd7-4gjg7 1/1 Running 1 1h
harbor-harbor-portal-7445c5ff84-mmcqw 1/1 Running 1 1h
harbor-harbor-registry-7c44589886-z4jp8 2/2 Running 2 1h
harbor-harbor-ui-cd775c58c-7vzgk 0/1 CrashLoopBackOff 16 51m

kubectl logs harbor-harbor-adminserver-6c7bc494b-rfm2r
2018-09-19T17:36:59Z [INFO] initializing system configurations...
2018-09-19T17:36:59Z [INFO] Registering database: type-PostgreSQL host-harbor-harbor-database port-5432 databse-registry sslmode-"disable"
2018-09-19T17:37:00Z [ERROR] [utils.go:101]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2018-09-19T17:37:04Z [ERROR] [utils.go:101]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2018-09-19T17:37:10Z [ERROR] [utils.go:101]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2018-09-19T17:37:17Z [INFO] Register database completed
2018-09-19T17:37:17Z [INFO] Upgrading schema for pgsql ...
2018-09-19T17:37:17Z [INFO] No change in schema, skip.
2018-09-19T17:37:17Z [INFO] the path of json configuration storage: /etc/adminserver/config/config.json
2018-09-19T17:37:17Z [INFO] the path of key used by key provider: /etc/adminserver/key
2018-09-19T17:37:18Z [INFO] system initialization completed
10.233.65.18 - - [19/Sep/2018:17:37:20 +0000] "GET /api/configurations HTTP/1.1" 401 13
10.233.66.37 - - [19/Sep/2018:17:37:27 +0000] "GET /api/configurations HTTP/1.1" 200 1728
10.233.65.18 - - [19/Sep/2018:17:37:36 +0000] "GET /api/configurations HTTP/1.1" 401 13
10.233.65.18 - - [19/Sep/2018:17:38:06 +0000] "GET /api/configurations HTTP/1.1" 401 13
10.233.65.18 - - [19/Sep/2018:17:38:50 +0000] "GET /api/configurations HTTP/1.1" 401 13
10.233.65.18 - - [19/Sep/2018:17:40:21 +0000] "GET /api/configurations HTTP/1.1" 401 13
10.233.65.18 - - [19/Sep/2018:17:43:07 +0000] "GET /api/configurations HTTP/1.1" 401 13
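For what it's worth, a 401 on GET /api/configurations usually points at a secret mismatch between the UI and the adminserver rather than a database problem. A quick hedged check, using the pod names from the listing above; if the two values differ, the adminserver will keep rejecting the UI (if the UI pod is crash-looping, inspect the secret it mounts instead):

kubectl exec harbor-harbor-ui-cd775c58c-7vzgk -- printenv UI_SECRET
kubectl exec harbor-harbor-adminserver-6c7bc494b-rfm2r -- printenv UI_SECRET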

Add support for LDAP Group Values

I noticed that the Web UI now has fields for LDAP group configuration, but I don't see any mention in the documentation of how to define those attributes in values.yaml.
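For illustration only, a stanza like the following is roughly what the request amounts to. These keys are hypothetical and do not exist in the chart today; they just mirror the kind of LDAP group settings the adminserver exposes in the UI:

# Hypothetical values.yaml fragment - the keys below are invented to
# illustrate the request; they are NOT supported by the chart yet.
cat <<'EOF' > ldap-group-values.yaml
ldap:
  groupBaseDN: "ou=groups,dc=example,dc=com"
  groupFilter: "objectclass=groupOfNames"
  groupGID: "cn"
  groupScope: 2
EOF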

I have an issue with a route.

When I run ./install.sh, the connection is closed by the foreign host:

[Step 4]: starting Harbor ...
Creating network "harbor_harbor" with the default driver

Socket error Event: 32 Error: 10053.
Connection closing...Socket close.

I found the problem: Harbor creates a route (172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-5413e0dc505d) and my internal network already uses 172.17.0.0/16, so they conflict. What should I change in Harbor?
Thanks for your answer!
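The 172.17.0.0/16 route comes from Docker's default bridge rather than from Harbor itself, so the fix lives on the Docker side. A hedged sketch that moves Docker onto different subnets (pick ranges that are free in your network; restarting the daemon interrupts running containers):

# /etc/docker/daemon.json: steer Docker away from 172.17.0.0/16.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
EOF
sudo systemctl restart docker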

DB password mismatch

I just inserted random passwords wherever "changeit" appeared, in order to secure the deployment, but now my admin server is erroring with:

2018-09-12T14:41:31Z [INFO] initializing system configurations...
2018-09-12T14:41:31Z [INFO] Registering database: type-PostgreSQL host-tim-harbor-database port-5432 databse-registry sslmode-"disable"
[ORM]2018/09/12 14:41:31 register db Ping `default`, pq: password authentication failed for user "postgres"
2018-09-12T14:41:31Z [FATAL] [main.go:46]: failed to initialize the system: register db Ping `default`, pq: password authentication failed for user "postgres"

and the postgres db is logging:

FATAL:  password authentication failed for user "postgres"
DETAIL:  Password does not match for user "postgres".
	Connection matched pg_hba.conf line 95: "host all all all md5"

I only edited values.yaml
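A likely cause, assuming the bundled PostgreSQL with a persistent volume: the password in values.yaml is only applied when the data directory is first initialized, so changing values.yaml after the PVC exists leaves the database on the old password. Two hedged ways out (the pod and PVC names below follow the statefulset naming seen in the logs and may differ in your release):

# Option 1: change the password inside the running database to match values.yaml.
kubectl exec -it tim-harbor-database-0 -- \
  psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'the-new-password';"
# Option 2 (destroys all data): remove the release and its PVC so the
# database re-initializes with the password from values.yaml.
helm delete --purge tim-harbor
kubectl delete pvc database-data-tim-harbor-database-0   # name is an assumption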

Help understanding the architecture

Hi, I was looking for some additional help with deploying Harbor via Helm. I'm getting hung up on the values that set up the Ingress and the ingress controller (NGINX), mainly the externalURL that values.yaml takes.
I'm unsure what to put into externalURL and how to make it (and the registry) publicly accessible. I've only done load-balanced services, so ingress is new to me. Can someone point me in the right direction? I.e., what would the externalURL be? (A domain name? The NGINX IP? The IP of the worker node the pod/service is on?)

Thanks.
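As a hedged sketch of the usual pattern (the domain is an example): externalURL is the address clients will use in browsers and docker login; that name should resolve to the ingress controller, and the chart's ingress host should match it.

# Example values; harbor.example.com is illustrative.
cat <<'EOF' > my-values.yaml
externalURL: https://harbor.example.com
ingress:
  hosts:
    core: harbor.example.com
EOF
helm install . --name harbor -f my-values.yaml
# DNS (or /etc/hosts while testing) should point harbor.example.com at the
# ingress controller's LoadBalancer IP, or at a node IP where it listens.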

Deploying with external postgres fails

I'm trying to deploy the chart with an external postgres database, and it makes all the services fail. (I'm using the 0.3.0 branch.)

I've created the databases following the files used by the official db docker image, found here: https://github.com/goharbor/harbor/tree/master/make/photon/db.

I'm getting the following logs:
adminserver: 2018-11-28T16:44:38Z [FATAL] [main.go:46]: failed to initialize the system: pq: pg_hba.conf rejects connection for host "100.96.4.47", user "postgres", database "registry", SSL off

clair: {"Event":"pgsql: could not open database: pq: pg_hba.conf rejects connection for host \"100.96.4.46\", user \"postgres\", database \"clair\", SSL off","Level":"fatal","Location":"main.go:96","Time":"2018-11-28 16:49:08.544334"}

notary-server: waiting for postgres://postgres:<correct-password>@<postgres-endpoint>:5432/notaryserver?sslmode=disable to come up.

notary-signer: waiting for postgres://postgres:<correct-password>@<postgres-endpoint>:5432/notaryserver?sslmode=disable to come up.

The connection strings are OK; I've tried them manually and I can connect. Is there any preparation I have to do on the postgres databases?
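The "SSL off" in every message suggests the components connect with sslmode=disable while the server's pg_hba.conf demands SSL. A hedged sketch of the relevant values; only the sslmode line is the point, so double-check values.yaml on the 0.3.0 branch for the exact keys that select the external database:

cat <<'EOF' > external-db-values.yaml
database:
  external:
    host: "<postgres-endpoint>"
    port: "5432"
    username: "postgres"
    password: "<correct-password>"
    sslmode: "require"
EOF
helm install . --name harbor -f external-db-values.yaml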

LDAP and Email settings are not available in the chart

From an operator and system administrator point of view, being able to deploy a product from scratch, 100% automated, is a must-have.
Choosing to remove the "users" settings from the deployment chart forces an otherwise fully automated flow to be finished manually.
This choice therefore violates the Kubernetes paradigm of making application deployments fully automated and reproducible.

How to push docker image to harbor registry?

Hi everyone:
My Harbor deployment is complete, with the services exposed through an ingress. externalURL is:

externalURL: http://harbor.liudz.net

"docker login harbor.liudz.net" succeeds, but pushing an image always fails with the following error:

[root@localhost harbor-helm]# ls
[root@localhost harbor-helm]# docker push harbor.liudz.net/test/busybox:v1
The push refers to a repository [harbor.liudz.net/test/busybox]
0610f7f0e378: Preparing 
Error: Status 404 trying to push repository test/busybox: "default backend - 404"

Please help: how do I push my local Docker image?
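"default backend - 404" is the ingress controller's reply when no rule matches a request, so the push is never reaching Harbor. A hedged way to check what the ingress actually routes (the controller IP is a placeholder):

kubectl get ingress --all-namespaces
# Probe the /v2/ endpoint the docker client hits, forcing the Host header:
curl -v -H "Host: harbor.liudz.net" http://<ingress-controller-ip>/v2/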

Provide "Registry GC" function to help registry's GC

Referring to Deleting repositories: why do I have to shut down Harbor for the registry's garbage collection (GC)?

I suggest improving this as follows (a sketch of the manual command appears after this list):

  1. Put a "Registry GC" button on Harbor's administration UI
  2. When the system administrator clicks the button, pop up a prompt describing the upcoming GC operation
  3. If the administrator clicks the confirmation button, continue
  4. Temporarily disable image pushes globally
  5. Run the registry's garbage-collect command
  6. Once it finishes, re-enable image pushes globally
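For reference, this can already be done by hand inside the registry container. A sketch, where the pod name, container name, and config path are assumptions based on the chart's usual layout:

# Quiescing pushes while this runs is still recommended.
kubectl exec <registry-pod-name> -c registry -- \
  registry garbage-collect /etc/registry/config.yml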

jobservice keeps restarting?

The jobservice keeps restarting. Storage is provided by a StorageClass, and all the other services are fine. Here are the jobservice logs:

2018-10-23T04:38:47Z [ERROR] [service_logger.go:63]: Job context initialization error: http error: code 405, message 
2018-10-23T04:38:47Z [INFO] Retry in 9 seconds
2018-10-23T04:38:56Z [ERROR] [service_logger.go:63]: Job context initialization error: http error: code 405, message 
2018-10-23T04:38:56Z [INFO] Retry in 13 seconds
2018-10-23T04:39:09Z [ERROR] [service_logger.go:63]: Job context initialization error: http error: code 405, message 
2018-10-23T04:39:09Z [INFO] Retry in 19 seconds
2018-10-23T04:39:28Z [ERROR] [service_logger.go:63]: Job context initialization error: http error: code 405, message 
2018-10-23T04:39:28Z [INFO] Retry in 29 seconds
2018-10-23T04:39:57Z [ERROR] [service_logger.go:63]: Job context initialization error: http error: code 405, message 
2018-10-23T04:39:57Z [FATAL] [service_logger.go:73]: Failed to initialize job context: job context initialization error: http error: code 405, message  (5 times tried)

Database unable to recover from pod recreation

It appears the recent subPath changes made the database unable to recover from pod recreation, i.e. reattaching to an existing PVC.

here1
hehe2
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: C
  MONETARY: C
  NUMERIC:  C
  TIME:     C
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: directory "/var/lib/postgresql/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/data" or run initdb
with an argument other than "/var/lib/postgresql/data".

This is using the latest commit on master.

Note: this cluster was created fresh using the latest commit on master, I did not try to upgrade from a pre-subpath release.

NFS permissions

harbor 1.6.0
goharbor/harbor-db:dev
Using NFS to provide storage for Harbor, the pods cannot change file ownership.
harbor-database
harbor-chartmuseum
harbor-registry
all report:
chown: changing ownership of 'X/X/X/X/X/X/X': Operation not permitted
I suspect the image is built incorrectly, but I'm not skilled enough to analyze further; it may also be a false alarm. Has anyone else here installed Harbor with Helm using NFS-backed storage?
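A hedged note: chown failing on NFS is usually the export squashing root, not a broken image. An export that permits the containers' chown might look like this on the NFS server (path and CIDR are examples; weigh the security trade-off of no_root_squash first):

# /etc/exports on the NFS server
/srv/nfs/harbor 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
# then re-export:
sudo exportfs -ra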

harbor-ui: "failed to authenticate harbor: Failed to authenticate user:

Full disclosure: I'm not using Helm but am trying to adapt this repo to how we deploy to Kubernetes, using the 0.3 tag as a guide. I'm relying on kube-dns for inter-component connectivity.

I am very close, but a key piece isn't working for me: Harbor actually connecting to the registry pod. I also run my own nginx pod to allow connections to the UI from outside the cluster, which appears to be complicating my setup. I'm just looking for some guidance on these items, because I've re-read the documentation several times and am missing something.

Logs from the UI container starting; three things jump out at me that I'm not sure where to correct:

Start ui
2018-09-24T23:18:09Z [INFO] Config path: /etc/ui/app.conf
2018-09-24T23:18:09Z [INFO] initializing configurations...
2018-09-24T23:18:09Z [INFO] key path: /etc/ui/key
2018-09-24T23:18:09Z [INFO] initializing client for adminserver http://harbor-adminserver.svc.cluster.local:8080 ...
2018-09-24T23:18:09Z [INFO] initializing the project manager based on local database...
2018-09-24T23:18:09Z [INFO] configurations initialization completed
2018-09-24T23:18:09Z [INFO] Registering database: type-PostgreSQL harbor.rds.amazonaws.com port-5432 databse-harbor sslmode-"disable"
2018-09-24T23:18:09Z [INFO] Register database completed
2018-09-24T23:18:09Z [INFO] User id: 1 already has its encrypted password.
2018-09-24T23:18:09Z [INFO] Enable redis cache for chart caching
2018-09-24T23:18:09Z [INFO] API controller for chart repository server is successfully initialized
2018-09-24T23:18:09Z [INFO] Policy scheduler start at 2018-09-24T23:18:09Z
2018-09-24T23:18:09Z [INFO] initialized clair database
2018-09-24T23:18:09Z [INFO] Handle notification with topic 'scan_all_policy': notifier.ScanPolicyNotification{Type:"daily", DailyTime:0}
2018-09-24T23:18:09Z [INFO] Policy Alternate Policy is scheduled
2018-09-24T23:18:09Z [INFO] Policies:1, Tasks:0, CompletedTasks:0, FailedTasks:0
2018-09-24T23:18:09Z [INFO] Waiting for -83880 seconds after comparing offset 0 and utc time 83880
2018-09-24T23:18:09Z [INFO] Start syncing repositories from registry to DB... 
2018-09-24T23:18:09Z [ERROR] [utils.go:45]: 401 {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Class":"","Name":"catalog","Action":"*"}]}]}
2018-09-24T23:18:09Z [ERROR] [main.go:168]: 401 {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Class":"","Name":"catalog","Action":"*"}]}]}
2018-09-24T23:18:09Z [INFO] Init proxy
2018/09/24 23:18:09 [I] [asm_amd64.s:2337] http server Running on http://:8080
2018-09-24T23:20:00Z [WARNING] Not found an entry.
2018-09-24T23:20:02Z [ERROR] [security.go:211]: failed to authenticate harbor: Failed to authenticate user, due to error 'Not found an entry'
2018-09-24T23:20:02Z [WARNING] Not found an entry.
2018-09-24T23:20:04Z [ERROR] [security.go:211]: failed to authenticate harbor: Failed to authenticate user, due to error 'Not found an entry'
  1. 401 on connection to the registry pod
  2. Running on http://:8080: it seems like the UI DNS name needs to be specified
  3. failed to authenticate harbor: Failed to authenticate user, due to error 'Not found an entry'

Here is my service and deployment file for ui

---
apiVersion: v1
kind: Service
metadata:
  name: "harbor-ui"
  namespace: ournamespace
  labels:
    name: "harbor-ui"
    dns_zone: "{{ dns_zone }}"
spec:
  ports:
    - name: http
      port: 8080
      nodePort: 0
      protocol: TCP
  type: NodePort
  selector:
    name: "harbor-ui"
    sha: "{{ docker_version[0:7] }}"

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "harbor-ui-{{ docker_version[0:7] }}"
  namespace: ournamespace
  labels: 
    app: harbor-ui
    sha: "{{ docker_version[0:7] }}"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: "harbor-ui"
        sha: "{{ docker_version[0:7] }}"
    spec:
      containers:
        - name: "harbor-ui"
          image: "ourreg/harbor-ui:{{ docker_version }}"
          env:
            - name: UI_SECRET
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-ui-secret
            - name: JOBSERVICE_SECRET
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-jobservice-secret
            - name: _REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-redis-url
            - name: GODEBUG
              value: netdns=cgo
            - name: LOG_LEVEL
              value: info
            - name: CONFIG_PATH
              value: /etc/ui/app.conf
            - name: ENABLE_HARBOR_SCAN_ON_PUSH
              value: "1"
            - name: ADMINSERVER_URL
              value: "http://harbor-adminserver.svc.cluster.local:8080"
            - name: CHART_CACHE_DRIVER
              value: "redis"
            - name: secretKey
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-secretKey
            - name: secret
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-secret
            - name: tokenServiceRootCertBundle
              valueFrom:
                secretKeyRef:
                  name: harbor-certs
                  key: root.crt
            - name: tokenServicePrivateKey
              valueFrom:
                secretKeyRef:
                  name: harbor-certs
                  key: private_key.pem
            - name: WITH_CLAIR
              value: "true"
            - name: CLAIR_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-clair-pgsql-host
            - name: CLAIR_DB_PORT
              value:  "5432"
            - name: CLAIR_DB_USERNAME
              value: "harborclair"
            - name: CLAIR_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-clair-db-password
            - name: CLAIR_DB
              value: "harborclair"
            - name: CLAIR_URL
              value: "http://harbor-clair.svc.cluster.local:6060"
            - name: CLAIR_DB_SSLMODE
              value: "disable"
            - name: SYNC_REGISTRY
              value: "true"
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: harbor-certs
              mountPath: /ssl/
            - name: harbor-config
              mountPath: /etc/ui/config
      imagePullSecrets:
        - name: portus-pull-secret
      volumes:
        - name: harbor-config
          secret:
            secretName: harbor-config
        - name: harbor-certs
          secret:
            secretName: harbor-certs

Here is my service and deployment file for the admin server; I'm relying on kube secrets to supply the sensitive information.

---
kind: Service
apiVersion: v1
metadata:
  name: "harbor-adminserver"
  namespace: ournamespace
  labels:
    name: "harbor-adminserver"
    dns_zone: "{{ dns_zone }}"
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
  type: NodePort
  selector:
    name: "harbor-adminserver"
    sha: "{{ docker_version[0:7] }}"

---

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "harbor-adminserver-{{ docker_version[0:7] }}"
  namespace: ournamespace
  labels: 
    app: harbor-adminserver
    sha: "{{ docker_version[0:7] }}"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: "harbor-adminserver"
        sha: "{{ docker_version[0:7] }}"
    spec:
      containers:
        - name: "harbor-adminserver"
          image: "ourreg/harbor-adminserver:{{ docker_version }}"
          env:
            - name: secretKey
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-secretKey
            - name: EMAIL_PWD
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-email-pwd
            - name: HARBOR_ADMIN_PASSWORD
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-admin-password
            - name: JOBSERVICE_SECRET
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-jobservice-secret
            - name: UI_SECRET
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-ui-secret
            - name: POSTGRESQL_HOST
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-pgsql-host
            - name: POSTGRESQL_PASSWORD
              valueFrom:
                  secretKeyRef:
                    name: harbor-config
                    key: harbor-pgsql-password
            - name: POSTGRESQL_SSLMODE
              value: "disable"
            - name: POSTGRESQL_PORT
              value: "5432"
            - name: POSTGRESQL_USERNAME
              value: "harbor"
            - name: POSTGRESQL_DATABASE
              value: "harbor"
            - name: EMAIL_HOST
              value: "ourmailserver"
            - name: EMAIL_PORT
              value: "587"
            - name: EMAIL_USR
              value: "ouruser"
            - name: EMAIL_SSL
              value: "true"
            - name: EMAIL_FROM
              value: "[email protected]"
            - name: EMAIL_INSECURE
              value: "false"
            - name: EXT_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: https-harbor-dns-value
            - name: UI_URL
              value: "http://harbor-ui.svc.cluster.local:8080"
            - name: JOBSERVICE_URL
              value: "http://harbor-jobservice.svc.cluster.local:8080"
            - name: REGISTRY_URL
              value: "http://harbor-registry.svc.cluster.local:5000"
            - name: TOKEN_SERVICE_URL
              value: "http://harbor-ui.svc.cluster.local:8080/service/token"
            - name: WITH_NOTARY
              value: "false"
            - name: NOTARY_URL
              value: "http://notary-server.svc.cluster.local:4443"
            - name: LOG_LEVEL
              value: "info"
            - name: IMAGE_STORE_PATH
              value: "/" # This is a temporary hack.
            - name: AUTH_MODE
              value: "ldap_auth"
            - name: SELF_REGISTRATION 
              value: "on"
            - name: LDAP_URL
              value: "ldaps://ourldapserver"
            - name: LDAP_SEARCH_DN
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: "harbor-ldap-search-dn"
            - name: LDAP_SEARCH_PWD
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-ldap-search-pwd
            - name: LDAP_BASE_DN
              value: "ourbasedn"
            - name: LDAP_FILTER 
              value: ""
            - name: LDAP_UID
              value: "cn"
            - name: LDAP_SCOPE
              value: "2"
            - name: LDAP_TIMEOUT
              value: "5"
            - name: LDAP_VERIFY_CERT
              value: "false"
            - name: DATABASE_TYPE
              value: "postgresql"
            - name: PROJECT_CREATION_RESTRICTION
              value: "everyone"
            - name: VERIFY_REMOTE_CERT
              value: "off"
            - name: MAX_JOB_WORKERS
              value: "3"
            - name: TOKEN_EXPIRATION
              value: "30"
            - name: CFG_EXPIRATION
              value: "5"
            - name: GODEBUG
              value: "netdns=cgo"
            - name: ADMIRAL_URL
              value: "NA"
            - name: RESET 
              value: "false"
            - name: WITH_CLAIR
              value: "true"
            - name: CLAIR_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-clair-pgsql-host
            - name: CLAIR_DB_PORT
              value:  "5432"
            - name: CLAIR_DB_USERNAME
              value: "harborclair"
            - name: CLAIR_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: harbor-clair-db-password
            - name: CLAIR_DB
              value: "harborclair"
            - name: CLAIR_URL
              value: "http://harbor-clair.svc.cluster.local:6060"
            - name: CLAIR_DB_SSLMODE
              value: "disable"
            - name: REGISTRY_STORAGE_PROVIDER_NAME
              value: "s3"
            - name: WITH_CHARTMUSEUM
              value: "true"
            - name: CHART_REPOSITORY_URL
              valueFrom:
                secretKeyRef:
                  name: harbor-config
                  key: chartmuseum-dns-value
            - name: PORT
              value: "8080"
            - name: JSON_CFG_STORE_PATH
              value: /etc/adminserver/config/config.json
            - name: KEY_PATH
              value: /etc/adminserver/secrets/harbor-secretKey
          ports:
            - containerPort: 8080
          volumeMounts:
          - name: harbor-certs
            mountPath: /ssl/
          - name: harbor-config
            mountPath: /etc/adminserver/secrets
      imagePullSecrets:
        - name: portus-pull-secret
      volumes:
        - name: harbor-config
          secret:
            secretName: harbor-config
        - name: harbor-certs
          secret:
            secretName: harbor-certs

Here is my proxy harbor.conf (we have a base nginx.conf, not included here):

# this is necessary for us to be able to disable request buffering in all cases
proxy_http_version 1.1;

upstream ui {
  server harbor-ui.svc.cluster.local:8080;
}

upstream registry {
  least_conn;
  server harbor-registry.svc.cluster.local:5000;
}

log_format timed_combined '$remote_addr - '
  '"$request" $status $body_bytes_sent '
  '"$http_referer" "$http_user_agent" '
  '$request_time $upstream_response_time $pipe';
access_log /dev/stdout timed_combined;

server {
  listen 443 ssl;
  server_name HARBOR_DNS_VALUE;
  server_tokens off;
  # SSL
  ssl_certificate /etc/ssl/harbornginx/harbor.crt;
  ssl_certificate_key /etc/ssl/harbornginx/harbor.key;

  # Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
  ssl_protocols TLSv1.1 TLSv1.2;
  ssl_ciphers '!aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES:';
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  # disable any limits to avoid HTTP 413 for large image uploads
  client_max_body_size 0;

  # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
  chunked_transfer_encoding on;
  location / {
    proxy_pass http://ui/;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header Host               $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP          $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;

    # Add Secure flag when serving HTTPS
    proxy_cookie_path / "/; secure";
    proxy_read_timeout                  900;
    proxy_buffering                     on;
  }

  location /api/ {
    proxy_pass http://ui/api/;
    proxy_set_header                X-Forwarded-Ssl     on;
    proxy_set_header Host               $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP          $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_read_timeout                  900;
    proxy_buffering                     on;
  }
  location ~ ^/(login|log_out|sendEmail|language|reset|userExists|reset_password|chartrepo) {
    proxy_pass http://ui;
    proxy_set_header                X-Forwarded-Ssl     on;
    proxy_set_header Host               $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP          $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_read_timeout                  900;
    proxy_buffering                     on;
  }
  location /v1/ {
    return 404;
  }
  location /v2/ {
    proxy_pass http://registry/;
    proxy_set_header Host               $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP          $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_read_timeout                  900;
    proxy_buffering                     on;
    proxy_redirect  http://registry/v2/ https://HARBOR_DNS_VALUE/v2/;
          
  }

  location /service/ {
    proxy_pass http://ui/service/;
    proxy_set_header                X-Forwarded-Ssl     on;
    proxy_set_header Host               $http_host;   # required for docker client's sake
    proxy_set_header X-Real-IP          $remote_addr; # pass on real client's IP
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
    proxy_read_timeout                  900;
    proxy_buffering                     on;
  }		
  location /service/notifications {
    return 404;
  }
}
  server {
    listen 80;
    server_name HARBOR_DNS_VALUE;
    return 301 https://$host$request_uri;
} 

The issue I'm having with the proxy is that it allows me to successfully run docker login harbor.dns.com, but then pushes and pulls do not work (404s). The configuration is also incorrect in that it makes harbor.dns.com/v2/v2/ the path required to actually reach the registry API.

I'd appreciate any pointers; I've struggled with this for a few days.
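One hedged observation about the nginx config above: in location /v2/ the directive proxy_pass http://registry/; has a trailing slash, so nginx replaces the matched /v2/ prefix with / before proxying; the registry's API lives under /v2/, which is exactly why only harbor.dns.com/v2/v2/ reaches it. Dropping the trailing slash forwards the original URI unchanged:

  location /v2/ {
    # no trailing slash on proxy_pass: /v2/... is forwarded as-is
    proxy_pass http://registry;
    proxy_set_header Host               $http_host;
    proxy_set_header X-Real-IP          $remote_addr;
    proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto  https;
  }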

Error: transport is closing

I am trying to set up Harbor from the 0.3.0 branch, but I hit a problem:

[root@k8s-master01 harbor-helm]# helm install --name harbor-v1 .  --wait --timeout 1500 --debug --namespace kube-system
[debug] Created tunnel using local port: '32771'

[debug] SERVER: "127.0.0.1:32771"

[debug] Original chart version: ""
[debug] CHART PATH: /root/harbor-helm

E1116 02:07:15.077032   22003 portforward.go:178] lost connection to pod
Error: transport is closing

helm version:

[root@k8s-master01 harbor-helm]# helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Is this a branch problem?
Can I deploy Harbor using the master branch?
Thank you very much for your reply.

The auto-generated service name is too long

Failed to provision volume with StorageClass "gfs": failed to create volume: failed to create endpoint/service default/glusterfs-dynamic-adminserver-config-harbor-harbor-adminserver-0: error creating service: Service "glusterfs-dynamic-adminserver-config-harbor-harbor-adminserver-0" is invalid: metadata.name: Invalid value: "glusterfs-dynamic-adminserver-config-harbor-harbor-adminserver-0": must be no more than 63 characters
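The generated name concatenates the provisioner prefix, the claim name, and the release and chart names, so the easiest hedged workaround is a much shorter release name:

# A short release name keeps the derived PVC/endpoint names under 63 chars.
helm install . --name hb --namespace harbor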

Storage counter on main screen reading root instead of images folder

Hello
A small request, low priority :)

Using Helm chart v0.3.0

On the main Harbor screen (after login), there is a Storage counter with a colored arc, a big number, and an "xx GB Limit".

I think the "xx GB Limit" is being read from the container's root folder /, instead of the image storage folder /var/lib/registry.

This can be misleading: the actual image folder /var/lib/registry is mounted from a PVC, so the storage limit and usage won't show up properly.

Thanks!

How to set up harbor time zone?

How do I set Harbor's time zone to Asia/Shanghai?
I tried to set it via env, but it did not work:

env:
  - name: TZ
    value: Asia/Shanghai

Run images as non-root

We're not allowed to run images as root on our k8s cluster. Docker best practice is to not use root images.

Is the root user really required?

Max

Visit the home page to display 404 not found

operating system

CentOS Linux release 7.5.1804 (Core)

docker

Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-74.git6e3bb8e.el7.centos.x86_64
Go version: go1.9.4
Git commit: 6e3bb8e/1.13.1
Built: Tue Aug 21 15:23:37 2018
OS/Arch: linux/amd64

Server

Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-74.git6e3bb8e.el7.centos.x86_64
Go version: go1.9.4
Git commit: 6e3bb8e/1.13.1
Built: Tue Aug 21 15:23:37 2018
OS/Arch: linux/amd64
Experimental: false

helm

Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

openshift

3.10.0

Problem description

All containers start up normally, but accessing the home page returns a 404. The ui log contains one error line.

ui:

2018-09-10T12:48:01Z [DEBUG] [init.go:37]: topic StartReplication is subscribed
2018-09-10T12:48:01Z [DEBUG] [init.go:37]: topic OnPush is subscribed
2018-09-10T12:48:01Z [DEBUG] [init.go:37]: topic OnDeletion is subscribed
2018-09-10T12:48:01Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: db_auth
2018-09-10T12:48:01Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: ldap_auth
2018-09-10T12:48:01Z [DEBUG] [authenticator.go:126]: Registered authencation helper for auth mode: uaa_auth
2018-09-10T12:48:01Z [INFO] Config path: /etc/ui/app.conf
2018-09-10T12:48:01Z [INFO] initializing configurations...
2018-09-10T12:48:01Z [INFO] key path: /etc/ui/key
2018-09-10T12:48:01Z [INFO] initializing client for adminserver http://harbor-harbor-adminserver ...
2018-09-10T12:48:01Z [INFO] initializing the project manager based on local database...
2018-09-10T12:48:01Z [INFO] configurations initialization completed
2018-09-10T12:48:01Z [INFO] Registering database: type-PostgreSQL host-172.30.124.16 port-5432 databse-registry sslmode-"disable"
2018-09-10T12:48:01Z [INFO] Register database completed
2018-09-10T12:48:01Z [INFO] User id: 1 already has its encrypted password.
2018-09-10T12:48:01Z [INFO] Enable redis cache for chart caching
2018-09-10T12:48:01Z [DEBUG] [chart_repository.go:435]: Chart storage server is set to http://harbor-harbor-chartmuseum
2018-09-10T12:48:01Z [INFO] API controller for chart repository server is successfully initialized
2018-09-10T12:48:01Z [INFO] Policy scheduler start at 2018-09-10T12:48:01Z
2018-09-10T12:48:01Z [INFO] initialized clair database
2018-09-10T12:48:01Z [INFO] Handle notification with topic 'scan_all_policy': notifier.ScanPolicyNotification{Type:"daily", DailyTime:0}
2018-09-10T12:48:01Z [INFO] Policy Alternate Policy is scheduled
2018-09-10T12:48:01Z [INFO] Policies:1, Tasks:0, CompletedTasks:0, FailedTasks:0
2018-09-10T12:48:01Z [INFO] Waiting for -46080 seconds after comparing offset 0 and utc time 46080
2018-09-10T12:48:01Z [ERROR] [main.go:162]: Failed to parse SYNC_REGISTRY: strconv.ParseBool: parsing "": invalid syntax
2018-09-10T12:48:01Z [INFO] Because SYNC_REGISTRY set false , no need to sync registry
2018-09-10T12:48:01Z [INFO] Init proxy
2018/09/10 12:48:01 [I] [asm_amd64.s:2337] http server Running on http://:8080

adminserver

2018-09-10T12:47:32Z [INFO] initializing system configurations...
2018-09-10T12:47:32Z [INFO] Registering database: type-PostgreSQL host-172.30.124.16 port-5432 databse-registry sslmode-"disable"
2018-09-10T12:47:32Z [INFO] Register database completed
2018-09-10T12:47:32Z [INFO] Upgrading schema for pgsql ...
2018-09-10T12:47:34Z [INFO] the path of json configuration storage: /etc/adminserver/config/config.json
2018-09-10T12:47:34Z [INFO] the path of key used by key provider: /etc/adminserver/key
2018-09-10T12:47:34Z [INFO] system initialization completed
10.131.0.1 - - [10/Sep/2018:12:47:37 +0000] "GET /api/configurations HTTP/1.1" 200 1764
10.131.0.1 - - [10/Sep/2018:12:47:43 +0000] "GET /api/configurations HTTP/1.1" 200 1764
10.131.0.1 - - [10/Sep/2018:12:47:44 +0000] "GET /api/configurations HTTP/1.1" 200 1764
10.131.0.1 - - [10/Sep/2018:12:48:01 +0000] "GET /api/configurations HTTP/1.1" 200 1764
10.131.0.1 - - [10/Sep/2018:12:49:22 +0000] "GET /api/configurations HTTP/1.1" 200 1764
10.131.0.1 - - [10/Sep/2018:12:50:09 +0000] "GET /api/configurations HTTP/1.1" 200 1764

registry

time="2018-09-10T12:47:35.983596138Z" level=info msg="debug server listening localhost:5001"
time="2018-09-10T12:47:35.985067135Z" level=info msg="configuring endpoint harbor (http://harbor-harbor-ui/service/notifications), timeout=3s, headers=map[]" go.version=go1.7.3 instance.id=1e597611-7150-4d0f-9225-cf91c8a8e804 service=registry version=v2.6.2
time="2018-09-10T12:47:36.037144366Z" level=info msg="using redis blob descriptor cache" go.version=go1.7.3 instance.id=1e597611-7150-4d0f-9225-cf91c8a8e804 service=registry version=v2.6.2
time="2018-09-10T12:47:36.037733838Z" level=info msg="listening on [::]:5000" go.version=go1.7.3 instance.id=1e597611-7150-4d0f-9225-cf91c8a8e804 service=registry version=v2.6.2

Invalid user name or password

After helm install XXX and waiting for all pods to be running,
I entered the portal using the default username/password (admin/Harbor12345) and got "Invalid user name or password".

I have no idea why I cannot log in.

Note: I checked that Harbor12345 is defined in the secret
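One hedged possibility: the admin password from the chart is written to the database only on first initialization, so if a database PVC from an earlier install (with a different password) is being reused, the old password still applies. A quick check; the secret and key names below are the chart's defaults at the time of writing and may differ in your release:

kubectl get secret harbor-harbor-adminserver \
  -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 -d
# If an old database PVC is being reused, either log in with the previous
# password or delete the PVC and reinstall so the database re-initializes.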

How to access Harbor after deploying to k8s

hi all,

how do we access Harbor after we have deployed it to k8s?

Sorry for this question; I'm just wondering how to access it. I tried changing the web portal service to NodePort, but the portal shows an error: "Unknown errors have occurred. Please try again later."

Any help?

LDAP and non LDAP users

I have connected my Harbor install to our LDAP and it works great, but we would also like some service accounts that live only in Harbor, not in LDAP; the admin account, for instance, does not authenticate via LDAP. Is this possible?

Harbor shows the wrong used/available storage stat

After doing some digging, I see Harbor calls the admin server to retrieve this stat. That works fine when the Harbor components are deployed on the same physical machine; but seeing as this is a Helm chart, it reports the wrong usage. This could be solved by putting the adminserver in the same pod as the registry, along with setting the env var IMAGE_STORE_PATH correctly.

Ingress host problem

In the old version from three months ago, I used the following command:
helm install . --debug --name my-harbor --namespace harbor --set externalDomain=harbor.aa.com.cn
and could access Harbor's web UI at harbor.aa.com.cn. In the new version I use:
helm install . --debug --name my-harbor --namespace harbor --set externalURL=http://harbor.aa.com.cn -f ./values1.yaml
but harbor.aa.com.cn no longer opens the page. Checking with kubectl get ing -n harbor, the ingress hosts still show the old domain.
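In newer chart versions, externalURL no longer drives the ingress host; the host rules are set separately. A hedged example using the same flags that appear in another issue on this page:

helm install . --debug --name my-harbor --namespace harbor \
  --set externalURL=http://harbor.aa.com.cn \
  --set ingress.hosts.core=harbor.aa.com.cn \
  --set ingress.hosts.notary=notary.aa.com.cn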

jobservice status is CrashLoopBackOff, always restarting: Failed to load and run worker pool: connect to redis server timeout

I am trying to deploy Harbor from the 0.3.0 branch, but the jobservice always restarts and harbor-ui cannot connect to the internal Redis, like this:

2018-11-16T15:57:24Z [FATAL] [service_logger.go:73]: Failed to load and run worker pool: connect to redis server timeout: dial tcp 10.96.137.238:0: i/o timeout

[root@k8s-master01 harbor-helm]# kubectl get pod -n harbor
NAME                                              READY     STATUS             RESTARTS   AGE
harbor-v1-harbor-adminserver-5b59c684b4-rwgvs     1/1       Running            2          17m
harbor-v1-harbor-chartmuseum-699cf6599-4bb5p      1/1       Running            0          17m
harbor-v1-harbor-clair-6d9bb84485-c829n           1/1       Running            2          17m
harbor-v1-harbor-database-0                       1/1       Running            0          17m
harbor-v1-harbor-jobservice-5c9496775d-8nfkp      0/1       CrashLoopBackOff   6          17m
harbor-v1-harbor-notary-server-5fb65b6866-6scfj   1/1       Running            0          17m
harbor-v1-harbor-notary-signer-5bfcfcd5cf-clslv   1/1       Running            0          17m
harbor-v1-harbor-registry-75c9b6b457-rzdd5        1/1       Running            0          17m
harbor-v1-harbor-ui-5974bd5549-kvcjk              1/1       Running            5          17m
harbor-v1-redis-84dffd8574-l6swk                  1/1       Running            0          17m

10.96.137.238 is the Redis ClusterIP; the ":0" suggests the Redis port is missing.

So I opened templates/_helpers.tpl and found:

{{/*
host:port,pool_size,password
100 is the default value of pool size
*/}}
{{- define "harbor.redisForUI" -}}
  {{- template "harbor.redis.host" . }}:{{ template "harbor.redis.port" . }},100,{{ template "harbor.redis.password" . }}
{{- end -}}

and:

{{- define "harbor.redis.host" -}}
  {{- if .Values.redis.external.enabled -}}
    {{- .Values.redis.external.host -}}
  {{- else -}}
    {{- .Release.Name }}-redis-master
  {{- end -}}
{{- end -}}

{{- define "harbor.redis.port" -}}
  {{- if .Values.redis.external.enabled -}}
    {{- .Values.redis.external.port -}}
  {{- else -}}
    {{- .Values.redis.master.port }}
  {{- end -}}
{{- end -}}

Finally, I opened values.yaml and found the default Redis configuration:

redis:
  # if external Redis is used, set "external.enabled" to "true"
  # and fill the connection informations in "external" section.
  # or the internal Redis will be used
  usePassword: false
  password: "changeit"
  cluster:
    enabled: false
  master:
    persistence:
      enabled: *persistence_enabled
      storageClass: "gluster-heketi"
      accessMode: ReadWriteOnce
      size: 1Gi
  persistence:
    # existingClaim:

but there is no redis.master.port. So I added port: 6379 to values.yaml:

redis:
  # if external Redis is used, set "external.enabled" to "true"
  # and fill the connection informations in "external" section.
  # or the internal Redis will be used
  usePassword: false
  password: "changeit"
  cluster:
    enabled: false
  master:
    port: 6379
    persistence:
      enabled: *persistence_enabled
      storageClass: "gluster-heketi"
      accessMode: ReadWriteOnce
      size: 1Gi
  persistence:
    # existingClaim:

and reinstalled Harbor:

[root@k8s-master01 harbor-helm]# helm del --purge harbor-v1
[root@k8s-master01 harbor-helm]# helm install --name harbor-v1 .   --wait --timeout 1500 --debug --namespace harbor

Harbor now works, and I can access the Harbor UI (screenshot attached).

I just want to ask: is this a bug?

GoHarbor UI: cannot see an image pushed using Skopeo

I pushed an image to the goharbor registry using Skopeo but cannot see it in the GoHarbor UI.

Note: I'm able to inspect the pushed image in the goharbor registry without any issues, using the command 'skopeo inspect IMAGE-NAME'.

Here is the command:
skopeo --debug copy --src-creds= --dest-creds= docker://registry-one/project/imageName:latest docker://goharbor-registry/project/imageName:latest

skopeo version 0.1.26

protocol version not supported.

I'm a newbie to Harbor. I installed it following the README, using the default values.yaml with ingress enabled. After installation everything seemed fine: all pods running.
But when I try to open https://core.harbor.domain, I get the error "the protocol version not supported", so there is some problem with HTTPS.
When I set:
tls: enabled: false
I can access the UI through http://core.harbor.domain.

What should I do?

Accessing HarborUI using NodePort indefinitely says "Loading"

I have deployed Harbor from the master branch with the service type set to NodePort:

helm install --name harb-imagerepo . -f values.yaml

with the following changes to values.yaml:

externalURL: https://xxx.domain/harbor --- an external load balancer (HAProxy) with SSL termination on it
ingress enabled: false
service.type: NodePort
secretKey: sixteen-digit-key
nginx.tls.enabled: false
Everything else is left at its default.
PVCs are provisioned by the underlying Ceph cluster. There are no
Below are more details:
Cluster info, the list of running pods, and the list of services were attached as screenshots.

Result of curl

$ curl 10.xxx.1xx.1xx:30002
<!doctype html>

<title>VMware</title>
Loading...
<script type="text/javascript" src="runtime.a66f828dca56eeb90e02.js"></script><script type="text/javascript" src="scripts.70b8b612aef2adc60282.js"></script><script type="text/javascript" src="main.e08f87e1596c0c50af39.js"></script>
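A hedged observation: externalURL here carries a sub-path (/harbor), while the chart generally assumes Harbor is served from the root of the URL; the portal's JS bundles are loaded relative to /, which matches a page stuck at "Loading...". Worth retrying with a root URL before digging deeper:

cat <<'EOF' > values-root-url.yaml
# Assumption: serve Harbor at the domain root instead of under /harbor.
externalURL: https://xxx.domain
EOF
helm upgrade harb-imagerepo . -f values-root-url.yaml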

Jobservice GC invalid redis URL scheme

When trying to perform GC, the jobservice produces the following log:

2018-10-23T07:41:04Z [INFO] the readonly request has been sent successfully
2018-10-23T07:41:04Z [INFO] start to run gc in job.
2018-10-23T07:41:04Z [ERROR] [job_logger.go:82]: failed to connect to redis invalid redis URL scheme: 
2018-10-23T07:41:04Z [INFO] the readonly request has been sent successfully

My generated configmap looks like this:

protocol: "http"
port: 8080
worker_pool:
  workers: 50
  backend: "redis"
  redis_pool:
    redis_url: "harbor-redis:6379/1"
    namespace: "harbor_job_service_namespace"
logger:
  path: "/var/log/jobs"
  level: "DEBUG"
  archive_period: 14
admin_server: "http://harbor-adminserver"

It seems like the redis URL is not being accepted. Is there maybe somewhere else the redis_url must be specified?
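A hedged reading of the error: the GC job wants a full URL including the redis:// scheme, and the generated redis_url has none, hence "invalid redis URL scheme". The corrected fragment would presumably look like:

worker_pool:
  workers: 50
  backend: "redis"
  redis_pool:
    # scheme added; host, port, and db index unchanged from the configmap above
    redis_url: "redis://harbor-redis:6379/1"
    namespace: "harbor_job_service_namespace"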

lost commit history during move to new repo

Hi,

I just found the new repo for Harbor's Helm chart and noticed the commit history has all been squashed away. It would have been nice to use git filter-branch --prune-empty --subdirectory-filter FOLDER-NAME BRANCH-NAME to split it out while keeping the commit history.

I also believe this makes the DCO for the original commit somewhat contentious, as it does not list all the contributors of the code in that commit.

There is probably not much that can be done; it's possible we could go back to the original repo, git filter-branch it, and re-apply the commits here. I see there's an [incomplete] list of contributors in a contributors file, which at least helps acknowledge the hard work done by the various contributors.

adminserver not working with external database with SSL

Using branch 0.3.0, as "stable" is empty and "master" is under heavy development.

Deploying the Helm chart with:
Helm v2.11.0
Kubernetes v1.11.3

Initial test with SSL disabled worked perfectly fine.
Then:
1- Edit values.yaml, updating database.external.sslmode to require
2- All pods (except for adminserver) start up fine.
3- Notary completes migrations, everything is looking good so far everywhere else
4- adminserver stays in CrashLoopBackOff

adminserver logs:

2018-10-16T22:56:19Z [INFO] initializing system configurations...
2018-10-16T22:56:19Z [FATAL] [main.go:46]: failed to initialize the system: pq: no pg_hba.conf entry for host "<ip address>", user "<username>", database "<core_database>", SSL off

As you can see from "SSL off" in the error, it looks like the adminserver is not applying the "SSL require" value properly (Notary does work with it). Checking the configmap harbor-harbor-adminserver, it does show the value: POSTGRESQL_SSLMODE: require

Also tried with prefer, same results for adminserver, and notary containers won't come up.

Please check, or let me know if I should use a different value (true, enable, enabled, etc.) in the adminserver configmap.

Thanks

P.S.
Is this related to goharbor/harbor#5843 ?
If yes, how can I get the fix into my helm chart?
Thanks again

When will this chart be available?

Hello.

This is just a simple question.
I think this chart is not stable now; when will it be stable?
Please share your plan and roadmap if you have one.

Thanks.

Jobservice CrashLoopBackOff

I tried to use this chart to install Harbor, but the jobservice pod cannot start up.

What is wrong with it?

Here is my harbor info:

[root@master harbor-helm]# kubectl -n harbor-dev get pvc
NAME                                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-dev-harbor-redis-0               Bound     pvc-29181d68-e3cb-11e8-86c1-fa163eb28327   1Gi        RWO            rbd            12m
database-data-harbor-dev-harbor-database-0   Bound     pvc-290d3930-e3cb-11e8-86c1-fa163eb28327   1Gi        RWO            rbd            12m
harbor-dev-harbor-chartmuseum                Bound     pvc-289b297a-e3cb-11e8-86c1-fa163eb28327   5Gi        RWO            rbd            12m
harbor-dev-harbor-jobservice                 Bound     pvc-289d7b74-e3cb-11e8-86c1-fa163eb28327   1Gi        RWO            rbd            12m
harbor-dev-harbor-registry                   Bound     pvc-289ef921-e3cb-11e8-86c1-fa163eb28327   50Gi       RWO            rbd            12m
[root@master harbor-helm]# kubectl -n harbor-dev get po 
NAME                                               READY     STATUS             RESTARTS   AGE
harbor-dev-harbor-adminserver-5f5bb8479-gtk6c      1/1       Running            3          12m
harbor-dev-harbor-chartmuseum-7865bbfc46-fkblk     1/1       Running            0          12m
harbor-dev-harbor-clair-568d6cc674-7dtsb           1/1       Running            2          12m
harbor-dev-harbor-core-6d6d659975-92gzp            1/1       Running            3          12m
harbor-dev-harbor-database-0                       1/1       Running            0          12m
harbor-dev-harbor-jobservice-845c69c5c-s5lcw       0/1       CrashLoopBackOff   6          12m
harbor-dev-harbor-notary-server-668cc9b786-khdkn   1/1       Running            0          12m
harbor-dev-harbor-notary-signer-657f4cbfdd-mnkhg   1/1       Running            0          12m
harbor-dev-harbor-portal-67c958994c-hr2ls          1/1       Running            0          12m
harbor-dev-harbor-redis-0                          1/1       Running            0          12m
harbor-dev-harbor-registry-68b5c6c88f-sfzmm        2/2       Running            0          12m

[root@master harbor-helm]# kubectl -n harbor-dev logs -f harbor-dev-harbor-jobservice-845c69c5c-s5lcw
panic: load configurations error: missing logger config of job service


goroutine 1 [running]:
main.main()
	/go/src/github.com/goharbor/harbor/src/jobservice/main.go:41 +0x265
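A hedged interpretation: the jobservice exits because its mounted config lacks the logger section this image expects; the pod list shows a harbor-core pod, i.e. a newer image generation, so a chart/image mismatch around the reworked jobservice logger config is plausible. Comparing the mounted config against what the image wants is a reasonable first step (the configmap name is an assumption):

kubectl -n harbor-dev get cm harbor-dev-harbor-jobservice -o yaml | grep -i -A4 logger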


Error: failed to prepare subPath for volumeMount "ui-config" of container "ui"

There is a similar issue, https://github.com/goharbor/harbor/issues/4496, where it was mentioned that this was fixed on the master branch. However, I have tried that and got a similar error.

Pods are in CreateContainerConfigError, and the pod events show an error about failing to prepare the subPath.

K8s version 1.9.4
Harbor-helm cloned from git clone on 07 Sep 2018.
Helm version 2.9.0
Added new ui.secrets during the helm install command; the other secrets were added similarly.

kubectl -n registry get po
NAME                                             READY     STATUS                       RESTARTS   AGE
registry-harbor-adminserver-56bdf58596-lv95s     0/1       CreateContainerConfigError   0          13m
registry-harbor-chartmuseum-56f6dbbfd8-jmmt7     1/1       Running                      0          13m
registry-harbor-clair-5c89b85f5f-2r92l           0/1       CreateContainerConfigError   0          13m
registry-harbor-database-0                       1/1       Running                      0          13m
registry-harbor-jobservice-6685ff9f44-5rzgq      0/1       CreateContainerConfigError   0          13m
registry-harbor-notary-server-6499cfb799-6lm8c   0/1       CreateContainerConfigError   0          13m
registry-harbor-notary-signer-68b48c78bf-dkxxv   1/1       Running                      0          13m
registry-harbor-registry-65b8458c75-w9s4t        0/1       CreateContainerConfigError   0          13m
registry-harbor-ui-7d8f58b85d-fqq5w              0/1       CreateContainerConfigError   0          13m
registry-redis-master-0                          1/1       Running                      0          13m

Here is the event output from describe for one pod:

Events:
  Type     Reason                 Age                 From                        Message
  ----     ------                 ----                ----                        -------
  Normal   Scheduled              16m                 default-scheduler           Successfully assigned registry-harbor-clair-5c89b85f5f-2r92l to node1
  Normal   SuccessfulMountVolume  12m                 kubelet, node1  MountVolume.SetUp succeeded for volume "clair-config"
  Normal   SuccessfulMountVolume  12m                 kubelet, node1  MountVolume.SetUp succeeded for volume "default-token-xvtr2"
  Warning  Failed                 10m (x11 over 12m)  kubelet, node1  Error: failed to prepare subPath for volumeMount "clair-config" of container "clair"
  Normal   Pulled                 7m (x25 over 12m)   kubelet, node1  Container image "goharbor/clair-photon:dev" already present on machine

Error: UPGRADE FAILED

# helm upgrade harbor --set externalURL=https://harbor.172.50.0.29.nip.io --set ingress.hosts.core=harbor.172.50.0.29.nip.io --set ingress.hosts.notary=notary.172.50.0.29.nip.io --namespace=harbor .
Error: UPGRADE FAILED: StatefulSet.apps "harbor-redis-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden. && StatefulSet.apps "harbor-harbor-database" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.
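Those StatefulSet fields are immutable, so helm upgrade cannot patch them in place. A common hedged workaround is to delete the StatefulSets while orphaning their pods and volumes, then re-run the same helm upgrade so they are recreated with the new spec:

# --cascade=false (newer kubectl: --cascade=orphan) leaves pods and PVCs intact.
kubectl -n harbor delete statefulset harbor-redis-master harbor-harbor-database --cascade=false
# then re-run the helm upgrade command above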

STORAGE: local" should probably be STORAGE: local

The chartmuseum pod failed; its logs show the error:

unsupported backend: local"

Checking harbor-helm/templates/chartmuseum/chartmuseum-cm.yaml, I found:

STORAGE: local"

which should probably be:

STORAGE: local

After changing it and redeploying, chartmuseum starts successfully.

How to access the registry using the CLI

Hi all,

After deploying Harbor, I changed the harbor-ui service's spec.type to NodePort, so I can reach the harbor-ui from a browser via host IP + port.
But when I want to log in to the registry, I hit an error: running "docker login core.harbor.domain" fails with
"Error response from daemon: Get https://core.harbor.domain/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)".
How do I need to configure my Harbor service so that I can log in from the command line and push my local images?
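A hedged sketch of the usual prerequisites on the docker host: core.harbor.domain must resolve to something reachable (a node IP when using NodePort), and the docker daemon must either trust Harbor's certificate or, for test setups only, treat the registry as insecure:

# Resolve the chart's default hostname to a reachable IP:
echo "<node-or-ingress-ip> core.harbor.domain" | sudo tee -a /etc/hosts
# Test setups only: skip TLS verification for this registry.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{ "insecure-registries": ["core.harbor.domain"] }
EOF
sudo systemctl restart docker
# With NodePort you may also need the port, e.g. core.harbor.domain:<node-port>
docker login core.harbor.domain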

What should commonName in values.yaml be set to?

Error: render error in "harbor/templates/nginx/secret.yaml": template: harbor/templates/nginx/secret.yaml:3:13: executing "harbor/templates/nginx/secret.yaml" at <genSignedCert .Value...>: error calling genSignedCert: error parsing ip:
What should commonName in values.yaml be configured as?
