
connect's People

Contributors

ag-adampike, dsaxton-1password, edif2008, ekmoore, jillianwilson, jpcoenen, mackenbach, simonbarendse, thatguygriff, verkaufer


connect's Issues

Push images to a different registry, in addition to Docker Hub

Is your feature request related to a problem? Please describe.

In order to combat Docker Hub's imposed rate limits, it would be great to host these images on GHCR or Quay.io as well.

Describe the solution you'd like

Push the connect-api and connect-sync images to an additional container registry like Quay or GHCR.

There is nothing like your Kubernetes cluster being unable to use secrets because of Docker Hub's very low pull rate limits.

Terraform Configuration Not Connecting

I have a relatively straightforward 1Password Connect setup configured in Terraform. The server seems to boot without any errors, but I don't see any connections in the 1Password dashboard.

resource "kubernetes_namespace" "op_connect" {
  provider = kubernetes.default-cluster
  metadata {
    name = "op-connect"
  }

  lifecycle {
    ignore_changes = [
      metadata[0].labels,
    ]
  }
}

resource "helm_release" "op_connect" {
  provider   = helm.default-cluster
  name       = "op-connect"
  namespace  = kubernetes_namespace.op_connect.metadata[0].name
  repository = "https://1password.github.io/connect-helm-charts/"
  chart      = "connect"
  version    = "1.8.1"
  set_sensitive {
    name  = "connect.credentials_base64"
    type  = "string"
    value = var.connect_credentials
  }

  set_sensitive {
    name  = "operator.token.value"
    type  = "string"
    value = var.operator_token
  }

  # Deploy the Kubernetes Operator alongside the Connect Server
  set {
    name  = "operator.create"
    value = true
  }
}

The variables are injected using Terraform Cloud. The value of credentials_base64 was encoded using the script from the ECS Fargate doc:

cat 1password-credentials.json | base64 | tr '/+' '_-' | tr -d '=' | tr -d '\n'

The logs don't seem to have any errors.

Any ideas on what could be wrong would be appreciated.
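(One thing that may be worth ruling out, purely as an assumption about what the chart expects: the tr pipeline from the ECS doc produces URL-safe, unpadded base64 for OP_SESSION, whereas connect.credentials_base64 may want plain standard base64 of the file:)

# plain standard base64 of the credentials file, newlines stripped
cat 1password-credentials.json | base64 | tr -d '\n'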

The "Getting Started" section in the kubernetes example appears to be incorrect

This might be because my use case is weird, but still.

I'm using Kustomize with the helmCharts option to inflate the 1password connect charts, but I'm deploying the secret that contains 1password-credentials.json using the command in the kubernetes example.

This is the command I'm referring to

kubectl create secret generic op-credentials --from-file=1password-credentials.json=./1password-credentials.json

That produces the following secret:

apiVersion: v1
data:
  1password-credentials.json:  <base64_encoded_file_contents>
kind: Secret
metadata:
  creationTimestamp: "2023-03-04T05:14:48Z"
  name: op-credentials
  namespace: default
  resourceVersion: "9983"
  uid: 4454843d-8d82-4a87-8f7a-366eaf34e753
type: Opaque

The problem is that what gets assigned to the OP_SESSION environment variable in the connect-api container is already decoded, so it's the plain JSON contents of the file (I quadruple-checked by tunneling into the container). The container, however, apparently expects the value to still be base64-encoded and tries to decode it again.

This causes 500 errors when making requests to the API, which look like this in the service logs:

Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), Wrapped: (failed to loadCredentialsFile), Wrapped: (LoadLocalAuthV2 failed to credentialsDataFromBase64), illegal base64 data at input byte 0"

I managed to make things work by encoding the JSON to base64, encoding that base64 value to base64 AGAIN, and then manually creating the secret manifest file and assigning that doubly-encoded value to the data.1password-credentials.json key.
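If the container really does expect a doubly-encoded value, an equivalent to that workaround is to base64-encode the file once yourself and let kubectl create secret apply the second layer of encoding when it stores the Secret (a sketch; GNU base64 shown, paths and secret name follow the example above):

# encode once; kubectl will base64-encode this value again inside the Secret
base64 -w 0 1password-credentials.json > 1password-credentials.b64
kubectl create secret generic op-credentials \
  --from-file=1password-credentials.json=1password-credentials.b64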

There's a chance that this is caused by a combination of the kubernetes (v1.25.4) and chart (non-specified, so I assume latest) versions I'm using. The images for both connect-api and connect-sync are at version 1.5.7.

It feels more like either a bug in the container or an error in the example docs, although my money is on the bug; I see no sensible reason why the file should be encoded and decoded twice.

I'm equal parts frustrated and proud for having finally figured this out.

Cheers

Memory Leak in Connect API Container

Your environment

Chart Version: connect-1.10.0
Image version: 1password/connect-api:1.6.1
Helm Version: 3.11.1
Kubernetes Version: v1.24.9-gke.3200

What happened?

I found that the onepassword-connect pod was using a great deal of memory, specifically the connect-api container. The connect-api container was upgraded to the latest version, 1.6.1, and there still seems to be a pretty obvious memory leak.

Steps to reproduce

Deploy the 1Password Connect Helm chart on Kubernetes
View pod memory metrics within an hour or two

Notes & Logs

[Screenshots: connect-api container memory usage climbing steadily over time]

max(container_memory_working_set_bytes{container="connect-api", pod="YOUR-POD-NAME"} / (1024*1024)) by (container)
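For a rough cross-check without Prometheus, kubectl top can report per-container usage, assuming metrics-server is installed (pod name and namespace are placeholders):

# show memory per container of the Connect pod
kubectl top pod YOUR-POD-NAME --containers -n your-namespace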

Unable to update a subset of item attributes

Your environment

Chart Version: 1.10.0
Helm Version: 3.11.1
Kubernetes Version: v1.24.9
Docker Compose: v2.15.1

What happened?

I used PATCH /v1/vaults/{vaultUUID}/items/{itemUUID} as found in the API reference but got a 400 response code when using the 1Password connect server deployed via both the Docker Compose example and the connect server Helm chart with Minikube.

I've run curl -i -X PATCH $OP_CONNECT_HOST/v1/vaults/$OP_VAULT/items/$OP_ITEM_ID -H "Content-Type: application\json" -H "Authorization: Bearer $OP_CONNECT_TOKEN" -d "{ "op":"replace", "path":"/title", "value":"hello-world" }" against both deployments, which returned a 400. I've also tried this with different paths, like title, /fields/username, and /fields/username/label, all of which returned 400 as well.

On both deployments of the 1Password Connect server I was able to successfully get and delete items from the vault, which makes me think the token in use is fine.
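(Note: the command above loses its inner double quotes to the shell and sends application\json as the content type. A form that survives shell quoting and wraps the operations in an array, assuming the endpoint expects an RFC 6902 JSON Patch body, would look like this:)

# single quotes preserve the JSON; the body is an array of patch operations
curl -i -X PATCH "$OP_CONNECT_HOST/v1/vaults/$OP_VAULT/items/$OP_ITEM_ID" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OP_CONNECT_TOKEN" \
  -d '[{ "op": "replace", "path": "/title", "value": "hello-world" }]'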

What did you expect to happen?

I expected to update part of the 1Password item.

Steps to reproduce

  1. Set up the 1Password Connect server using either Helm or Docker Compose
  2. Replace a subset of a 1Password item's attributes using PATCH /v1/vaults/{vaultUUID}/items/{itemUUID}

Notes & Logs

Vault ID and Item ID have been replaced with X

compose-op-connect-api-1   | {"log_message":"(I) PATCH /v1/vaults/X/items/X","timestamp":"2023-03-01T16:34:11.896995668Z","level":3,"scope":{"request_id":"3f2a15c4-953a-424b-ba6e-f38ff52f461d"}}

compose-op-connect-api-1   | {"log_message":"(I) PATCH /v1/vaults/X/items/X completed (400: Bad Request) in 31ms","timestamp":"2023-03-01T16:34:11.928466168Z","level":3,"scope":{"request_id":"3f2a15c4-953a-424b-ba6e-f38ff52f461d"}}
{"status":400,"message":"Invalid request body or parameter"}

/metrics endpoint exposed

I have a Connect server deployed in a different cloud service than my main app, so I am using Let's Encrypt to protect the communication from my app to the 1Password Connect server. I noticed that the /metrics endpoint on the server is publicly available and responds with information about the server. I am not sure whether you (at 1Password) are aware of this. I don't think this is sensitive information, but I think it would be better if it weren't public.

API support for "DOCUMENT"

From the API docs it seems it's only possible to list and download files. Is it also possible to push them?

Permissions and docker-compose

When following the "Get Started" section and using the 1Password CLI to generate 1password-credentials.json, the file is created with 600 permissions, owned by the current user (on many systems this will be the user with ID 1000). Then, when using the provided docker-compose.yml, 1password-credentials.json is mounted as a volume into the containers. The container is then unable to open the file: it runs as the user opuser with ID 999, while the mounted file keeps the permissions it had on the host, namely 600 and ownership 1000:1000.

An example of an error in the container:

{"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), Wrapped: (failed to loadCredentialsFile), Wrapped: (LoadLocalAuthV2 failed to credentialsDataFromDisk), open /home/opuser/.op/1password-credentials.json: permission denied","timestamp":"2022-03-21T13:45:00.240003924Z","level":1}

Workaround: change the permissions of 1password-credentials.json to something more permissive, like 666. However, loosening permissions on a sensitive file like this one is probably not a good idea.
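A possibly less permissive alternative, untested here, is to hand ownership of the file to the UID the containers run as (999, per the opuser account mentioned above) instead of widening the mode bits:

# give the file to the container's UID/GID and keep it private
sudo chown 999:999 1password-credentials.json
sudo chmod 600 1password-credentials.json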

Software and versions used to test:

docker-compose version 1.29.2, build 5becea4c
Docker version 19.03.9, build 9d988398e7
OP 2.0.0

Fields with type MONTH_YEAR are broken

Using the Connect API server to retrieve items that contain fields of type MONTH_YEAR is broken, as these fields don't contain any value.

Example credit card item response from the API:

{
  "category": "CREDIT_CARD",
  "createdAt": "2021-08-30T14:22:39Z",
  "fields": [
    { "id": "notesPlain",                 "label": "notesPlain",            "purpose": "NOTES",                 "type": "STRING" },
    { "id": "cardholder",                 "label": "cardholder name",                                           "type": "STRING",             "value": "Johnny Dohny" },
    { "id": "type",                       "label": "type",                                                      "type": "CREDIT_CARD_TYPE",   "value": "mc" },
    { "id": "ccnum",                      "label": "number",                                                    "type": "CREDIT_CARD_NUMBER", "value": "2221 1234 5678 1234 5678" },
    { "id": "cvv",                        "label": "verification number",                                       "type": "CONCEALED",          "value": "123" },
    { "id": "expiry",                     "label": "expiry date",                                               "type": "MONTH_YEAR" },
    { "id": "validFrom",                  "label": "valid from",                                                "type": "MONTH_YEAR" },
    { "id": "n4fpws2ygmimbfgkcha5auu6zi", "label": "test month year",                                           "type": "MONTH_YEAR" },
    { "id": "bank",                       "label": "issuing bank",          "section": { "id": "contactInfo" }, "type": "STRING" },
    { "id": "phoneLocal",                 "label": "phone (local)",         "section": { "id": "contactInfo" }, "type": "PHONE" },
    { "id": "phoneTollFree",              "label": "phone (toll free)",     "section": { "id": "contactInfo" }, "type": "PHONE" },
    { "id": "phoneIntl",                  "label": "phone (intl)",          "section": { "id": "contactInfo" }, "type": "PHONE" },
    { "id": "website",                    "label": "website",               "section": { "id": "contactInfo" }, "type": "URL" },
    { "id": "pin",                        "label": "PIN",                   "section": { "id": "details" },     "type": "CONCEALED" },
    { "id": "creditLimit",                "label": "credit limit",          "section": { "id": "details" },     "type": "STRING" },
    { "id": "cashLimit",                  "label": "cash withdrawal limit", "section": { "id": "details" },     "type": "STRING" },
    { "id": "interest",                   "label": "interest rate",         "section": { "id": "details" },     "type": "STRING" },
    { "id": "issuenumber",                "label": "issue number",          "section": { "id": "details" },     "type": "STRING" }
  ],
  "id": "hs4myjtn2ev76iyraevsnttty4",
  "lastEditedBy": "XXXXXXXXXXXXXXXXXXXXXXXXXX",
  "sections": [
    { "id": "details", "label": "Additional Details" },
    { "id": "contactInfo", "label": "Contact Information" },
    { "label": "Details" }
  ],
  "title": "Very Secret Master Card",
  "updatedAt": "2021-09-06T08:14:24Z",
  "vault": { "id": "xxxxxxxxxxxxxxxxxxxxxxxxxx" },
  "version": 3
}

Critical Vulnerabilities in Connect images

Hey folks,

Running docker scan on 1password/connect-api produces a report with two critical vulnerabilities, as demonstrated below:

✗ Critical severity vulnerability found in zlib/zlib1g
  Description: Out-of-bounds Write
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-ZLIB-2976151
  Introduced through: meta-common-packages@meta
  From: meta-common-packages@meta > zlib/zlib1g@1:1.2.11.dfsg-2+deb11u1

✗ Critical severity vulnerability found in openssl/libssl1.1
  Description: OS Command Injection
  Info: https://snyk.io/vuln/SNYK-DEBIAN11-OPENSSL-2933518
  Introduced through: ca-certificates@20210119, [email protected]
  From: ca-certificates@20210119 > [email protected]+deb11u2 > openssl/[email protected]+deb11u2
  From: [email protected] > shadow/passwd@1:4.8.1-1 > pam/[email protected]+deb11u1 > libnsl/[email protected] > libtirpc/[email protected] > krb5/[email protected]+deb11u1 > krb5/[email protected]+deb11u1 > openssl/[email protected]+deb11u2
  From: ca-certificates@20210119 > [email protected]+deb11u2
  Fixed in: 1.1.1n-0+deb11u3

Given that Connect is closed source, we cannot determine the contextual severity of either. Could we request that these two critical vulnerabilities be addressed (or, ideally, that Connect be open sourced 😉)?

GKE AutoPilot: No NET_BROADCAST

I tried to test a deployment to GKE in Autopilot mode tonight and got an error deploying the Helm chart: one of the containers requests the NET_BROADCAST capability, which is not available in Autopilot. Is this a functional requirement that means I need to move back to a traditional GKE setup, or is this something that can be configured?

Incorrect syntax for fields without section when fetched via Connect API

I've just set up 1Password Connect in my company's network, but immediately ran into issues fetching fields from secret items that didn't have associated sections:

$ op read op://Systems-staging/.env.api/SOME_API_KEY
[ERROR] 2022/08/24 12:31:40 could not read secret op://Systems-staging/.env.api/SOME_API_KEY: could not find field or file SOME_API_KEY on item .env.api in vault Systems-staging

This is the syntax described for fields without sections in the documentation, and I'm able to fetch the field in question using the above syntax when not routing my request through our Connect API instance.

After confirming that items with sections were resolving properly, I tried a different syntax on a hunch:

$ op read op://Systems-staging/.env.api/undefined/SOME_API_KEY
abcdef123456

It appears specifying the section as undefined allows fetching fields with no section. This is concerning both in that it contradicts official documentation and that it implies ambiguous behavior when a section's name is set to the string "undefined".

We're running in ECS Fargate and using the latest versions of the Connect API/Sync containers (v1.5.6). I haven't exhaustively tested this issue on other secret types, but the item exhibiting this behavior is an API Credential secret.

Ability to share vaults to users

Summary

Ability to set permissions on and share 1Password vaults with users and groups.
The functionality I am referring to is available via the website and documented here: https://support.1password.com/create-share-vaults-teams/#edit-a-vault

Use cases

There are various use cases where I could see it being useful to create and share vaults from the downstream Terraform provider.
An example is the creation of VPN credentials that are shared to an item in a personal vault.

Proposed solution

I realise this is not supported in the upstream 1Password API, but I'm unsure where else I can find more information.
Is this functionality on the road map?

Is there a workaround to accomplish this today?

Not to my knowledge.

References & Prior Work

Available on the website.

https://support.1password.com/create-share-vaults-teams/#edit-a-vault

How to update a password item with the REST API?

Hi,

I'm trying to programmatically update a password via the REST API, but the two options we see have issues:

  • using the PUT /v1/vaults/{vaultUUID}/items/{itemUUID} operation does not work, because then we also have to set things like the entropy and password strength (which should be generated by 1Password)
  • using the PATCH /v1/vaults/{vaultUUID}/items/{itemUUID} operation does not work, because there's no way to reference the password value with something like "path": "/fields/password/value", presumably because fields is an array

Is there a way to do this with the REST API?
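One thing that might be worth trying, assuming the PATCH endpoint accepts RFC 6902 pointers that address fields by their ID, and that the built-in password field on a Login item has the ID password (both are assumptions, not confirmed here):

# replace only the password field's value, addressing the field by ID
curl -X PATCH "$OP_CONNECT_HOST/v1/vaults/$OP_VAULT/items/$OP_ITEM_ID" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OP_CONNECT_TOKEN" \
  -d '[{ "op": "replace", "path": "/fields/password/value", "value": "new-password" }]'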

Incorrect Item.File info returned by Connect Server

Testing against version 1.5.5 of connect server, using v1.5.0 of connect-sdk-go. It's returning an empty File.Name, and an incorrect file ID in the ContentPath, but only when there's more than one file in the item, and only for that second file.

Do a GetItem, then loop over the Item.Files and print them. You'll see the first one is correct, and the others are not.

Correct first file:

&onepassword.File{
    ID:"wtnh5e6lmncudctdredacted",
    Name:"tls.crt",
    Section:(*onepassword.ItemSection)(nil),
    Size:1903,
    ContentPath:"/v1/vaults/ch3poqtcarbpviredacted/items/ygq6tyfo3adhzkredacted/files/wtnh5e6lmncudctdredacted/content",
    content:[]uint8(nil)
}

Incorrect second file: (notice how the ID does not match the one in ContentPath, and Name is empty when it should be "tls.key".)

&onepassword.File{
    ID:"tjpabagntylpvredacted",  // does not match ID in ContentPath
    Name:"",  // missing Name
    Section:(*onepassword.ItemSection)(0xc00171d580),
    Size:1675,
    ContentPath:"/v1/vaults/ch3poqtcarbpviredacted/items/ygq6tyfo3adhzkredacted/files/kgt4gkxbavhhbndvredacted/content",
    content:[]uint8(nil)
}

GetFileContent broken in 1.5.4

Docker images: 1password/connect-api:1.5.4 and 1password/connect-sync:1.5.4.
Connect-sdk-go version: v1.4.0.

With 1.5.0 of Connect, the following code worked, but with 1.5.4 it produces

panic: error with GetFileContent: need at least version 1.3.0 of Connect for this function, detected version 1.2.0 (or earlier). Please update your Connect server

Here's the code to repeat the problem:

package main

import (
	"fmt"

	"github.com/1Password/connect-sdk-go/connect"
	"github.com/1Password/connect-sdk-go/onepassword"
)

func main() {
	connectHost := "http://localhost:8080"
	token := "redacted"
	vaultName := "my-vault"
	itemName := "docker-credentials"
	fileName := "docker-credentials.json"
	client := connect.NewClientWithUserAgent(connectHost, token, "my-test")

	// get vault
	vaults, err := client.GetVaultsByTitle(vaultName)
	if err != nil {
		panic(err)
	}
	if len(vaults) != 1 {
		panic("more or less than one vault")
	}

	// get item
	// use GetItemsByTitle instead of GetItemByTitle in order to handle length cases
	items, err := client.GetItemsByTitle(itemName, vaults[0].ID)
	if err != nil {
		panic(err)
	}
	item := &onepassword.Item{}
	switch {
	case len(items) == 1:
		item, err = client.GetItem(items[0].ID, items[0].Vault.ID)
		if err != nil {
			panic(err)
		}
	case len(items) > 1:
		panic("expected one item")
	}

	// get file contents
	contents := []byte{}
	for _, file := range item.Files {
		if file.Name == fileName {
			contents, err = client.GetFileContent(file)
			if err != nil {
				panic(fmt.Sprintf("error with GetFileContent: %v", err))
			}
		}
	}

	fmt.Printf("%#v\n", string(contents))
}

Resource usage explanation

I've been tracking the metrics of a 1Password Connect server deployed in a Kubernetes environment. I'd like to understand why the usage pattern shows this gradual growth.

CPU / Memory
The dots represent container restarts.


Unable to Create SSH Key

When adding an item via the connect API with category = "SSH_KEY", the resultant private_key is continually returned as Field type = "STRING" instead of the passed in Field type = "SSHKEY".

Example:
item.json

{
  "vault": {
    "id": "abc123"
  },
  "title": "SSH Key",
  "category": "SSH_KEY",
  "fields": [
    {
      "id": "private_key",
      "label": "private key",
      "type": "SSHKEY",
      "value": "-----BEGIN OPENSSH PRIVATE KEY-----\nabcdefghijklmnopqrstuvwxyz\n...\n-----END OPENSSH PRIVATE KEY-----\n"
    }
  ]
}

curl -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $OP_API_TOKEN" -d @item.json https://connect-server.example.com/v1/vaults/abc123/items

{
  "vault": {
    "id": "abc123"
  },
  "title": "SSH Key",
  "category": "SSH_KEY",
  "fields": [
    {
      "id": "private_key",
      "label": "private key",
      "type": "STRING",
      "value": "-----BEGIN OPENSSH PRIVATE KEY-----\nabcdefghijklmnopqrstuvwxyz\n...\n-----END OPENSSH PRIVATE KEY-----\n"
    }
  ]
}

When getting an existing SSH_KEY item from the Connect API, the response includes the correct field type "SSHKEY". I've tested using both field types SSH_KEY and SSHKEY, as I've seen both mentioned in the docs, in returned responses, and in connect-sdk-go. I ran into this issue while working on adding SSH key support to the 1Password Terraform provider.

Unable to create a token from the CLI even with proper access to the vault

Hi there! I am a new 1Password Connect user. Following the documentation on this GitHub README, I created a Connect Server (not deployed yet at this point) but creating a Token fails with the error: Couldn't issue the token: "can't grant access to the vault: You do not have the requested access to vault

➜  ~ op connect server create Connect --vaults redacted
File /Users/arun/1password-credentials.json already exists, overwrite it? [Y/n]
Set up a Connect server.
UUID: redacted
Credentials file: /Users/arun/1password-credentials.json
➜  ~ op connect token create Test --server Connect --vaults redacted
[ERROR] 2023/02/14 00:05:08 Couldn't issue the token: "can't grant access to the vault: You do not have the requested access to vault 'Temp'."

If it matters, even after deploying the Connect server on my infrastructure (a Docker container in my MacBook), token creation fails.

Any insights on why this is happening would be great, thank you!

High memory usage/possible memory leak

We use 1Password connect and operator with the connect helm chart across several Kubernetes clusters and have noticed very high memory usage (up to 32GB on a large cluster). In our monitoring we can see that the memory consumption goes up steadily even when no new 1Password items are created. Could this be a memory leak?
1Password connect version: 1.2.0
1Password operator version: 1.0.1
1Password connect helm chart version: 1.4.0
Kubernetes versions: between 1.18.5 and 1.19.10

credentials file is not version 2

Hey,

I wanted to try out the Terraform 1Password provider and was following along with support.1password.com/secrets-automation/, using this docker-compose.yaml.

I generated a credentials file in 1Password, then saved it and mounted it as 1password-credentials.json into the containers:

{
    "sectionName": "#",
    "details": {
        "fields": [],
        "documentAttributes": {
            "integrityHash": "U-INTEGRITYHASH",
            "fileName": "1password-credentials.json",
            "unencryptedSize": 1093,
            "signingKey": {
                "kid": "KID",
                "ext": true,
                "key_ops": [
                    "encrypt",
                    "decrypt"
                ],
                "k": "k",
                "alg": "A256GCM",
                "kty": "oct"
            },
            "encryptionKey": {
                "k": "K_k",
                "ext": true,
                "key_ops": [
                    "encrypt",
                    "decrypt"
                ],
                "kid": "KID",
                "alg": "A256GCM",
                "kty": "oct"
            },
            "documentId": "DOC_ID",
            "encryptedSize": 1109,
            "nonce": "NONCE"
        }
    },
    "uuid": "UUID",
    "updatedAt": 1636373455,
    "createdAt": 1636373455,
    "categoryUUID": "006",
    "overview": {
        "url": "",
        "pbe": 0,
        "pgrng": false,
        "title": "1Password - automation Credentials File",
        "ainfo": "1 KB",
        "ps": 0
    }
}

docker-compose logs show:

op-connect-sync_1  | {"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-11-08T12:48:14.2144965Z","level":3}
op-connect-sync_1  | {"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 16s), Wrapped: (failed to FindCredentialsUniqueKey), credentials file is not version 2","timestamp":"2021-11-08T12:48:14.2161563Z","level":1}
op-connect-sync_1  | {"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-11-08T12:48:30.1860457Z","level":3}
op-connect-sync_1  | {"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), credentials file is not version 2","timestamp":"2021-11-08T12:48:30.1877143Z","level":1}
op-connect-sync_1  | {"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-11-08T12:49:00.1580402Z","level":3}
op-connect-sync_1  | {"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), credentials file is not version 2","timestamp":"2021-11-08T12:49:00.1600499Z","level":1}
op-connect-sync_1  | {"log_message":"(I) ### syncer credentials bootstrap ### ","timestamp":"2021-11-08T12:49:30.1265922Z","level":3}
op-connect-sync_1  | {"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), credentials file is not version 2","timestamp":"2021-11-08T12:49:30.1283437Z","level":1}

I couldn't find any version specs. How do I change the version? Is the credentials file OK?
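The error message suggests Connect looks for a version marker in the file, so a quick sanity check might be the following (field name assumed from the error text; requires jq):

# a genuine Connect credentials file would be expected to carry a version field
jq '.version' 1password-credentials.json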

Log levels should be aligned with syslog

We are seeing a bunch of logs from connect-api that indicate success but the log level seems off. An example log:

{
    "log_message": "(I) GET /v1/vaults/XXX/items/XXX completed (200: OK)",
    "timestamp": "2022-07-31T11:05:55.785430724Z",
    "level": 3,
    "scope": {
        "request_id": "XXX",
        "jti": "XXX"
    }
}

The level says 3, which would map to the syslog level Error, but the message is tagged (I), which looks like an info log. This causes most of the logs from connect-api to show up as errors in our log stack, even though they appear to be info-level logs.

  1. Which is the correct log level for this?
  2. Could you use syslog levels for the level field?

Invalid item UUID errors

We started to see a bunch of errors in our logs from connect-api like this:

{
    "log_message": "(E) 400: Invalid Item UUID",
    "timestamp": "2022-07-31T11:05:55.79723929Z",
    "level": 1,
    "scope": {
        "request_id": "XXX",
        "jti": "XXX"
    }
}

connect-operator version: 1password/onepassword-operator:1.5.0
connect-api version: 1password/connect-api:1.5.6
connect-sync version: 1password/connect-sync:1.5.6

Unable to create token with write access on the UI

After creating a Connect Server with read+write access to a vault, I am trying to create a Token with access to the same vault, but the level of access defaults to just "Read". I am unable to grant "Write" access to the token despite the vault's write access being available to the Connect Server.


Could you help me understand why this might be happening?

Incorrect spec for Item: File

The spec says that Item: File contains a property called content_path.
Instead, the current live API returns the property in camelCase, as contentPath.

User Permissions / Database Creation Issues

Two issues:

For some reason, despite supplying the MYSQL variables, the Docker container appears to ignore them and tries to create a local 1password.sqlite database.

I have tried many scenarios. Typically I use USER/GROUP 1000, but I have also tried 999, as well as running without USER or GROUP variables defined, all ending with the same outcome.

ENVIRONMENT:

OS: Ubuntu 22.04.3 LTS x86_64
Host: Compute Instance
Kernel: 5.15.0-78-generic
Uptime: 1 day, 18 hours, 2 mins
Packages: 2727 (dpkg), 9 (snap)
Shell: zsh 5.8.1
Resolution: 1280x800
Terminal: /dev/pts/2
CPU: AMD EPYC 7542 (2) @ 2.899GHz
GPU: 00:01.0 Vendor 1234 Device 1111
Memory: 1917MiB / 3911MiB

DOCKER COMPOSE FILE:

version: "3.7"

networks:
default:
external:
name: bridge
docweb:
name: docweb
external: true

services:

op-connect-api:
image: 1password/connect-api:latest
hostname: op-connect-api
container_name: op-connect-api
restart: "unless-stopped"
environment:
TZ: "America/Los_Angeles"
PUID: 1000
PGID: 1000
OP_HTTP_PORT: 8080
OP_HTTPS_PORT: 8443
MYSQL_DATABASE: "1passworddb"
MYSQL_USERNAME: "1password"
MYSQL_PASSWORD: $(op read op://rr-docker/1passworddb/password)
OP_CONNECT_HOST: "http://localhost:8080"
OP_CONNECT_TOKEN: $OP_API_TOKEN
XDG_DATA_HOME: "/home/opuser"
ports:
- "8080:8080"
networks:
- docweb
volumes:
- "/opt/docker/app_data/1password/1password-credentials.json:/home/opuser/.op/1password-credentials.json"
- "/opt/docker/app_data/1password/data:/home/opuser/.op/data"
extra_hosts:
- "host.docker.internal:host-gateway"

op-connect-sync:
image: 1password/connect-sync:latest
hostname: op-connect-sync
container_name: op-connect-sync
restart: "unless-stopped"
environment:
TZ: "America/Los_Angeles"
PUID: 999
PGID: 999
OP_HTTP_PORT: 8080
OP_HTTPS_PORT: 8443
MYSQL_DATABASE: "1passworddb"
MYSQL_USERNAME: "1password"
MYSQL_PASSWORD: $(op read op://rr-docker/1passworddb/password)
OP_CONNECT_HOST: "http://localhost:8080"
OP_CONNECT_TOKEN: $OP_API_TOKEN
XDG_DATA_HOME: "/home/opuser"
networks:
- docweb
ports:
- "8081:8080"
volumes:
- "/opt/docker/app_data/1password//1password-credentials.json:/home/opuser/.op/1password-credentials.json"
- "/opt/docker/app_data/1password/data:/home/opuser/.op/data"
extra_hosts:
- "host.docker.internal:host-gateway"

volumes:
data:

LOGFILES:

3 | log_message=(I) starting 1Password Connect Sync ... timestamp=2023-08-11T07:49:41.525326255-07:00
3 | log_message=(I) no existing database found, will initialize at /home/opuser/.op/data/1password.sqlite timestamp=2023-08-11T07:49:41.52849658-07:00
Error: Server: (failed to OpenDefault), Wrapped: (failed to open db), unable to open database file: no such file or directory
Usage:
connect-sync [flags]
Flags:
-h, --help help for connect-sync
-v, --version version for connect-sync

Unable to configure TLS

Hello,

I'm using a modified version of the AWS Fargate container template and am attempting to configure TLS with the Let's Encrypt environment variables.

I have an HTTPS listener that is listening on port 443 and forwarding to 8443. I have updated the various connect-api container ports in the template to 8443 as well.

The TLS handshake is failing with:

{"log_message":"(E) Requesting LetsEncrypt certificate: [example.com] solving challenges: example.com: no solvers available for remaining challenges (configured=[tls-alpn-01] offered=[http-01 dns-01 tls-alpn-01] remaining=[http-01 dns-01]) (order=https://acme-v02.api.letsencrypt.org/acme/order/394822390/61236891500) (ca=https://acme-v02.api.letsencrypt.org/directory)","timestamp":"2022-02-04T19:20:05.741009245Z","level":1}

I own the domain. The docs are a bit sparse and it's marked as a TODO in the template. Can you provide further clarification of how to configure this or how to troubleshoot it?

Also, do you have a recommended method for placing custom TLS files in the container?

Thanks,

Alex

/items filtering restrictions (category)

When filtering on the item list endpoint, is 'title' the only filterable field? I'd like to filter by 'category', but that returns "Unsupported request filter". I'm curious whether this is a bug or whether filtering is limited to a subset of fields on the item, and if so, whether that's documented anywhere.
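For reference, the filter syntax shown in the API reference is a SCIM-style title comparison; something like the following works against the items endpoint (host, vault, and token variables assumed):

# list items whose title matches, letting curl URL-encode the filter expression
curl -sG "$OP_CONNECT_HOST/v1/vaults/$OP_VAULT/items" \
  -H "Authorization: Bearer $OP_CONNECT_TOKEN" \
  --data-urlencode 'filter=title eq "my-item"'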

Open Source

Could you open source connect? I'd feel a lot better running it in my production cluster if I could audit the code.

Remove filesystem permissions check

It's good practice to set strict security contexts for containers and pods in Kubernetes, but Connect won't start if the directory is not owned by the current user, even if it can write to it. I feel this check is counter-productive and should be removed. I don't see how this condition improves security; it only serves to weaken it in environments with proper security contexts.

❯ k logs onepassword-connect-85bf47bb57-4c9tc
Defaulted container "connect-api" out of: connect-api, connect-sync
Error: Server: (failed to OpenDefault), Wrapped: (failed to defaultPath), failed to ConfigDir: Can't continue. We can't safely access "/.op" because it's not owned by the current user. Change the owner or logged in user and try again.
security context
securityContext: {
	capabilities: drop: ["ALL"]
	readOnlyRootFilesystem:   true
	allowPrivilegeEscalation: false
}
pod security context
securityContext: {
	runAsUser:           1000
	runAsGroup:          3000
	runAsNonRoot:        true
	fsGroup:             2000
	seccompProfile: type: v1.#SeccompProfileTypeRuntimeDefault
}

Feature Request: Support for Unprivileged execution of OnePassword Connect.

Over the past days we ran into some issues trying to deploy the 1Password Connect and 1Password Connect Operator containers on our client's cluster using your provided Helm chart and instructions.

The two major issues we encountered are that the onepassword-connect deployment (api & sync) needs the NET_BROADCAST capability and that the initContainer wants to run as root. The cluster we want to deploy to is set up by a Belgian provider and enforces a policy that all containers must run as non-root (a security measure).

We were able to mitigate these issues by creating a PodSecurityPolicy that allows any capability to be requested and overrules the non-root policy, and by patching the deployment with a service account linked to this PodSecurityPolicy via a ClusterRole and binding.

However, as this was done in a proof-of-concept context, it is crucial for us and our client that the final solution allows the pods to run unprivileged. We are convinced this could be done by adding an option to the Helm chart to create the service account, roles, binding, and policy needed for onepassword-connect to work in an unprivileged environment. We noticed the onepassword-connect-operator does include this, and we can confirm it works flawlessly in our environment.

Is this something that is planned or could be considered for a future release? We would be more than happy to incorporate 1Password Connect into our workflows.

Incorrect links under Deployment Examples

The bottom two links under “Deployment Examples” in https://github.com/1Password/connect#deploy-connect-server incorrectly point to the Docker Compose example. I believe it should be:

Deployment Examples:

- [Helm](https://github.com/1Password/connect-helm-charts/tree/main/charts/connect#deploying-1password-connect)
- [Docker Compose](https://github.com/1Password/connect/blob/main/examples/docker/compose/)
- [Kubernetes Manifest](https://github.com/1Password/connect/tree/main/examples/kubernetes)
- [AWS Elastic Container Service](https://github.com/1Password/connect/tree/main/examples/aws-ecs-fargate)

op cli + 1password connect 141 vault limit on create token

Works:
op connect token create "token_with_141_vaults" --server "your_server_id" --expires-in 1h --vault vault1 --vault vault2 ... --vault vault141

Doesn't work:
op connect token create "token_with_142_vaults" --server "your_server_id" --expires-in 1h --vault vault1 --vault vault2 ... --vault vault142

Error

[ERROR] 2023/10/03 10:58:37 failed to RegisterToken: "Validation: (400) (Bad Request), The structure of request was invalid.

Tested on versions of op cli from v2.6.0 through the latest v2.21.0 and it's been a problem consistently for the past year.

ECS / EFS - User ID Issues

Hi folks,

I'm having a few issues trying to deploy out a test environment using ECS and EFS as my data volume.

I am getting user permission errors when, I believe, the container is trying to read the mounted EFS volume.

When the containers are started, I get the following;

Error: Server: (failed to setupServer), Wrapped: (failed to NewController), Can't continue. We can't safely access "/mnt/opc/data/.op/data/files" because it's not owned by the current user. Change the owner or logged in user and try again.

I'm doing this with Terraform, or attempting to (!). Here are my EFS access point and ECS config; I assume the error will jump out at someone between these two. I feel like it has to do with the user ID given to the EFS access point, but I am unsure what to set it to in order to get this working. I can't exec into the container because it tears down immediately when this error appears.

resource "aws_efs_access_point" "opc_user_data_efs_access_point" {
  file_system_id = aws_efs_file_system.opc_ecs_volume_efs_file_system.id
  posix_user {
    gid = 1000
    uid = 1000
  }
  root_directory {
    path = local.efs_root_access_point_path
    creation_info {
      owner_gid   = 1000
      owner_uid   = 1000
      permissions = 775
    }
  }
  tags = merge({ Name = "${local.efs_name}-access-point" })
}

resource "aws_ecs_task_definition" "opc_api_ecs_task_definition" {
  family                   = "opc-api-task-def"
  ###OtherConfig
  container_definitions = jsonencode([{
    name  = "opc-api"
    image = "1password/connect-api:latest"
    portMappings = [
      {
        containerPort = 8080
        hostPort      = 8080
      }
    ]
    environment = [
      {
        name  = "OP_SESSION"
        value = var.op_base64_credentials
      },
      {
        name = "XDG_DATA_HOME"
        value = "/mnt/opc/data"
      }
    ]
    command = []
    mountPoints = [
      {
        containerPath = "/mnt/opc/data"
        sourceVolume  = "connect-data"
      }
    ]
  }])
  volume {
    name = "connect-data"
    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.opc_ecs_volume_efs_file_system.id
      root_directory     = "/"
      transit_encryption = "ENABLED"
      authorization_config {
        iam             = "ENABLED"
        access_point_id = aws_efs_access_point.opc_user_data_efs_access_point.id
      }
    }
  }
  tags = merge({ Name = "opc-connect-service-task-def" })
}

Allow option for DNS-01 challenge for LetsEncrypt certificates

Feature request: currently, certificates can only be requested by exposing the HTTPS port to the public internet. We don't want to do this for obvious reasons, and we have been having our Let's Encrypt certificates generated for us by Traefik using DNS-01 challenges instead of HTTP challenges.

Docs here

Assuming you're using the ACME agent for LE cert creation, could we add this to the 1P connect server as an option too?

FEATURE REQUEST: Support for same architecture and platform spread as OP CLI

The OP CLI is available for a delightful variety of platforms and architectures.

Currently, the binary (and Docker container) for connect-api and connect-sync are Linux x86-64 only, and the containers don't support multi-arch. For many use cases, it would be convenient and useful to run these on either cloud ARM instances, or in personal environments on Raspberry Pis or similar.

Repeat crashing after "directory not empty" error

We're running about 10 Connect servers, and every few days they crash, and restarting them fails with errors pertaining to files left on the filesystem. We're running in Kubernetes, so the pod goes into CrashLoopBackOff until we manually delete it, at which point the files are cleared and it works again for a few days. Here's an example log:

onepassword-connect-obscured-id-1 connect-sync {"log_message":"(E) Server: (txFunc returned error), deleting obsoleted file : remove /home/opuser/.op/data/files: directory not empty","timestamp":"2021-11-30T20:46:20.902654958Z","level":1}
onepassword-connect-obscured-id-1 connect-sync {"log_message":"(E) Server: (failed to vaultGrp.Wait), Wrapped: (failed to db.Do), Wrapped: (txFunc returned error), deleting obsoleted file : remove /home/opuser/.op/data/files: directory not empty","timestamp":"2021-11-30T20:46:20.902763096Z","level":1}
onepassword-connect-obscured-id-1 connect-sync {"log_message":"(E) Server: (txFunc returned error), deleting obsoleted file : remove /home/opuser/.op/data/files: directory not empty","timestamp":"2021-11-30T20:46:21.0361941Z","level":1}
onepassword-connect-obscured-id-1 connect-sync {"log_message":"(E) Server: (failed to vaultGrp.Wait), Wrapped: (failed to db.Do), Wrapped: (txFunc returned error), deleting obsoleted file : remove /home/opuser/.op/data/files: directory not empty","timestamp":"2021-11-30T20:46:21.03626576Z","level":1}
onepassword-connect-obscured-id-1 connect-sync {"log_message":"(E) failed to sync after re-authenticating, will await next notification","timestamp":"2021-11-30T20:46:21.036295745Z","level":1}

/files is not supported in connect-api:1.2.0

Current behavior

  • The item has files in it
  • {{baseUrl}}/vaults/:vaultUuid/items/:itemUuid/files returns 404, and nothing is logged on the connect-api side
  • {{baseUrl}}/vaults/:vaultUuid/items/:itemUuid with the same IDs works just fine and is logged

Expected behavior

{{baseUrl}}/vaults/:vaultUuid/items/:itemUuid/files returns a list of files in an item.

Context

Image: connect-api:1.2.0

Invalid bearer token warning messages

I'm getting the following warning message in the connect logs:

> kubectl logs onepassword-connect-<id>
...
{"log_message":"(W) Server: (failed to createVerifiedAccess), Authentication: (Invalid bearer token), square/go-jose/jwt: validation failed, token is expired (exp)","timestamp":"<timestamp>","level":2,"scope":{"request_id":"<request-id>","jti":"<jti>"}}
...

It seems to still be working with the Connect operator, but these warnings are concerning me. What are they about? Can I resolve this?

JWT token is invalid base64

Hi

Not sure if this is the right place, but if not you may be able to point me to the right place.

I'm trying to set up Connect in K3s using the official 1Password Helm chart. My connect-sync container is complaining about the auth:

{"log_message":"(E) Server: (unable to get credentials and initialize API, retrying in 30s), Wrapped: (failed to FindCredentialsUniqueKey), Wrapped: (failed to loadCredentialsFile), Wrapped: (LoadLocalAuthV2 failed to credentialsDataFromBase64), illegal base64 data at input byte 0","timestamp":"2021-10-22T10:00:01.61311251Z","level":1}

This is the part of the values I use for the token in the operator:

      token:
        name: onepassword-token
        key: token

I created the appropriate secret like this:

kubectl -n kube-system create secret generic onepassword-token --from-literal=token=<TOKEN>

I created the token and the 1password-credentials.json on the 1Password website. I think the token I got is wrong. I tried to decode it:

echo "<TOKEN>" | base64 -d
Illegal character '.' in input file.
{"alg":"ES256","kid":"avcxxxxxxxxxxxxxxxxxxxxa","typ":"JWT

The token I got from the website has two dots (.) in it, which is not valid base64. I tried it without the dots, but this doesn't work either. I also created a new token; same problem.
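For what it's worth, a JWT is three base64url-encoded segments separated by dots, so the token as a whole is not expected to decode as base64 and should be stored in the secret verbatim. To peek at just the header segment (a rough check; base64url padding may need adjusting):

# decode only the first dot-separated segment (the JWT header)
echo "<TOKEN>" | cut -d. -f1 | base64 -d; echo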

TLS failing

I am running the Connect containers in AWS Fargate and putting the certificate and key on a shared volume. I have verified that the certificate is valid, but I am getting an error on connect-api container startup:

Error: Server: (failed to setupServer), Wrapped: (TLS configuration), load certificate+key: tls: failed to find any PEM data in certificate input
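One way to rule out a truncated or mis-mounted file is to inspect what actually lands on the shared volume (paths are illustrative):

# the file the container reads should start with a PEM header
head -n 1 /path/to/shared/certificate.pem    # expect "-----BEGIN CERTIFICATE-----"
openssl x509 -in /path/to/shared/certificate.pem -noout -subject -dates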

Any guidance on resolving the issue is appreciated.

ECS Example: Add some checks of Parameters to conditionally create new infrastructure

From @verkaufer #3 (comment)

We can do some contrived checks using Conditions and Parameters to create new resources if the parameters are some empty value:

Parameters:
  ExistingVPC: 
    Description: An existing VPC where the cluster will run (optional).
    Default: NONE
    Type: String
    
Conditions: 
  NewVPC: !Equals [!Ref ExistingVPC, NONE]
 
Resources:
  DefaultVPC:
    Type: AWS::EC2::VPC
    Properties:
      EnableDnsSupport: true
      EnableDnsHostnames: true
      CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !If [NewVPC, !Ref DefaultVPC, !Ref ExistingVPC] # use the NewVPC definition if user did not provide ExistingVPC param

Health Check for docker-compose best practice?

Hi, I wanted to add a health check for connect-api but there's no sh or bash on the image:

OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown

And docker health checks largely rely on running a command, primarily curl.

I was just going to extend the base image to have net-tools but wanted to check if I was missing something obvious first.
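In case it helps, the Connect API exposes a heartbeat endpoint, so it can at least be probed from the host or from a sidecar that has curl (default 8080:8080 port mapping assumed; check the endpoint against your server version):

# simple liveness probe from outside the container
curl -fsS http://localhost:8080/heartbeat && echo "connect-api is up"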

I realise this is a docs repo, but didn't know where else to ask. Either way it would be good to document how to have a good local development workflow with 1password connect. I strung something workable together but little things like this make me wonder if work has already been done that I'm not aware of.
