
Trino Community Kubernetes Helm Charts


Fast distributed SQL query engine for big data analytics that helps you explore your data universe

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add trino https://trinodb.github.io/charts/

You can then run helm search repo trino to see the charts.

Then you can install the chart using:

helm install my-trino trino/trino --version 0.25.0

Also, you can check the manifests using:

helm template my-trino trino/trino --namespace <YOUR_NAMESPACE>

Documentation

You can find documentation about the chart here.

Development

To test the chart, install it into a Kubernetes cluster. Use kind to create a Kubernetes cluster running in a container, and chart-testing to install the chart and run tests.

brew install helm kind chart-testing
kind create cluster
ct install

To run tests with specific values:

ct install --helm-extra-set-args "--set image.tag=450"

Use the test.sh script to run a suite of tests, with different chart values. If some of the tests fail, use the -s flag to skip cleanup and inspect the resources installed in the Kubernetes cluster. Use -n to use a specific namespace, not a randomly generated one. Use -t to run only selected tests. See the command help (-h) for a list of available tests.

Example:

./test.sh -n trino -s -t default

The documentation is automatically generated from the chart files. Install a git hook to have it automatically updated when committing changes. Make sure you install the pre-commit binary, then run:

pre-commit install
pre-commit install-hooks

Contributors

arhimondr, dragonslayer27, epicsteve2, florianmalbranque, habc12, hashhar, heitorrbarros, huw0, janwar73, joshthoward, kempspo, koh-satoh-wpg, linzebing, littlewat, losipiuk, mjpsyapse, mosabua, nineinchnick, oasisvali, p5, posulliv, przemekak, radek-kondziolka, ryan0x44, senorsen, soliverr, uzzz, wendigo, yanyixing, zltyfsh


Issues

Not possible to clear securityContext.runAsUser or securityContext.runAsGroup which prevents using the Chart out-of-the-box with OpenShift

Due to how OpenShift handles UIDs for containers by default, it is best if we do not actually set anything for securityContext.runAsUser or securityContext.runAsGroup since the assigned number range can be dynamic per namespace and you will get a lot of errors unless you jump through all of the hoops to create a custom Security Context Constraint.

Unfortunately, due to a bug in Helm (helm/helm#12488), if your deployment process uses the -f values.yaml style of syntax (such as with ArgoCD), it is not possible to wipe out these values with null — they will always be set to the default after the values merge.

Example values file in my own deployment (assuming the dependent trino chart has a local alias of trino in my chart)

trino:
  securityContext:
    runAsUser: null # purposefully empty
    runAsGroup: null # purposefully empty

After applying my values file, the resulting Deployments will still carry the default from the Trino Chart's default values.yaml like this:

...
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
...

Then, when this is deployed to OpenShift, the Deployments are blocked from being created and generate a flood of events complaining that runAsUser etc. is not within the allowed range.

I am not sure what the "right" approach is for this, and the Helm bug is a bit of a shame. For now, my workaround is that I have manually downloaded a specific version of the chart into my own repository under charts/trino/ and commented out both runAsUser and runAsGroup in the default values file.

Maybe a good compromise would be to map in the entire securityContext from a user's own values file (i.e. pass through whatever object the user provides) without setting any properties underneath it by default? The downside is that this might be a breaking change for some users when they pick up the chart version that implements it (they might then need to explicitly set these values to 1000 in their own values files). If you want to avoid a breaking change, some kind of true/false flag controlling whether the defaults are applied, enabled by default, could work instead (though this sounds a bit messy and less fun to have lingering around).
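The pass-through compromise described above could be sketched as a hypothetical template change like this (not current chart code — the guard and indentation are assumptions): render the securityContext block only from whatever object the user supplies, and emit nothing when the value is empty.

```yaml
{{- with .Values.securityContext }}
securityContext:
  {{- toYaml . | nindent 8 }}
{{- end }}
```

With securityContext: {} (or the key removed entirely) in the user's values, no runAsUser/runAsGroup would be rendered, allowing OpenShift to inject its own dynamic UID/GID range.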

[Helm] add support for additionalVolumes

Hello,

I have been working on getting the JMX Java agent working with my Trino installation and ran into the following issue: for the agent to work, its .jar file needs to be present on the Trino machines, and right now the charts do not support additionalVolumes, so I cannot add this file as a volumeMount.

Is there an alternative way to do this, or does it depend on something like additionalVolumes being added to the chart?

Thank you.
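For reference, a values shape for the requested feature might look like the sketch below. Note that additionalVolumes and additionalVolumeMounts do not exist in the chart at the time of this issue, and the volume source and mount path are illustrative only:

```yaml
coordinator:
  additionalVolumes:              # hypothetical key, not a current chart value
    - name: jmx-agent
      persistentVolumeClaim:
        claimName: jmx-agent-jar  # illustrative source holding the agent .jar
  additionalVolumeMounts:         # hypothetical key, not a current chart value
    - name: jmx-agent
      mountPath: /opt/jmx-agent
```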

Cannot overwrite table with non table for trino.coordinator.additionalJVMConfig and for trino.worker.additionalJVMConfig

Hi, I think the additionalJVMConfig parameter in values.yaml should default to additionalJVMConfig: [] instead of additionalJVMConfig: {}, which produces the following error when running helm install or helm upgrade.

Values I am using for the additionalJVMConfig parameter:

coordinator:
  additionalJVMConfig:
    - -Djava.security.auth.login.config=/etc/trino/config-files/conf.jaas
    - -Djava.security.krb5.conf=/etc/trino/config-files/krb5.conf

Error I am getting:

coalesce.go:286: warning: cannot overwrite table with non table for trino.coordinator.additionalJVMConfig (map[])

Result: despite the above message, the ConfigMap is created properly.


dynamic catalog.management

Currently it is not possible to easily use dynamic catalog management because ConfigMaps are mounted as ReadOnly into the coordinator Pod.

It would be nice having a way to work around that, do you have any ideas? Creating an EmptyDir and then moving the files from the current ConfigMap there?
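The EmptyDir idea from the question could look roughly like this pod-spec fragment (everything here is a sketch — the chart does not currently expose these hooks, and all names are made up): an init container copies the read-only ConfigMap contents into a writable emptyDir, and the coordinator then mounts the emptyDir at the catalog path instead of the ConfigMap.

```yaml
initContainers:
  - name: copy-catalogs
    image: busybox
    command: ["sh", "-c", "cp /catalog-ro/* /catalog/"]
    volumeMounts:
      - name: catalogs-configmap   # the existing read-only ConfigMap mount
        mountPath: /catalog-ro
      - name: catalogs-writable    # writable copy for dynamic catalog management
        mountPath: /catalog
volumes:
  - name: catalogs-writable
    emptyDir: {}
```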

Cannot run older Trino versions with chart version trino-0.14.0

I am a committer on the Apache Pulsar project, and we have created a connector based on the 368 version of Trino. Now we are unable to deploy it using the latest version of the Trino helm chart.

The Trino Helm Version

helm list -A
NAME                     	NAMESPACE     	REVISION	UPDATED                                	STATUS  	CHART                          	APP VERSION        
trino-cluster            	trino         	1       	2023-12-12 09:09:13.0732119 -0800 PST  	deployed	trino-0.14.0                   	432      

Steps to Reproduce

helm install -f 368.yaml $RELEASE_NAME trino/trino --namespace $TRINO_K8S_NS --create-namespace

Contents of the 368.yaml file:

image:
  tag: "368"
  pullPolicy: "Always"

Observed Error Behavior in the Worker Pods

Step1: Confirm that image tagged 368 is downloaded

kubectl describe <worker pod>

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  23s                default-scheduler  Successfully assigned trino/trino-cluster-worker-c4d5bbcc8-xflxh to k8s-node00
  Normal   Pulled     19s                kubelet            Successfully pulled image "trinodb/trino:368" in 1.254727643s (2.728144379s including waiting)
  Normal   Pulling    18s (x2 over 22s)  kubelet            Pulling image "trinodb/trino:368"
  Normal   Pulled     16s                kubelet            Successfully pulled image "trinodb/trino:368" in 1.299575889s (2.112766319s including waiting)
  Normal   Created    16s (x2 over 19s)  kubelet            Created container trino-worker
  Normal   Started    16s (x2 over 19s)  kubelet            Started container trino-worker
  Warning  BackOff    12s (x4 over 15s)  kubelet            Back-off restarting failed container trino-worker in pod trino-cluster-worker-c4d5bbcc8-xflxh_trino(ecb3e37d-67b2-4199-b6b7-94c0da17b5a7)

Step 2: Check the pod logs for the error

kubectl logs <worker pod>

+ set +e
+ grep -s -q node.id /etc/trino/node.properties
+ NODE_ID_EXISTS=1
+ set -e
+ NODE_ID=
+ [[ 1 != 0 ]]
+ NODE_ID=-Dnode.id=trino-cluster-worker-59bff658d4-94mf8
+ exec /usr/lib/trino/bin/launcher run --etc-dir /etc/trino -Dnode.id=trino-cluster-worker-59bff658d4-94mf8
Error occurred during initialization of VM
Could not find agent library /usr/lib/trino/bin/libjvmkill.so in absolute path, with error: /usr/lib/trino/bin/libjvmkill.so: cannot open shared object file: No such file or directory

FWIW, it appears that starting with Trino version 412, there is a step that explicitly copies this file into the image at the /usr/lib/trino/bin folder.

Random S3 permission-related issues with Trino on EKS

We are running Trino version 432 on a Kubernetes EKS cluster managed by AWS, using the Trino Helm chart version 0.18. We are experiencing random "Access Denied" errors when querying data on S3 buckets.

The error we receive is similar to the following:

Error running query: TrinoExternalError(type=EXTERNAL, name=HIVE_CANNOT_OPEN_SPLIT, message="Error opening Hive split s3://S3BUCKETNAME/S3FILE.parquet (offset=33554432, length=33554432): Read 49152 tail bytes of s3://S3BUCKETNAME/S3FILE.parquet failed: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 11111111111111; S3 Extended Request ID: 111111/222222222+3333/4444444444+555; Proxy: null), S3 Extended Request ID: 111111/222222222+3333/4444444444+555 (Bucket: S3BUCKETNAME, Key: S3FILE.parquet)", query_id=20240522_111111_222_3333)

This error occurs randomly and affects all queries and users simultaneously. The issue is not specific to any particular S3 bucket or table, as it affects queries for all files stored in different S3 buckets.

Workarounds and Observations

  • The same query will often succeed after 4-5 attempts.
  • If the query continues to fail, deleting and recreating the Trino pods resolves the issue temporarily.
  • This issue did not occur with our previous setup using Presto 359 on EC2.

Configuration

We have configured an IAM role with the necessary S3 permissions and assigned it to the Trino pods through a ServiceAccount annotation in the Helm chart:

serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWSACCOUNT:role/TRINOROLE

I think it's safe to rule out issues related to S3 API quota limits and file modifications during querying, as the files are still intact and there are no connectivity problems. I suspect the issue may be related to certain Trino pods failing to assume the required IAM role.

Any suggestions on how to ensure that the pods consistently assume the correct IAM role? Any insights or recommendations to resolve this issue would be greatly appreciated.
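One thing worth checking, offered as a guess rather than a confirmed fix: with IRSA, intermittent credential failures are sometimes related to STS endpoint resolution, and the regional STS endpoint can be forced with an extra annotation on the ServiceAccount:

```yaml
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWSACCOUNT:role/TRINOROLE
    # Guess: route token exchange to the regional STS endpoint instead of
    # the global one, which can reduce throttling/latency-related failures.
    eks.amazonaws.com/sts-regional-endpoints: "true"
```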

SSLHandshake exception

Hey,

I'm trying to secure our installation with an HTTPS certificate and internal TLS so that I can use LDAP for authentication. However, I run into the error below.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

This occurs when I connect via 'https://internal-fqdn'. Our organization has a wildcard certificate issued by GoDaddy, which has been added to the cluster as a secret. The certificate is terminated at the ingress, with the backend protocol set to HTTPS.

If I visit the site in a web browser, it reports that the certificate is present and valid. However, if I connect via the Trino CLI with the command ./trino.jar https://internal-fqdn and then run show catalogs;, the error appears. If I remove TLS and connect via plain HTTP, the error does not occur. Any suggestions?

For context, I have the following configuration in our Helm values file as well:

additionalConfigProperties:
  # To allow the certificate to be terminated at the ingress
  - http-server.process-forwarded=true
  # Required for the workers and coordinator to encrypt traffic between each other
  - internal-communication.shared-secret={redacted secret phrase}
  - internal-communication.https.required=true
  # Not needed according to https://trino.io/docs/current/security/tls.html#https-secure-directly:~:text=This%20is%20why%20you%20do%20not%20need%20to%20configure%20http%2Dserver.https.enabled%3Dtrue
  # - http-server.https.enabled=true
  # - http-server.https.port=8443

Support namespace specification

Currently, the namespace option is ignored by the helm chart.

helm template . --namespace <my_namespace>

I want to include the namespace in the output resources and would be glad if this were supported.

OAUTH2 and PASSWORD authentication - some resources not created

Hello

I would like to enable both OAUTH2 and PASSWORD authentication.
I'm using Helm chart version 0.17.0 and have the following custom values:

server:
  config:
    authenticationType: OAUTH2,PASSWORD

auth:
  passwordAuthSecret: "trino-password-authentication"

When I do a helm template with the custom values, the section 'password-authenticator.properties' is not created in the configmap trino-coordinator, and the password-volume is not created in the deployment trino-coordinator. I think that it's a bug.

Thanks in advance for your help
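Until the templating is fixed, one possible interim workaround is to ship the missing file through the chart's generic extra-files mechanism. The sketch below assumes additionalConfigFiles exists and renders files under /etc/trino — treat the exact key placement and paths as unverified assumptions:

```yaml
coordinator:
  additionalConfigFiles:   # assumption: renders extra config files for the coordinator
    password-authenticator.properties: |
      password-authenticator.name=file
      file.password-file=/etc/trino/auth/password.db
```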

Setting `additionalLogProperties` gives error

Hello,

I was trying to set some logging properties with additionalLogProperties field, with something like

additionalLogProperties:
  - "log.path=var/log/server.log"

However, I got an error: java.lang.IllegalArgumentException: No enum constant io.airlift.log.Level./VAR/LOG/SERVER.LOG

After some trials, I found out that the logging properties may need to be set in config.properties instead of log.properties, which is supposed to contain only log level.

Could someone confirm that this is a valid issue, and whether a similar issue exists for the other additional* variables in the chart? Thanks in advance.

Cannot deploy multiple trino clusters in the same k8s namespace

Describe the bug

Not able to deploy multiple clusters using the Helm chart due to ConfigMap name conflicts (caused by this line, for example)

chart version: 0.20.0

Suggestion

Templating configmap objects name using predefined template in _helpers.tpl
e.g.
instead of trino-resource-groups-volume-coordinator
use trino-resource-groups-volume-{{ template "trino.coordinator" . }}

accessControl properties are only added to the coordinator

accessControl properties need to be on worker nodes to properly implement graceful shutdowns. Without them, the preStop API request will result in a 403. A workaround is to add the access control properties via additionalConfigFiles. I believe it would be helpful to add the properties to the worker configmap as well, so that they are present on both the coordinator and the worker nodes.

Some of my learnings are outlined on the Trino Slack here.
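The additionalConfigFiles workaround mentioned above might look roughly like this in values form (the key placement, file name, and rules path are assumptions, not verified chart behavior):

```yaml
worker:
  additionalConfigFiles:   # assumption: renders extra config files for workers
    access-control.properties: |
      access-control.name=file
      security.config-file=/etc/trino/access-control/rules.json
```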

[DeltaLake] File System Cache

Hi everyone!

I'm currently trying to enable the File System Cache in my Trino cluster for the Delta Lake catalog. However, after going through the documentation, I'm struggling to find a straightforward or semantic method to enable it.

Enabling the cache in the file system seems to require mounting an emptyDir and configuring the options outlined in the documentation, which appears to be quite complex.

Additionally, emptyDirs are ephemeral and utilize node storage. To circumvent potential storage issues, I'm exploring workarounds such as mounting persistent volumes.

Has anyone encountered similar challenges or successfully activated the Delta Lake cache in Trino using this method?
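Without a dedicated chart option, the pieces have to be wired manually. As a hedged sketch of the catalog side (the catalog name, metastore URI, and the use of the additionalCatalogs mechanism are all illustrative; the fs.cache properties come from the Trino file system cache documentation):

```yaml
additionalCatalogs:
  delta: |
    connector.name=delta_lake
    hive.metastore.uri=thrift://example-metastore:9083   # illustrative
    fs.cache.enabled=true
    fs.cache.directories=/data/trino/cache
```

The /data/trino/cache mount itself (an emptyDir or an ephemeral PVC, as discussed above) still has to be added to the pod spec separately.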

Unable to set password auth in Kubernetes installation, already set HTTPS and shared secret

Hello, I have already set up HTTPS behind an ingress with a certificate, enabled the shared secret, and configured authentication type PASSWORD, but I am still unable to get the password file configuration working.
This is my current values.yaml:

image:
  tag: ""
server:
  workers: 3
  config:
    https:
      enabled: true
      port: 8443
      keystore:
        path: ""
    authenticationType: "PASSWORD"
auth:
  passwordAuth: "XXXX:XXXX"
  refreshPeriod: 1m
additionalConfigProperties:
    - "internal-communication.shared-secret=XXXX"
  
coordinator:
  secretMounts:
    - name: trino-password-authentication
      secretName: trino-file-authentication
      path: /etc/trino/auth
  jvm:
    maxHeapSize: "8G"
worker:
  jvm:
    maxHeapSize: "8G"
service:
  type: "NodePort"
  port: 8080 
ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: "nginx"
  hosts:
  - host: somedomain.com
    paths:
       - backend:
          service:
            name: trino-cluster
            port:
              number: 8080
         path: /
         pathType: Prefix
  - http:
    paths:   
       - backend:
          service:
            name: trino-cluster
            port:
              number: 8080
         path: /*
         pathType: ImplementationSpecific
  tls:
  - hosts:
    -  somedomain.com
    secretName: secret-tls

I get no errors; password authentication is simply not enabled on the login screen. Of course, XXXX are the actual user and password.
To create the password file, I followed this procedure:

Creating a password file

Password files utilizing the bcrypt format can be created using the htpasswd utility from the Apache HTTP Server. The cost must be specified, as Trino enforces a higher minimum cost than the default.

Create an empty password file to get started:

touch password.db

Add or update the password for the user test:

htpasswd -B -C 10 password.db test

For the shared secret, I used:

openssl rand 512 | base64

Support annotations for the Service resource

Edit: Created a PR: #149

When using aws-load-balancer-controller, it is useful to pass annotations in the Service ( https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/templates/service.yaml ) to denote what properties the load balancer could have.

Proposed solution:
From https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/values.yaml#L226

service:
  type: ClusterIP
  port: 8080
  annotations: {} # <--------- this field is new

From https://github.com/trinodb/charts/blob/trino-0.19.0/charts/trino/templates/service.yaml

metadata:
  name: {{ template "trino.fullname" . }}
  labels:
    app: {{ template "trino.name" . }}
    chart: {{ template "trino.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  #######
  # This section is new
  #######
  {{- with .Values.service.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  #######
  # /This section is new
  #######
spec:

I could make a pull request if you'd like.

enabling OAuth crashes Trino at startup

I wanted to enable OAuth, but Trino crashes on startup without any helpful error. Below is my chart config as well as the stack trace I get.
Enabling debug logs didn't show anything different.

The stack trace mentions 3 errors but doesn't show them.

When I remove the server authentication type, Trino starts up!

I tried with 436 (latest greatest) and 432 (latest chart default).

...
      - server:
          config:
            authenticationType: oauth2
      - additionalConfigProperties:
          - http-server.authentication.oauth2.issuer=https://foo.bar
          - http-server.authentication.oauth2.auth-url=https://foo.bar/oauth/authorize
          - http-server.authentication.oauth2.token-url=https://foo.bar/oauth/token
          - http-server.authentication.oauth2.jwks-url=https://foo.bar/oauth/discovery/keys
          - http-server.authentication.oauth2.userinfo-url=https://foo.bar/oauth/userinfo
          - http-server.authentication.oauth2.oidc.discovery=false
          - http-server.authentication.oauth2.client-id=42deadbeef
          - http-server.authentication.oauth2.client-secret=1337cafebabe
          - web-ui.authentication.type=oauth2
...
2024-01-22T17:45:54.343Z    INFO    main    Bootstrap    transaction.max-finishing-concurrency    1    1
    at ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:117)
    at ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:66)
    at InternalProviderInstanceBindingImpl$CyclicFactory.get(InternalProviderInstanceBindingImpl.java:164)
    at ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at SingletonScope$1.get(SingletonScope.java:169)
    at InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
    at FactoryProxy.get(FactoryProxy.java:60)
    at ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at SingletonScope$1.get(SingletonScope.java:169)
    at InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
    at SingleParameterInjector.inject(SingleParameterInjector.java:40)
    at SingleParameterInjector.getAll(SingleParameterInjector.java:60)
    at SingleMethodInjector.inject(SingleMethodInjector.java:84)
    at MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:146)
    at MembersInjectorImpl.injectAndNotify(MembersInjectorImpl.java:101)
    at Initializer$InjectableReference.get(Initializer.java:256)
    at Initializer.injectAll(Initializer.java:153)
    at InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:180)
    at InternalInjectorCreator.build(InternalInjectorCreator.java:113)
    at Guice.createInjector(Guice.java:87)
    at Bootstrap.initialize(Bootstrap.java:268)
    at Server.doStart(Server.java:135)
    at Server.lambda$start$0(Server.java:91)
    at io.trino.$gen.Trino_432____20240122_174552_1.run(Unknown Source)
    at Server.start(Server.java:91)
    at TrinoServer.main(TrinoServer.java:38)

3 errors
Where to Pass `trino.s3.credentials-provider` Parameter in Helm Chart?

I'm working with a Helm chart for deploying Trino and need to configure the trino.s3.credentials-provider parameter to use com.amazonaws.auth.InstanceProfileCredentialsProvider for S3 access.

Could someone please advise on where to specify the trino.s3.credentials-provider parameter in the Helm chart configuration?
Any assistance would be appreciated!
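A hedged answer based on general Trino/Hive connector behavior rather than anything chart-specific: trino.s3.credentials-provider is a Hadoop configuration property, not a catalog property, so it usually goes into an XML resource file that the catalog references via hive.config.resources. A sketch, with the catalog name, metastore URI, and file path all illustrative:

```yaml
additionalCatalogs:
  myhive: |
    connector.name=hive
    hive.metastore.uri=thrift://example-metastore:9083   # illustrative
    hive.config.resources=/etc/trino/s3-site.xml
```

Here /etc/trino/s3-site.xml would set trino.s3.credentials-provider to com.amazonaws.auth.InstanceProfileCredentialsProvider, and the file itself would have to be shipped into the pod, e.g. via a mounted ConfigMap or Secret.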

Conflicting secrets when specifying both `passwordAuthSecret` and authentication groups + using ExternalSecret operator

I've run into a problem when trying to implement combination of the following:
- managing the password.db file via an ExternalSecret (populated from Vault), which creates the secret that I pass as passwordAuthSecret in values.yaml
- implementing user groups

Without specifying auth: groups, everything is fine as the helm chart does not create a Secret on its own. In this case, we can use the Secret managed by ExternalSecret as passwordAuthSecret.

However, if there is a value for auth: groups, Helm attempts to create an additional Secret of the same name, causing a conflict. A volume called file-authentication-volume is created from the secret, which expects both password.db and group.db to exist in it. However, as the secret managed by the ExternalSecret operator takes precedence, only password.db is found.

As it doesn't make a lot of sense to manage user groups via Vault in the same way we manage passwords, I believe the best approach would be to split password.db and group.db into separate secrets, volumes and volumemounts. file.group-file and file.password-file would have to be adjusted accordingly as well in the coordinator's configmap.

Let me know if that sounds reasonable and if yes, I will create a PR. Thanks.

It's not immediately obvious how the `additional*Properties` are supposed to be set

In values.yaml, the default value for additionalProperties is {}, suggesting that a dict object is expected: https://github.com/trinodb/charts/blob/main/charts/trino/values.yaml#L199-L207. However, in the templates this value is processed using range (https://github.com/trinodb/charts/blob/main/charts/trino/templates/configmap-coordinator.yaml#L62-L64), suggesting a list of values. In theory, using the same approach as when configuring additionalCatalogs:

{{- range $catalogName, $catalogProperties := .Values.additionalCatalogs }}
{{ $catalogName }}.properties: |
{{- $catalogProperties | nindent 4 }}
{{- end }}

would make this work:

additionalConfigProperties:
    new-properties: |
      internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}
      http-server.process-forwarded=true
    more-properties: |
      web-ui.authentication.type=oauth2

However, the template is missing correct indentation (nindent 4). So instead one has to set the properties as a list:

  additionalConfigProperties:
    - "internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}"
    - "http-server.process-forwarded=true"
    - "web-ui.authentication.type=oauth2"

To make configuration more consistent I would suggest to either:

  1. change the type of additionalConfigProperties to string so it can be configured as follows:
      additionalConfigProperties: |
        internal-communication.shared-secret=${ENV:TRINO_SHARED_SECRET}
        http-server.process-forwarded=true
        web-ui.authentication.type=oauth2
  2. Keep the type as-is, but fix the template to include nindent 4.

Personally, I'd prefer 1, since it's more obvious how the string settings correspond to the Trino properties. In option 2, the nested new-properties and more-properties keys don't serve a purpose anyway.

Support to securityContext.fsGroup to override the owner of mounted volumes

Hi there!

I recently attached SSDs for caching purposes and enabled them by mounting the volume at /data/trino/cache. However, upon doing so, I encountered the following exception from Alluxio:

IllegalArgumentException: Cannot write to cache directory /data/trino/cache.

After some investigation, I suspect that I need to specify the fsGroup to 1000 in pod.spec.securityContext. Currently, the chart supports runAsGroup and runAsUser in securityContext:

...
     {{- with .Values.securityContext }}
     securityContext:
        runAsUser: {{ .runAsUser }}
        runAsGroup: {{ .runAsGroup }}
     {{- end }}
...

The volume declared in the pod spec:

...
    - name: ebs-cache-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            volumeMode: Filesystem
...

Could you please confirm if adding fsGroup: 1000 to the pod.spec.securityContext would resolve this issue? If not, any guidance on how to properly configure the security context for SSD caching would be greatly appreciated.

Thanks in advance for your help!
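For what it's worth: fsGroup: 1000 in pod.spec.securityContext should indeed make Kubernetes change group ownership of volumes that support ownership management (such as the ephemeral PVC shown above). Supporting it would be a small extension of the template quoted earlier, sketched here as hypothetical chart code:

```yaml
{{- with .Values.securityContext }}
securityContext:
  runAsUser: {{ .runAsUser }}
  runAsGroup: {{ .runAsGroup }}
  {{- if .fsGroup }}
  fsGroup: {{ .fsGroup }}
  {{- end }}
{{- end }}
```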

UncheckedIOException: Failed to bind to /0.0.0.0:8443 when using HTTPS behind an Ingress

Summary

Cannot use TLS encryption between the Ingress controller and the Service.

Steps to reproduce

server:
  config:
    https:
      enabled: true
service: 
  type: ClusterIP
  port: 8443
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS

Expected result

The service created would be mapped to 8443 port of the pods, thus facilitating the TLS encryption between the Ingress controller and the Trino Coordinator Pod.

Actual result

The port 8443 is assigned to http-server.http.port, which makes the process attempt to listen on the same port twice, resulting in an exception:

UncheckedIOException: Failed to bind to /0.0.0.0:8443
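For contrast, the non-conflicting layout would keep the plain HTTP listener on its own port and add a dedicated HTTPS listener. A sketch of the properties the rendered coordinator config would need (illustrative — the exact values the chart renders may differ by version):

```
http-server.http.port=8080
http-server.https.enabled=true
http-server.https.port=8443
```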
