
ignition-docker's Introduction

Ignition on Docker - Community Image

Current versions: Ignition 8.1.38 and Ignition 7.9.21

This is the Git repository for the Unofficial/Community Ignition Docker image. It includes a docker-bake.hcl alongside the Dockerfile entries to allow for easy image building. See the Docker Hub page for the full README on how to utilize this Docker image and for information on contributing and reporting issues.

The full README documentation from the Docker Hub page is maintained in the docs folder, in case you find typos or areas that need more detail or content.

NOTE: Inductive Automation maintains the official Ignition image on Docker Hub for the 8.1+ series of releases. Check out that page if you're interested in running your containers in production; it is also where to go for nightly images.

Building the Image

When updated, this image is pushed to Docker Hub via GitHub Actions. To customize and build your own version of the image, use the instructions in this section.

Available Build Targets

The image build leverages docker buildx bake to provide targets for each of the available permutations of the image.

Target      Description
8_1-full    Latest 8.1.x image (Standard, Edge, and Maker editions)
8_1-slim    Same as 8_1-full, but without Launchers and Perspective Workstation
8_1         Group target covering both 8_1-full and 8_1-slim
7_9-full    Latest 7.9.x image (Standard Edition)
7_9-edge    Latest 7.9.x image (Edge Edition)

By default, the BASE_IMAGE_NAME variable refers to localhost:5000/kcollins/ignition. You can override the base image name by setting this environment variable prior to running the various docker buildx bake commands in the next sections.
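For example (the registry/namespace below is purely illustrative):

export BASE_IMAGE_NAME=docker.io/myuser/ignition
docker buildx bake --push 8_1-full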

Single Architecture Local Builds

If you want to build an image for your local Docker installation (without a registry), you can build a target with the following:

docker buildx bake --load --set \*.platform=linux/arm64 8_1-full

Note that you must specify a platform in this mode (linux/amd64, linux/arm64, or linux/arm) since manifests are not supported with exporting directly to Docker Engine.

Multi Architecture Builds

If you want to build a multi-platform image+manifest, you must have a registry to target. You can build a target with the following:

docker buildx bake --push 8_1-full

ignition-docker's People

Contributors

dependabot[bot], eskemojoe007, thirdgen88


ignition-docker's Issues

Problem with mounting volumes

I am trying to install your container on a Synology NAS (DS918+ with Intel CPU) within Docker application.

Everything seems fine except volume mounting, probably due to different user rights between the NAS directory and your container.
While launching, the container stops with an error because it cannot write to the associated volume.
The "admin" user in Synology has uid=1024 and gid=100, while in your container the "ignition" user has uid=999 and gid=999.
I tried making the \docker\ignition NAS folder (mapped to /var/lib/ignition) accessible to "everyone" with full rights, but with no success.


I have seen that other containers have proper environment variables (like PUID or PGID) to avoid this kind of problem.
Do you have a solution for this issue?

Thanks in advance (and thanks for your wonderful work!)

Failure registering modules in 8.1.0 image on fresh launch

With the new commissioning workflow, one aspect that doesn't work properly is registering modules against a fresh gateway. It fails due to a missing (not yet created) config.idb. We need to defer registration of modules until after initial provisioning is completed.

Current workaround is to omit the mounting of /modules on first-launch. Alternatively, you could also bind-mount a gateway backup to be restored on first-launch as well.

Gateway Network Certificates maintained outside of data volume

I realized recently that the Gateway Network Certificate, used by a given gateway when making an outbound connection to another gateway in the gateway network, is maintained outside of the data volume within the image. Specifically, it lives within the metro-keystore at /usr/local/share/ignition/webserver.

Suggest integrating functionality to preserve the certificate to the data volume on initial startup and re-integrate it (using alias metro-key, like the default one) into the keystore via the entrypoint. This will allow fresh containers (with gateway state still preserved via a volume against /var/lib/ignition/data) to retain the existing gateway certificate.
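One possible shape for that re-integration, sketched with keytool (the destination path under the data volume is hypothetical; the keystore password metro comes from the listing below):

keytool -importkeystore \
  -srckeystore /usr/local/share/ignition/webserver/metro-keystore -srcstorepass metro -srcalias metro-key \
  -destkeystore /var/lib/ignition/data/local/metro-keystore -deststorepass metro -destalias metro-key

The entrypoint could run the copy in one direction on first launch (preserve) and the reverse on subsequent launches (restore).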

Below is a listing of an example certificate, viewed using keytool -list -v -keystore webserver/metro-keystore (password metro):

ignition@242d69abff74:/usr/local/share/ignition$ keytool -list -v -keystore webserver/metro-keystore
Enter keystore password:
Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: metro-key
Creation date: Oct 15, 2020
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=242d69abff74:8060, OU=GatewayNetwork, O=Inductive Automation, L=Folsom, ST=CA, C=US
Issuer: CN=242d69abff74:8060, OU=GatewayNetwork, O=Inductive Automation, L=Folsom, ST=CA, C=US
Serial number: 372d83378f19f9a3
Valid from: Thu Oct 15 01:56:15 UTC 2020 until: Sun Oct 13 01:56:15 UTC 2030
Certificate fingerprints:
	 SHA1: EC:E3:15:7A:36:3A:B5:7D:28:BE:FE:CB:93:6F:47:79:A9:8A:1B:49
	 SHA256: E8:9F:68:81:EC:12:34:3B:10:A0:82:D3:B9:13:EA:5B:E8:CB:E3:A8:70:0D:5A:C8:55:0B:90:35:64:CB:E6:20
Signature algorithm name: SHA256withRSA
Subject Public Key Algorithm: 1024-bit RSA key
Version: 3

Extensions:

#1: ObjectId: 2.5.29.35 Criticality=false
AuthorityKeyIdentifier [
KeyIdentifier [
0000: 07 63 39 48 17 D0 84 89   0C E4 7D F2 4C 08 52 BD  .c9H........L.R.
0010: B0 EE C4 A5                                        ....
]
]

#2: ObjectId: 2.5.29.37 Criticality=false
ExtendedKeyUsages [
  clientAuth
  serverAuth
]

#3: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
  DigitalSignature
  Non_repudiation
  Key_Encipherment
  Data_Encipherment
  Key_CertSign
]

#4: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  IPAddress: 172.17.0.2
  DNSName: 242d69abff74
  URIName: uri://242d69abff74/metro
]

#5: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 30 81 9F 30 0D 06 09 2A   86 48 86 F7 0D 01 01 01  0..0...*.H......
0010: 05 00 03 81 8D 00 30 81   89 02 81 81 00 9B 59 50  ......0.......YP
0020: DB 81 3D 72 55 99 07 42   48 C0 31 4B E5 DE 45 EB  ..=rU..BH.1K..E.
0030: BD 5F A2 B1 08 AE 8C DA   0F A5 F5 45 9D E6 8B 4C  ._.........E...L
0040: 35 CD 6D A3 E2 7D F0 B1   EF CB BE 83 46 43 18 51  5.m.........FC.Q
0050: 6B 3B EB DC D1 7E 56 BE   2F F7 70 90 18 CA 55 C8  k;....V./.p...U.
0060: 04 04 9E 62 70 02 2A 85   7C D7 D3 29 95 22 E8 40  ...bp.*....).".@
0070: 68 2A E0 0A 64 89 BE 72   D5 3A 21 C3 B4 CD A4 26  h*..d..r.:!....&
0080: A0 7A C1 E2 3D 2F 59 2B   FE 81 6D 90 03 D0 A6 95  .z..=/Y+..m.....
0090: 11 12 74 2D 7A 71 3D 09   2E 12 8F B5 C9 02 03 01  ..t-zq=.........
00A0: 00 01                                              ..
]
]



*******************************************
*******************************************

Things to consider during implementation:

  • When, and under what conditions to perform the update to metro-keystore
  • Do we expose configuration environment variables for driving custom creation of a Gateway Network certificate? For characteristics like SAN definitions (IPAddress, DNSName, URIName)? Also Owner/Issuer?
  • Do we need to omit the IPAddress in the SAN since it might be different on subsequent container launches?

Healthcheck needs a curl timeout

I was running an Ignition container and at some point the Ignition web server went unresponsive (I assume it has nothing to do with your Docker image). When I checked the processes running on the server, I saw several thousand curl and grep processes. These were the curl and grep processes from the health check. The curl command doesn't have a default timeout, so it was hanging indefinitely because the Ignition server didn't send a response. The grep was waiting for curl to complete before it stopped.

This build-up of previous health check processes could be avoided by using the --max-time option in curl and setting it less than or equal to the HEALTHCHECK --timeout value.
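For example, based on the HEALTHCHECK instruction already used by this image, the adjusted version might look like the following (3 seconds simply matches the existing --timeout value):

HEALTHCHECK --interval=10s --start-period=60s --timeout=3s \
    CMD curl --max-time 3 -f http://localhost:8088/StatusPing 2>&1 | grep RUNNING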

Thanks again for your work!

Gateway restores fail on Ignition 8 when volume mounted

TL;DR: When a docker volume is mounted at /var/lib/ignition then gateway restores fail on Ignition 8 containers. This can be fixed by making sure the Ignition temp directory is also under /var/lib/ignition.

Specifics:
When a docker volume is mounted at /var/lib/ignition then gateway restores fail with the following error on Ignition 8 containers:

com.inductiveautomation.ignition.gateway.localdb.DBStartupException: Error restoring internal database from backup.
jvm 2    |      at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl.setup(LocalDBManagerImpl.java:162)
jvm 2    |      at com.inductiveautomation.ignition.gateway.IgnitionGateway.startupInternal(IgnitionGateway.java:820)
jvm 2    |      at com.inductiveautomation.ignition.gateway.redundancy.RedundancyManagerImpl.startup(RedundancyManagerImpl.java:272)
jvm 2    |      at com.inductiveautomation.ignition.gateway.IgnitionGateway.initRedundancy(IgnitionGateway.java:664)
jvm 2    |      at com.inductiveautomation.ignition.gateway.IgnitionGateway.lambda$initInternal$0(IgnitionGateway.java:602)
jvm 2    |      at com.inductiveautomation.ignition.common.execution.impl.BasicExecutionEngine$ThrowableCatchingRunnable.run(BasicExecutionEngine.java:518)

I've found that the Ignition gateway restore operation fails if the Ignition temp directory is on a different file system than the Ignition data directory. A docker volume is mounted as a different file system than the rest of the container file system, and thus it can trigger this problem.

This problem of crossing file systems is almost certainly an Ignition bug, but probably is rarely seen in customer configurations.

Anyway, this problem pops up in the Ignition 8 containers but not the Ignition 7 containers. This is because the temp directory is under /var/lib/ignition in the Ignition 7 containers but isn't in Ignition 8 containers. The Ignition installer sets the temp directory to /var/lib/ignition/temp but since the Ignition 8 containers are created using the zip file instead of the installer, you have to manually set the temp directory to the right location.

The fix for this is changing the Ignition 8 Dockerfile from having this:

# Stage data and user-lib in var folder
RUN mkdir -p /var/lib/ignition && \
    mv data /var/lib/ignition/ && \
    mv user-lib /var/lib/ignition/ && \
    ln -s /var/lib/ignition/data data && \
    ln -s /var/lib/ignition/user-lib user-lib

to this

# Stage data and user-lib in var folder
RUN mkdir -p /var/lib/ignition && \
    mv data /var/lib/ignition/ && \
    mv user-lib /var/lib/ignition/ && \
    mv temp /var/lib/ignition/ && \
    ln -s /var/lib/ignition/data data && \
    ln -s /var/lib/ignition/user-lib user-lib  && \
    ln -s /var/lib/ignition/temp temp

Image ignition-8.0.6-synology

I have been testing this image for a while but it allways gives those messages on startup and then stop.

"Provisioning will be logged here: /usr/local/share/ignition/logs/provisioning.log"
"Waiting for commissioning servlet to become active..."
"Failed to detect RUNNING status during Commissioning Phase after 10 delay."

This is the first time i see something like this.

What can i do?

Thanks in advance.

Permission error when trying to setup container using docker compose

I am seeing a permission error appear when I am trying to bring up an ignition container using docker compose

/usr/local/bin/docker-entrypoint.sh: line 522: /usr/local/share/ignition/data/.docker-init-complete: Permission denied

I have tried a few things and I don't think it is an error in my docker-compose.yaml but I have attached it just in case
docker-compose.txt

Install Ignition without including all inductive modules

Currently, the Dockerfile installs Ignition along with all Inductive modules. Do you know of a way to selectively install only the modules I want?

The docs don't specify a way:
https://docs.inductiveautomation.com/display/DOC79/Linux+-+Install

./${INSTALLER_NAME} --username ${IGNITION_INSTALL_USERNAME} --unattendedmodeui none --mode unattended --edition ${IGNITION_EDITION} --prefix ${IGNITION_INSTALL_LOCATION} && \

Gateway from 7.9.16 tag does not start

It seems we released 7.9.16, at least the zip distributions, with an ignition.conf that has an absolute rather than relative wrapper.java.library.path value:

# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=/usr/local/bin/ignition/lib
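For reference, the relative form (which is presumably what a fixed release would ship, given the zip-based layout) would look like:

# Java Library Path (location of Wrapper.DLL or libwrapper.so)
wrapper.java.library.path.1=lib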

Ignition in a container based on the 7.9.16 image fails to start with this error:

Starting Ignition Gateway...
wrapper | --> Wrapper Started as Console
wrapper | Java Service Wrapper Standard Edition 64-bit 3.5.35
wrapper | Copyright (C) 1999-2018 Tanuki Software, Ltd. All Rights Reserved.
wrapper | http://wrapper.tanukisoftware.com
wrapper | Licensed to Inductive Automation for Inductive Automation
wrapper |
wrapper | Launching a JVM...
jvm 1 | WrapperManager: Initializing...
jvm 1 | WrapperManager:
jvm 1 | WrapperManager: ERROR - Unable to load the Wrapper's native library because none of the
jvm 1 | WrapperManager: following files:
jvm 1 | WrapperManager: libwrapper-linux-x86-64.so
jvm 1 | WrapperManager: libwrapper.so
jvm 1 | WrapperManager: could be located on the following java.library.path:
jvm 1 | WrapperManager: /usr/local/bin/ignition/lib
jvm 1 | WrapperManager: Please see the documentation for the wrapper.java.library.path
jvm 1 | WrapperManager: configuration property.
jvm 1 | WrapperManager:
wrapper | <-- Wrapper Stopped

This is presumably because the image uses /usr/local/share/ignition as its Ignition install directory.

I think we've got this fixed for a future 7.9.17 release, but that leaves 7.9.16 in a bit of a broken spot here.

Maker activation support

During first run, the Maker edition is not being activated and always shows 'Activate Ignition'.

I have regenerated the key and token and gone through docker-entrypoint.sh to force the commissioning phase.

Restore from /restore.gwbk not fulfilling commissioning

I've recently encountered an issue when attempting to restore a backup on first-launch (via a bind-mount to /restore.gwbk). When the gateway launches, it does restore the gateway, but stays at the commissioning phase asking for the edition selection. Once the selection (in my case the full edition) is made, the only other panel presented is the Start Gateway panel.

Perhaps we should consider refining the provisioning a little more and have it actually perform GETs to resolve the current phase (instead of hard-coding the sequencing and trying to determine all of the permutations).

Errors from module registration when `/modules` is empty

It looks like the recent breakout of the module/JDBC registration activities from the entrypoint now produces an error on startup of a gateway where /modules doesn't contain any *.modl files, e.g.:

gateway_1  | init     | Searching for third-party modules...
gateway_1  | init     | Linking Module: *.modl
gateway_1  | unzip:  cannot find or open /modules/*.modl, /modules/*.modl.zip or /modules/*.modl.ZIP.

Need to add nullglob option to those separate bash scripts.
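A minimal sketch of that shell-level fix (the symlink destination and loop body are illustrative, not the script's actual logic):

shopt -s nullglob
for module in /modules/*.modl; do
    # with nullglob set, an empty /modules folder yields zero iterations
    # instead of a literal "/modules/*.modl" argument
    ln -sf "${module}" "${IGNITION_INSTALL_LOCATION}/user-lib/modules/"
done
shopt -u nullglob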

docker-entrypoint.sh fails when using a bind mount volume

I have tested this out in a few different images but whenever you try to run a container with a host bind mount the docker-entrypoint.sh fails to complete.

If you run the following container:
docker run -P -e GATEWAY_ADMIN_PASSWORD=password -v /home/username/ignition_data:/var/lib/ignition kcollins/ignition:latest

Then it will start, create the folder, and then fail on line 447 of the entrypoint script with the following error:
/usr/local/bin/docker-entrypoint.sh: line 447: /usr/local/share/ignition/data/.docker-init-complete: No such file or directory

The purpose of doing this is that, starting in 8.0.13 with Perspective theming, there is a pretty constant need to open the theme files and edit them from your host. They are stored in /data/modules/com.inductiveautomation.perspective/themes, so instead of having to enter the CLI for the container, it would be much easier to access them directly from your host.

Permission Denied Error on Persistent Storage

Version: 8.1.4
Setup: Kubernetes k8s with Rancher 2.5.7 Frontend
Problem:
This might be a relatively niche setup, and I'm not sure if this applies to other Kubernetes setups, but I've spun up many different services and never had this type of problem, so it might be something to look at for future releases.
I tried running the Ignition Maker edition and kept getting a CrashLoopBackOff while trying to bind mount a directory from the node (I tried /var/lib/ignition/data and /data); the error I received was:

/usr/local/bin/docker-entrypoint.sh: line 286: /var/lib/ignition/data/.docker-init-complete: Permission denied

Looking through the issues here I found that I should create a persistent volume, but this gave the same error. I tried both HostPath volumes and Local volumes but didn't have any luck. I got a hint from another issue discussing permissions, so I tried changing the permissions on my local $PWD/ignition_data directory:

sudo chmod 777 ignition_data

After this, I tried mounting the volume at /var/lib/ignition/data and this time the Pod actually started, but it still kept crashing with this log output:

init     | Added Init Setting SystemName=Ignition-Maker
init     | Processing Module Enable/Disable...
  Disabling 'Web Browser Module.modl'
  Disabling 'SMS Notification-module.modl'
  Disabling 'BACnet Driver-module.modl'
  Disabling 'Serial Support Client-module.modl'
  Disabling 'Voice Notification-module.modl'
  Disabling 'Vision-module.modl'
  Disabling 'DNP3-Driver.modl'
  Disabling 'Enterprise Administration-module.modl'
  Disabling 'Symbol Factory-module.modl'
init     | Starting Ignition Gateway...
init     | Initiating commissioning helper functions...
FATAL  | wrapper  | Unable to open configuration file: data/ignition.conf (No such file or directory)
FATAL  | wrapper  |   Current working directory: /usr/local/share/ignition
FATAL  | wrapper  |   The Wrapper will stop.

So then I tried mounting to /data, and using a Persistent Volume with HostPath as the source, I finally got this running!

Here is a bit of the deployment configuration that Rancher creates. I removed a lot of the fluff and just tried to leave the important stuff. Again, I needed to chmod the ignition_data folder in order to get running.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignition-maker
spec:
  template:
    spec:
      containers:
      - env:
        - name: GATEWAY_SYSTEM_NAME
          value: Ignition-Maker
        - name: IGNITION_EDITION
          value: MAKER
        - name: GATEWAY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: GATEWAY_ADMIN_PASSWORD
              name: ignition-secrets
              optional: false
        - name: IGNITION_ACTIVATION_TOKEN
          valueFrom:
            secretKeyRef:
              key: IGNITION_ACTIVATION_TOKEN
              name: ignition-secrets
              optional: false
        - name: IGNITION_LICENSE_KEY
          valueFrom:
            secretKeyRef:
              key: IGNITION_LICENSE_KEY
              name: ignition-secrets
              optional: false
        image: kcollins/ignition:8.1.4
        imagePullPolicy: Always
        name: ignition-maker
        ports:
        - containerPort: 8088
          hostPort: 8088
          name: webgui
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: ignition-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: ignition-data
        persistentVolumeClaim:
          claimName: ignition-data

docker-entrypoint.sh not backwards compatible prior to 8.0.14

I pulled down the master branch and built an Ignition 8.0.13 image by providing the download URL in the Docker build arguments. Unfortunately, when I ran the 8.0.13 container, I got the following output and error:

Provisioning will be logged here: /usr/local/share/ignition/logs/provisioning.log
Waiting for commissioning servlet to become active...
Performing commissioning actions...
ERROR: Unexpected Response (400) during Commissioning phase: Edition Selection
HTTP/1.1 400 Bad Request
Referrer-Policy: strict-origin-when-cross-origin
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 410

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 Malformed body sent</title>
</head>
<body><h2>HTTP ERROR 400 Malformed body sent</h2>
<table>
<tr><th>URI:</th><td>/post-step</td></tr>
<tr><th>STATUS:</th><td>400</td></tr>
<tr><th>MESSAGE:</th><td>Malformed body sent</td></tr>
<tr><th>SERVLET:</th><td>CommissioningServlet</td></tr>
</table>

</body>
</html>

This is because versions prior to 8.0.14 didn't support the edition commissioning step. So it seems that all Ignition 8.0 versions prior to 8.0.14 won't work with the current docker-entrypoint.sh which performs that edition commissioning step.

I know you put a lot of work into getting 8.0.14 working and adding other new features, as I saw the significant changes to the docker-entrypoint.sh script. You also rolled out those changes very quickly. So I'm not complaining, I'm just making it clear that those changes aren't backwards compatible in case other people hit that problem too.

I think we can get them to be backwards compatible, if that's something you'd like. Either the entrypoint script can explicitly check the ignition version and behave differently based on that, or it can do some HTTP GET requests to determine what commissioning step is next and only apply steps that are expected.

If you'd like me to attempt to make that change and submit a pull request then I can do that. I just thought I'd report it first and allow you to decide what to do from here.

After the Pod comes up, Ignition is not coming up in the logs and the endpoint is not showing up

Ignition Edge version: 7.9.16

Deployment yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignitionedge
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ignitionedge
  template:
    metadata:
      labels:
        app: ignitionedge
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      restartPolicy: Always
      imagePullSecrets:
      containers:
      - name: "ignitionedge"
        image: "kcollins/ignition:7.9.16"
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 8088
        - containerPort: 8043
        - containerPort: 8060
        - containerPort: 8000
        volumeMounts:
        - mountPath: /var/lib/ignition/
          name: ignition-data
        resources:
            limits:
              cpu: 1500m
              memory: 4096Mi
            requests:
              cpu: 1200m
              memory: 1024Mi
      volumes:
      - name: ignition-data
        persistentVolumeClaim:
          claimName: ignition-data

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: ignitionedge
spec:
  type: ClusterIP

  externalIPs:
    - 10.201.10.145
  ports:
  - port: 8088
    targetPort: 8088
    protocol: TCP
    name: user-ui
  - port: 8043
    targetPort: 8043
    protocol: TCP
    name: ssl
  - port: 8060
    targetPort: 8060
    protocol: TCP
    name: metro-ssl
  - port: 8000
    targetPort: 8000
    protocol: TCP
    name: sip

  selector:
    app: ignitionedge

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  name: ignition-data
  namespace: longhorn-system
  labels:
    type: ignition-data-pv
spec:
  storageClassName: manual
  claimRef:
    name: ignition-data
    namespace: prepord
  volumeMode: Filesystem
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
      path: "/usr/local/ignition/"

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ignition-data
  labels:
    type: ignition-data-pvc
spec:
  storageClassName: manual
  resources:
    requests:
      storage: 5Gi
  accessModes:
    - ReadWriteMany

Error logs

Starting Ignition Gateway...
10/28/2020 12:19:43 PM wrapper  | --> Wrapper Started as Console
10/28/2020 12:19:43 PM wrapper  | Java Service Wrapper Standard Edition 64-bit 3.5.35
10/28/2020 12:19:43 PM wrapper  |   Copyright (C) 1999-2018 Tanuki Software, Ltd. All Rights Reserved.
10/28/2020 12:19:43 PM wrapper  |     http://wrapper.tanukisoftware.com
10/28/2020 12:19:43 PM wrapper  |   Licensed to Inductive Automation for Inductive Automation
10/28/2020 12:19:43 PM wrapper  |
10/28/2020 12:19:43 PM wrapper  | Launching a JVM...
10/28/2020 12:19:43 PM jvm 1    | WrapperManager: Initializing...
10/28/2020 12:19:43 PM jvm 1    | 17:19:41,977 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [data//logback.xml] at [file:/usr/local/share/ignition/data/logback.xml]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,037 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,041 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [SysoutAppender]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,049 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,089 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [com.inductiveautomation.logging.SQLiteAppender]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,096 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [DB]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,230 |-INFO in ch.qos.logback.core.db.DataSourceConnectionSource@153ef8cb - Driver name=SQLite JDBC
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,230 |-INFO in ch.qos.logback.core.db.DataSourceConnectionSource@153ef8cb - Driver version=3.23.1
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,230 |-INFO in ch.qos.logback.core.db.DataSourceConnectionSource@153ef8cb - supportsGetGeneratedKeys=true
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,268 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,270 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [SysoutAsync]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,270 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [SysoutAppender] to ch.qos.logback.classic.AsyncAppender[SysoutAsync]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,270 |-INFO in ch.qos.logback.classic.AsyncAppender[SysoutAsync] - Attaching appender named [SysoutAppender] to AsyncAppender.
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.classic.AsyncAppender[SysoutAsync] - Setting discardingThreshold to 51
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.classic.AsyncAppender]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [DBAsync]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [DB] to ch.qos.logback.classic.AsyncAppender[DBAsync]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.classic.AsyncAppender[DBAsync] - Attaching appender named [DB] to AsyncAppender.
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.classic.AsyncAppender[DBAsync] - Setting discardingThreshold to 51
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,271 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [SysoutAsync] to Logger[ROOT]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,272 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [DBAsync] to Logger[ROOT]
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,272 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
10/28/2020 12:19:43 PM jvm 1    | 17:19:42,273 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@534d6567 - Registering current configuration as safe fallback point
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.u.log                   ] [17:19:42]: Logging initialized @808ms to org.eclipse.jetty.util.log.Slf4jLog
10/28/2020 12:19:43 PM jvm 1    | I [Jetpad                        ] [17:19:42]: Could not generate subject alt names, the self generated certificate will not have any metadata. More details in debug log
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.s.Server                ] [17:19:43]: jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_265-b01
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.w.StandardDescriptorProcessor] [17:19:43]: NO JSP Support for /main, did not find org.apache.jasper.servlet.JspServlet
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.s.session               ] [17:19:43]: DefaultSessionIdManager workerName=node0
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.s.session               ] [17:19:43]: No SessionScavenger set, using defaults
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.s.session               ] [17:19:43]: node0 Scavenging every 600000ms
10/28/2020 12:19:43 PM jvm 1    | I [o.e.j.u.TypeUtil              ] [17:19:43]: JVM Runtime does not support Modules
10/28/2020 12:19:43 PM jvm 1    | W [o.e.j.w.WebAppContext         ] [17:19:43]: Failed startup of context o.e.j.w.WebAppContext@8ff6ce8{Ignition,/main,file:///usr/local/share/ignition/webserver/webapps/main/,UNAVAILABLE}
10/28/2020 12:19:43 PM jvm 1    | javax.servlet.ServletException: Unable to start up context. Context temp folder "/var/lib/ignition/temp/" does not exist.
10/28/2020 12:19:43 PM jvm 1    | 	at com.inductiveautomation.ignition.gateway.bootstrap.SRFilter.init(SRFilter.java:50)
10/28/2020 12:19:43 PM jvm 1    | 	at org.apache.wicket.protocol.http.WicketFilter.init(WicketFilter.java:314)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
10/28/2020 12:19:43 PM jvm 1    | 	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
10/28/2020 12:19:43 PM jvm 1    | 	at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
10/28/2020 12:19:43 PM jvm 1    | 	at java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:742)
10/28/2020 12:19:43 PM jvm 1    | 	at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.server.Server.start(Server.java:407)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:100)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.server.Server.doStart(Server.java:371)
10/28/2020 12:19:43 PM jvm 1    | 	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
10/28/2020 12:19:43 PM jvm 1    | 	at com.inductiveautomation.catapult.Jetpad.start(Jetpad.java:414)
10/28/2020 12:19:43 PM jvm 1    | 	at com.inductiveautomation.catapult.Catapult.start(Catapult.java:138)
10/28/2020 12:19:43 PM jvm 1    | 	at com.inductiveautomation.catapult.Catapult.main(Catapult.java:63)
10/28/2020 12:19:43 PM jvm 1    | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
10/28/2020 12:19:43 PM jvm 1    | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
10/28/2020 12:19:43 PM jvm 1    | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
10/28/2020 12:19:43 PM jvm 1    | 	at java.lang.reflect.Method.invoke(Method.java:498)

The endpoint looks like this (see attached screenshot).

Error on subsequent launch of Edge edition against preserved volume

A missing jre-tmp folder in the image seems to be causing troubles only with the Edge edition. The following docker run works great on first-launch:

$ docker run -d --name edge-bug-demo -e GATEWAY_ADMIN_PASSWORD=password \
    -e IGNITION_EDITION=edge \
    -v edge-bug-demo-data:/var/lib/ignition/data \
    -p 8088:8088 \
    kcollins/ignition:8.0.15

But then, once everything is up and running, you do:

$ docker restart edge-bug-demo

... you'll see the following in the container logs:

Starting Ignition Gateway...
wrapper  | --> Wrapper Started as Console
wrapper  | Java Service Wrapper Standard Edition 64-bit 3.5.42
wrapper  |   Copyright (C) 1999-2020 Tanuki Software, Ltd. All Rights Reserved.
wrapper  |     http://wrapper.tanukisoftware.com
wrapper  |   Licensed to Inductive Automation for Inductive Automation
wrapper  | 
wrapper  | Launching a JVM...
jvm 1    | WrapperManager: Initializing...
...
jvm 1    | /usr/local/share/ignition/jre-tmp/sqlite-3.23.1-1e3572f1-d3a3-4a87-a85a-21128d146759-libsqlitejdbc.so.lck (No such file or directory)
...

Need to add the jre-tmp folder during image creation so that it is available for use by Ignition Edge on first launch (it doesn't appear to create the folder if it is missing).
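A hypothetical Dockerfile addition for this (the ownership step assumes the ignition user/group used elsewhere in the image):

RUN mkdir -p /usr/local/share/ignition/jre-tmp && \
    chown ignition:ignition /usr/local/share/ignition/jre-tmp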

Install using .run vs .zip

@thirdgen88 Can you help me understand why the 7.9 Dockerfile installs Ignition using the prepackaged .run installer while 8.0 uses the .zip archive?

I'm curious because using the zip archive seems like a simpler approach. I'm considering reworking the 7.9 code to use the zip. But, I'm not that familiar with Ignition so I'm most certainly overlooking important details.

Btw, your repo is really well done, the code is clear and concise and so is the documentation. Thanks for sharing your work! You clearly put a lot of thought into it.

Adding a test solution to the repo

It is becoming clearer that automated testing would be helpful for covering the various configuration permutations that need to function within the capabilities of this image. Testing these manually is becoming burdensome. I'm going to tag this issue with help wanted, because I'll certainly welcome it, but mainly if anyone has any suggestions on tooling, please let me know too. I'll get it done at some point in either case, just to make my life easier. 😀 My current plan would probably be to leverage some Ansible to stage/conduct the testing.

Travis CI needs configuration for multiple versions

I noticed in the most recent build that the image tagging is still set up for just a single version. Need to split out or somehow deal with getting the image tagging in the .travis-ci.yml sorted out so that the 7.7, 7.8, 7.9, etc. images get tagged properly and automatically on Docker Hub.

Named volume against `/data` results in permissions error

In Docker Desktop for macOS, attempting to use a named volume against /data (versus the standard /var/lib/ignition/data) results in:

/usr/local/bin/docker-entrypoint.sh: line 418: /data/.docker-init-complete: Permission denied

... since /data ends up with root.root ownership. Consider implementing use of gosu to enable a step-down from root user instead of running the entire entrypoint as ignition UID/GID.
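A minimal sketch of the gosu step-down pattern (placement within the entrypoint and the chown target are illustrative):

# running as root at this point in the entrypoint
if [ "$(id -u)" = "0" ]; then
    # fix ownership of the mounted volume, then drop privileges and re-exec
    chown ignition:ignition /data
    exec gosu ignition "$0" "$@"
fi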

Request for enabling module development features

It would be nice if there were a convenient enablement for the additional java parameters such as:

  • -Xdebug
  • -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=*:8000
  • -Dia.developer.moduleupload=true
  • -Dignition.allowunsignedmodules=true

Perhaps it could be offered more generically (to allow complete flexibility of those parameter sets), but at the very least, a single environment variable that would cause on-pre-gateway-start modification of /var/lib/ignition/data/ignition.conf to uncomment/add those wrapper.java.additional.n parameters would suffice.
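For reference, the resulting ignition.conf entries would presumably look something like this (the index numbers are illustrative and must not collide with entries already present in the file):

wrapper.java.additional.7=-Xdebug
wrapper.java.additional.8=-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=*:8000
wrapper.java.additional.9=-Dia.developer.moduleupload=true
wrapper.java.additional.10=-Dignition.allowunsignedmodules=true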

Add license to github repo

Since Ignition is used by many corporations, the code here to deploy it using Docker needs a license file. A simple MIT license would allow anyone to use it for almost any reason while protecting the authors. I am more than willing to submit the pull request.

Observing no default tag provider on fresh container launch

I also recently observed a situation where a fresh launch of a container will sometimes come up without a default realtime tag provider. I'm guessing that this has to do with the extra steps in the commissioning process, and that the shutdown of the interim provisioning gateway (in order to then pass off to exec after initialization) is premature, causing an inconsistent state on initial launch.

May need to look into some other handling (current configuration is meant to optimize gateway launch time) in order to make sure things initialize in a deterministic fashion.

Missing font configuration for Ignition 8

The Ignition 8 image does not have fontconfig and a base set of fonts installed. Without the font configuration, scheduled reports will not execute on the gateway.
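A possible fix, sketched as a Dockerfile addition (this assumes the Debian-based base image; the specific font package is an assumption):

RUN apt-get update && \
    apt-get install -y --no-install-recommends fontconfig fonts-dejavu-core && \
    rm -rf /var/lib/apt/lists/*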

Thank you,
Bill

Possible that upgrade might orphan JDBC drivers

I recently had a thought about the JDBC driver setup. Each of the JDBC drivers exist in the user-lib folder, outside of the preserved data volume. If an Ignition upgrade comes with new JDBC drivers, the normal Ignition upgrade behavior leaves JDBC drivers alone. At this point with the current Docker Image, there are two considerations to look into:

  • Existing Ignition container with DB connections and Docker volume persistence, upgraded to a new Ignition version with updated JDBC drivers. Need to make sure that the JDBC driver definitions are not broken with the upgrade (since replacement JDBC files will be in user-lib and internal config.db references might still be pointed to the old ones).
  • Ensuring that when users are linking in third-party JDBC drivers via the /jdbc interface, the updated drivers always take precedent.

Container gives 'Gateway Not Running' for latest image. SQL Exception

To reproduce, launch the latest Ignition container with the command:

>$ docker run -p 8088:8088 --name my-ignition -e GATEWAY_ADMIN_PASSWORD=****** -d kcollins/ignition
A "Gateway not running" message appears at localhost:8088. Container log:

jvm 1 | I [P.InternalDatabase ] [15:28:45]: Looking for existing internal database "config.idb"...
jvm 1 | I [P.InternalDatabase ] [15:28:45]: ... found existing.
jvm 1 | I [g.InternalDatabaseManager ] [15:28:45]: Upgrading schema for module "ignition"
jvm 1 | I [o.e.j.s.h.ContextHandler ] [15:28:45]: Started c.i.c.MainWebAppContext@26e6a384{Ignition,/,file:///usr/local/share/ignition/webserver/webapps/main/,AVAILABLE}
jvm 1 | W [g.InternalDatabaseManager ] [15:28:45]: Error executing: INSERT OR IGNORE INTO SEQUENCES(name, val, srctable) VALUES ('STOREANDFORWARDSYSSETTINGS_SEQ', 0, 'STOREANDFORWARDSYSSETTINGS')
jvm 1 | W [P.InternalDatabase ] [15:28:45]: Unable to connect to internal database "config.idb", shutting down...
jvm 1 | W [P.InternalDatabase ] [15:28:45]: Unable to restore from autobackups because none were found.
jvm 1 | E [P.InternalDatabase ] [15:28:45]: Startup of internal database "config.idb" failed, autobackups disabled.
jvm 1 | org.sqlite.SQLiteException: [SQLITE_ERROR] SQL error or missing database (no such table: SEQUENCES)
jvm 1 | at org.sqlite.core.DB.newSQLException(DB.java:909)
jvm 1 | at org.sqlite.core.DB.newSQLException(DB.java:921)
jvm 1 | at org.sqlite.core.DB.throwex(DB.java:886)
jvm 1 | at org.sqlite.core.NativeDB._exec_utf8(Native Method)
jvm 1 | at org.sqlite.core.NativeDB._exec(NativeDB.java:87)
jvm 1 | at org.sqlite.jdbc3.JDBC3Statement.executeUpdate(JDBC3Statement.java:116)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.AbstractDBInterface.runUpdateQuery(AbstractDBInterface.java:222)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl$SingleConnectionDBInterface.runUpdateQuery(LocalDBManagerImpl.java:748)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl.updatePersistentRecords(LocalDBManagerImpl.java:556)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl$SQLiteSettingsDBManager.onConnected(LocalDBManagerImpl.java:921)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.sqlite.SQLiteDBManager.startupInternal(SQLiteDBManager.java:318)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.sqlite.SQLiteDBManager.startup(SQLiteDBManager.java:202)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl.setup(LocalDBManagerImpl.java:176)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.startupInternal(IgnitionGateway.java:830)
jvm 1 | at com.inductiveautomation.ignition.gateway.redundancy.RedundancyManagerImpl.startup(RedundancyManagerImpl.java:283)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.initRedundancy(IgnitionGateway.java:697)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.lambda$initInternal$0(IgnitionGateway.java:635)
jvm 1 | at com.inductiveautomation.ignition.common.execution.impl.BasicExecutionEngine$ThrowableCatchingRunnable.run(BasicExecutionEngine.java:518)
jvm 1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
jvm 1 | at java.base/java.lang.Thread.run(Unknown Source)
jvm 1 | I [g.InternalDatabaseManager ] [15:28:45]: DBManager shutting down (immediate)...
jvm 1 | I [g.InternalDatabaseManager ] [15:28:45]: DBManager shut down in 0ms
jvm 1 | E [IgnitionGateway ] [15:28:45]: Error during context startup.
jvm 1 | com.inductiveautomation.ignition.gateway.localdb.DBStartupException: Error loading internal database "config.idb"
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.sqlite.SQLiteDBManager.startup(SQLiteDBManager.java:254)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl.setup(LocalDBManagerImpl.java:176)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.startupInternal(IgnitionGateway.java:830)
jvm 1 | at com.inductiveautomation.ignition.gateway.redundancy.RedundancyManagerImpl.startup(RedundancyManagerImpl.java:283)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.initRedundancy(IgnitionGateway.java:697)
jvm 1 | at com.inductiveautomation.ignition.gateway.IgnitionGateway.lambda$initInternal$0(IgnitionGateway.java:635)
jvm 1 | at com.inductiveautomation.ignition.common.execution.impl.BasicExecutionEngine$ThrowableCatchingRunnable.run(BasicExecutionEngine.java:518)
jvm 1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
jvm 1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
jvm 1 | at java.base/java.lang.Thread.run(Unknown Source)
jvm 1 | Caused by: org.sqlite.SQLiteException: [SQLITE_ERROR] SQL error or missing database (no such table: SEQUENCES)
jvm 1 | at org.sqlite.core.DB.newSQLException(DB.java:909)
jvm 1 | at org.sqlite.core.DB.newSQLException(DB.java:921)
jvm 1 | at org.sqlite.core.DB.throwex(DB.java:886)
jvm 1 | at org.sqlite.core.NativeDB._exec_utf8(Native Method)
jvm 1 | at org.sqlite.core.NativeDB._exec(NativeDB.java:87)
jvm 1 | at org.sqlite.jdbc3.JDBC3Statement.executeUpdate(JDBC3Statement.java:116)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.AbstractDBInterface.runUpdateQuery(AbstractDBInterface.java:222)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl$SingleConnectionDBInterface.runUpdateQuery(LocalDBManagerImpl.java:748)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl.updatePersistentRecords(LocalDBManagerImpl.java:556)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.LocalDBManagerImpl$SQLiteSettingsDBManager.onConnected(LocalDBManagerImpl.java:921)
jvm 1 | at com.inductiveautomation.ignition.gateway.localdb.sqlite.SQLiteDBManager.startupInternal(SQLiteDBManager.java:318)
rich@Ideapad330:~$ A

Gateway Restore broken with 7.9.11 update

Where the gateway restore should happen (when a gateway backup is bind-mounted at /restore.gwbk), it instead produces the following error:

Error! No line found

The gateway restore doesn’t complete as expected, and a new default gateway results.

Upgrading a volume-persisted Ignition Container 7.9.13 to 7.9.14

Made a gateway in version 7.9.13 connected to a persistent volume at launch.
Updated the container image to 7.9.14 and redeployed.

Error received:
Detected Ignition Volume from prior version (7.9.13), running Upgrader
Error: Could not find or load main class com.inductiveautomation.ignition.common.upgrader.Upgrader

Issues with creating a new MAKER container with the nightly(from yesterday)

I'm trying to create a container based on the nightly build.

I use my old compose file from 8.0.16, which provides a license key and activation token.

Every time I open the web server I get prompted for username/password/license, which is already provided by the compose file.

The error logs from 'docker logs [container]' are attached as screenshots: one from the initial launch, and one after rebooting the container.

docker-compose file:

version: '2.2'
services:
  gateway:
    image: kcollins/ignition:nightly
    ports:
      - "8088:8088"
    stop_grace_period: 30s
    restart: always
    volumes:
      - /home/pi/dockerVolumes/IgnitionData/ignition.conf:/var/lib/ignition/data/ignition.conf
      - my-ignition-data:/var/lib/ignition/data
      - /home/pi/dockerVolumes/IgnitionModules:/modules
      - /home/pi/dockerVolumes/IgnitionActivationToken/activation-token:/activation-token
      - /home/pi/dockerVolumes/IgnitionBackup/IgnitionHome_Ignition-backup.gwbk:/restore.gwbk
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
    environment:
      GATEWAY_MODULES_ENABLED: perspective,opc-ua,tag-historian,alarm-notification,sfc
      GATEWAY_SYSTEM_NAME: "XXXXXX"
      GATEWAY_ADMIN_PASSWORD: "password"
      IGNITION_EDITION: "maker"
      GATEWAY_INIT_MEMORY: "1024"
      GATEWAY_MAX_MEMORY: "2048"
      TZ: "Europe/Oslo"
      IGNITION_LICENSE_KEY: "XXXX-XXXX"
      IGNITION_ACTIVATION_TOKEN_FILE: "/activation-token"
      IGNITION_COMMISSIONING_DELAY: "60"
      IGNITION_STARTUP_DELAY: "300"

volumes:
  my-ignition-data:

Mobile Launch

Hi, I'm trying to use this image with Ignition version 7.9.10.
Everything is working great, except the mobile launch.

here is the error log:

java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readShort(DataInputStream.java:312)
at com.inductiveautomation.mobile.gateway.MobileVMManager$ClientVMImpl.getDirtyRectangles(MobileVMManager.java:511)
at com.inductiveautomation.mobile.gateway.servlets.MobileSession.getDirtyRectangles(MobileSession.java:336)
at com.inductiveautomation.mobile.gateway.servlets.MobileDataServlet$RectsBase64.handle(MobileDataServlet.java:358)
at com.inductiveautomation.mobile.gateway.servlets.MobileDataServlet.doRequest(MobileDataServlet.java:203)
at com.inductiveautomation.mobile.gateway.servlets.MobileDataServlet.doGet(MobileDataServlet.java:144)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.inductiveautomation.ignition.gateway.bootstrap.MapServlet.service(MapServlet.java:85)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:837)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:748)

I just want to check: has anyone launched mobile successfully with this image?

Ability to retain commissioning phase in image

The image as it stands has provisions for automating the commissioning phase of gateway startup in order to have the gateway come all the way up in an automated fashion. While this is useful, some use cases have emerged where it might be nice to retain the commissioning aspect of the gateway on first-launch. Recommend making the GATEWAY_ADMIN_PASSWORD and GATEWAY_RANDOM_ADMIN_PASSWORD environment variables optional and simply allowing commissioning to take place interactively.

Will need to consider how to still retain the rest of the functionality within the entrypoint (module linkage, jdbc linkage, etc) while we’re “stalled” in this phase.

Container Healthcheck states that the gateway is running when it is still starting

The 8.0.12 containers always show running from the moment that a response from localhost:8088/StatusPing is received, so I am not positive it's actually checking for the RUNNING tag.

I think that it is because the healthcheck is listed as
HEALTHCHECK --interval=10s --start-period=60s --timeout=3s \ CMD curl -f http://localhost:8088/StatusPing 2>&1 | grep RUNNING

and not

HEALTHCHECK --interval=10s --start-period=60s --timeout=3s \ CMD curl -f http://localhost:8088/StatusPing 2>&1 | grep RUNNING > /dev/null

JSON escaping values sent during auto-commissioning

During the auto-commissioning procedure in the docker-entrypoint.sh for Ignition 8.0, some JSON values are constructed and then sent to Ignition via curl. Some variable values are inserted literally into the JSON without first escaping them according to the rules for JSON strings. Backslashes, double quotes, and control characters (U+0000 through U+001F) need to be escaped with a backslash when they're in a JSON string. Without proper escaping, some variable values can cause the JSON to be invalid and unable to be parsed.

For example, when I set a GATEWAY_ADMIN_USERNAME to a value including a double quote, I got an error like this:

Provisioning will be logged here: /usr/local/share/ignition/logs/provisioning.log
Waiting for commissioning servlet to become active...
Performing commissioning actions...
  IGNITION_EDITION: full
  EULA_STATUS: accepted
ERROR: Unexpected Response (400) during Commissioning phase: Configuring Authentication
HTTP/1.1 400 Bad Request
Referrer-Policy: strict-origin-when-cross-origin
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 410

<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 400 Malformed body sent</title>
</head>
<body><h2>HTTP ERROR 400 Malformed body sent</h2>
<table>
<tr><th>URI:</th><td>/post-step</td></tr>
<tr><th>STATUS:</th><td>400</td></tr>
<tr><th>MESSAGE:</th><td>Malformed body sent</td></tr>
<tr><th>SERVLET:</th><td>CommissioningServlet</td></tr>
</table>

</body>
</html>

People generally won't have usernames with double quotes, backslashes, tabs, or newlines in them, but some might. Or they might do it by accident. Ideally they'd see an error about the invalid value they specified, rather than a generic error caused by failure to parse JSON.

There are very limited ways for this to cause real trouble (right now). I wanted to make this JSON-escaping issue known in case more commissioning steps are added in the future and there are more opportunities for trouble.

If you're looking for a good way to construct your JSON in a bash script that will allow you to fix this issue, I'd recommend the excellent jq command-line tool. For this container you can use apt to get it; the jq command and its dependencies weigh in at about 1 MB. You can provide your variable values as arguments (search for --arg in the jq manual); a quick sketch follows below.
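For example, a minimal sketch of how the payload could be assembled with jq; the field names and shape below are placeholders standing in for whatever the entrypoint actually sends, not the real CommissioningServlet schema:

    # jq handles escaping of double quotes, backslashes, and control characters for us
    auth_body=$(jq -cn \
      --arg username "${GATEWAY_ADMIN_USERNAME}" \
      --arg password "${GATEWAY_ADMIN_PASSWORD}" \
      '{username: $username, password: $password}')
    # the resulting string is valid JSON and can be handed to curl via -d "${auth_body}"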

As an aside, currently you can have a username with a double quote, backslash, and other odd characters in it right now because of a bug in the commissioning process. I reported it in the Ignition forums: https://forum.inductiveautomation.com/t/bug-16714-no-username-validation-in-commissioning/36164

Clean up JRE extraction and leverage built-in checkRuntimes()

It looks like the lib/runtime/jre-nix/release file in the latest version of the Azul JRE doesn't contain the precise java version.

Expected: JAVA_VERSION="11.0.9.1"
Actual: JAVA_VERSION="11.0.9"

Since the ./ignition.sh checkRuntimes already takes architecture into account (from the respective .zip download), might as well just use this to facilitate pre-extraction of the JRE.

Mysql 8 connector for Ignition 8.n

The MySQL connector is, by default, missing from the Docker stack. It isn't a large issue, but if you could include the .jar in the spin-up, that would be fantastic.

Scheduled Backup Path

Hi,

First, thanks for this! This is absolutely wonderful.

Is there any way to automatically configure a bind mount for the scheduled backup path?
My particular use-case is with the Ignition-MySQL example. I tried modifying the docker-compose.yml file for the gateway section:

e.g.

   volumes:
      - ./gateway_backup.gwbk:/restore.gwbk
      - gateway_data:/var/lib/ignition
      - ./backups:/home/ignition/backups # Backup line

but I got access errors, because the gateway container did not have write access to that path. I was able to get it going by executing a command as root:

docker exec -u 0 -it 6ac2124cdbb2  bash -c "mkdir /home/ignition/backups; chown ignition /home/ignition/backups"

I was wondering if there was a better way to do this? Happy to submit a PR for the Dockerfile (although I have very little experience).
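One host-side alternative (a sketch only; the 999:999 owner below is an assumption about the in-container ignition user, which you can verify with docker exec <container> id ignition) is to pre-create the directory with matching ownership before starting the stack:

    mkdir -p ./backups
    sudo chown 999:999 ./backups   # assumed UID:GID of the container's ignition user
    docker-compose up -d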

GATEWAY_DEBUG_ENABLED doesn't enable debugger listening on all interfaces

The GATEWAY_DEBUG_ENABLED env variable applies a couple of JVM args to enable remote debugging. The syntax for the address portion needs to be specified as address=*:8000 instead of address=8000; I believe the change for this occurred with the move to Java 11. Without this adjustment, you'll get a java.io.IOException "handshake failed - connection prematurely closed" error.
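For reference, the full debug argument with the Java 9+ wildcard address looks something like this (a sketch; port 8000 mirrors the current behavior, and the transport/server/suspend options shown are the typical defaults rather than confirmed entrypoint values):

    -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000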

A workaround for the 8.1.4 image would be to just specify those JVM args in the run command for the container instead of relying on the GATEWAY_DEBUG_ENABLED=1 environment variable.

Incorrect hash triggered by some characters in GATEWAY_ADMIN_PASSWORD

Consider the line in the docker-entrypoint.sh script for Ignition 8.0:

local auth_pwhash=$(echo -en ${GATEWAY_ADMIN_PASSWORD}${auth_salt} | sha256sum - | cut -c -64)

There are some values of GATEWAY_ADMIN_PASSWORD that will cause unexpected behavior, resulting in an invalid password hash. I suggest changing that line to:

local auth_pwhash=$(printf %s "${GATEWAY_ADMIN_PASSWORD}${auth_salt}" | sha256sum - | cut -c -64) 

Explanation

The -e flag on the echo command means character sequences like \n and \t will be output as newline and tab, respectively. So passwords like a\tb won't be hashed correctly because the echo -en command won't output them literally.

Also, since ${GATEWAY_ADMIN_PASSWORD}${auth_salt} is not in double quotes, the shell will perform word-splitting and filename expansion. This means passwords with multiple spaces in a row will get squished into a single space: a  b (two spaces) becomes a b (one space). Passwords that contain patterns matching local files/directories will be expanded: /* becomes /bin /boot /dev /etc /home /lib /lib64 /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var. Quoting the variables prevents this from happening. (See https://unix.stackexchange.com/a/171347 for an excellent answer about quoting shell variables.)

Even using echo -n with quoted variables can output the wrong thing since echo can misinterpret arguments as flags. For example, echo -n "$VALUE" will output the empty string if $VALUE looks like a flag: -e, -n, -En, etc. I think using echo -n with quoted values will work in this case because ${auth_salt} will never look like it's part of a valid echo flag, but switching to using printf can be a good way to avoid this problem in general.
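To make the difference concrete, here is a tiny demonstration with a hypothetical password value (od -c just dumps the resulting bytes):

    GATEWAY_ADMIN_PASSWORD='a\tb'
    echo -en "${GATEWAY_ADMIN_PASSWORD}" | od -c    # -e turns \t into an actual tab character
    printf %s "${GATEWAY_ADMIN_PASSWORD}" | od -c   # outputs the four literal characters a \ t b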

Bind Mount to /var/lib/ignition not functioning properly.

The README.md suggests that preservation of the Ignition Gateway "instance data" can be done with a named volume against /var/lib/ignition/data. In the current Dockerfile, there are several files linked from there back to /etc/ignition, namely:

  • ignition.conf
  • gateway.xml
  • log4j.properties

Should probably have these directly placed in /var/lib/ignition/data by the install process instead of linked as a convenience measure for volume-mapping.

JVM Startup Failed - Edge Running on Wago PFC200 (arm32)

I am trying to get Ignition Edge running on a Wago PFC200 using docker. I initially had the issue from this thread: https://forum.inductiveautomation.com/t/docker-container-does-not-start/43915

That has now been partially remedied (no more nanosleep errors, and the container makes it past starting); however, I am still having an issue with the JVM wrapper not responding. The container now stays running but with an (unhealthy) status, and the gateway still doesn't respond.

run command:
docker run -p 8088:8088 --privileged --name my-ignition-edge -e GATEWAY_ADMIN_PASSWORD=password -e IGNITION_EDITION=edge -d kcollins/ignition:latest

logs:
init | Starting Ignition Gateway...
init | Initiating commissioning helper functions...
wrapper | 2021/04/14 00:42:04 | --> Wrapper Started as Console
wrapper | 2021/04/14 00:42:04 | Java Service Wrapper Standard Edition 32-bit 3.5.42
wrapper | 2021/04/14 00:42:04 | Copyright (C) 1999-2020 Tanuki Software, Ltd. All Rights Reserved.
wrapper | 2021/04/14 00:42:04 | http://wrapper.tanukisoftware.com
wrapper | 2021/04/14 00:42:04 | Licensed to Inductive Automation for Inductive Automation
wrapper | 2021/04/14 00:42:04 |
wrapper | 2021/04/14 00:42:09 | Launching a JVM...
jvm 1 | 2021/04/14 00:42:19 | WrapperManager: Initializing...
wrapper | 2021/04/14 00:42:39 | Startup failed: Timed out waiting for a signal from the JVM.
wrapper | 2021/04/14 00:42:40 | JVM did not exit on request, termination requested.
wrapper | 2021/04/14 00:42:40 | JVM received a signal SIGKILL (9).
wrapper | 2021/04/14 00:42:40 | JVM process is gone.
wrapper | 2021/04/14 00:42:40 | JVM exited after being requested to terminate.
wrapper | 2021/04/14 00:42:45 | Reloading Wrapper configuration...
wrapper | 2021/04/14 00:42:50 | JVM process is gone.
wrapper | 2021/04/14 00:42:50 | Launching a JVM...
jvm 2 | 2021/04/14 00:42:59 | WrapperManager: Initializing...
wrapper | 2021/04/14 00:43:20 | Startup failed: Timed out waiting for a signal from the JVM.
wrapper | 2021/04/14 00:43:20 | JVM did not exit on request, termination requested.
wrapper | 2021/04/14 00:43:20 | JVM received a signal SIGKILL (9).
wrapper | 2021/04/14 00:43:20 | JVM process is gone.
wrapper | 2021/04/14 00:43:20 | JVM exited after being requested to terminate.
wrapper | 2021/04/14 00:43:26 | Reloading Wrapper configuration...
wrapper | 2021/04/14 00:43:29 | JVM process is gone.
wrapper | 2021/04/14 00:43:29 | Launching a JVM...
jvm 3 | 2021/04/14 00:43:38 | WrapperManager: Initializing...
wrapper | 2021/04/14 00:43:59 | Startup failed: Timed out waiting for a signal from the JVM.
wrapper | 2021/04/14 00:43:59 | JVM did not exit on request, termination requested.
wrapper | 2021/04/14 00:44:00 | JVM received a signal SIGKILL (9).
wrapper | 2021/04/14 00:44:00 | JVM process is gone.
wrapper | 2021/04/14 00:44:00 | JVM exited after being requested to terminate.
wrapper | 2021/04/14 00:44:05 | Reloading Wrapper configuration...
wrapper | 2021/04/14 00:44:08 | JVM process is gone.
wrapper | 2021/04/14 00:44:08 | Launching a JVM...

Latest nightly fails with 400 Bad Request on `connections` commissioning

It seems that the latest nightly build of the Docker image is no longer sending a valid payload to the CommissioningServlet for the connections phase. Suggest modifying GATEWAY_SKIP_COMMISSIONING to completely omit any commissioning steps, to allow for a workaround in this situation (and also to allow for easy debugging of it via a Docker container! 😁).

503 Runtime does not exist error from 8.0.0

As it turns out, the Designer Launcher will retrieve the Java runtime from the Gateway if it doesn't already have a downloaded version. This makes sense in retrospect; some of the space savings from purging the runtime files out of the gateway image should not have been made.

Commissioning Failure when persistent volume mounted

The following docker-compose works great for me:

  ignition:
    image:           kcollins/ignition:latest
    container_name:  ignition
    restart:         unless-stopped
    ports:
      - "2088:8088"
    # volumes:
    #   - ./ignitiondata:/var/lib/ignition/data
    environment:
      - GATEWAY_ADMIN_PASSWORD=password
      - IGNITION_EDITION=full
      # - IGNITION_STARTUP_DELAY=120
      # - IGNITION_COMMISSIONING_DELAY=60

When I uncomment the volumes:

  ignition:
    image:           kcollins/ignition:latest
    container_name:  ignition
    restart:         unless-stopped
    ports:
      - "2088:8088"
    volumes:
      - ./ignitiondata:/var/lib/ignition/data
    environment:
      - GATEWAY_ADMIN_PASSWORD=password
      - IGNITION_EDITION=full
      # - IGNITION_STARTUP_DELAY=120
      # - IGNITION_COMMISSIONING_DELAY=60

I get the following logs on repeat:

Provisioning will be logged here: /usr/local/share/ignition/logs/provisioning.log

Waiting for commissioning servlet to become active...

Failed to detect RUNNING status during Commissioning Phase after 30 delay.

Supplied password not applied to restored gateway backups

When you bind-mount a restore.gwbk to /restore.gwbk, the container handles restoring that gateway on first launch. However, the gateway credentials supplied by GATEWAY_ADMIN_PASSWORD do not override what is stored in the gateway backup, resulting in the need to run a reset by exec'ing into the container and using the gwcmd.sh script.
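For anyone hitting this today, the manual reset looks roughly like the following (a sketch from memory; the --passwd flag and the install path are assumptions, verify with ./gwcmd.sh --help inside the container):

    # reset the gateway's main credentials so they are requested again on the next start
    docker exec -it -w /usr/local/share/ignition <container> ./gwcmd.sh --passwd
    docker restart <container>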

Expected Behavior

When the container launches and restores a gateway backup as directed, the credentials supplied to the container should take precedence. No additional user interaction should be required to log into the restored gateway with the specified credentials.

Upgrade ignition version

Do you have any experience with updating Ignition versions on Docker?

To upgrade the Ignition version on a regular system, you just run the installer again. I imagine that, in addition to updating the binaries, this installer is also capable of making changes to the internal DB.

But I wonder what happens when you launch a new Ignition version against an older database. Does anyone have any experience with this?
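In container terms, the usual pattern is to keep the gateway data in a volume, bump the pinned image tag, and recreate the container; Ignition then generally upgrades its internal database in place on the first start of the newer version, though taking a .gwbk backup beforehand is strongly advised. A sketch (tags and service name illustrative):

    # docker-compose.yml (excerpt): change the pinned tag, keep the same data volume
    #   image: kcollins/ignition:8.1.3  ->  image: kcollins/ignition:8.1.4
    docker-compose pull ignition
    docker-compose up -d ignition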

How to print from ignition container

Dear Kevin,
I am grateful for, and impressed by, your commitment and passion.
I would like to ask for some guidance based on your experience.

This is not an issue.

I need to print from your container (Ignition 8.1.3) to a network printer and I have no idea how to do it. I have read that I could use preconfigured containers with CUPS installed, but I have no idea how to reach that container from yours. I have also read about installing cups-client in the container, but I would appreciate your kindness and some detailed instructions.

I thank you in advance.
