
fadi's People

Contributors

alexnuttinck, allcontributors[bot], ayadiamen, banzo, fabiansteels, fossabot, fzalila, zakaria2905


fadi's Issues

Add Documentation on TSimulus

Is your feature request related to a problem? Please describe.
Describe the use of TSimulus in FADI.

Describe the solution you'd like
A new section in the userguide that explains how to enable TSimulus in FADI and how to use it.
See cetic/helm-fadi#13

Stack fails to start

On OS X + VirtualBox and Manjaro + KVM, the stack deployment has been failing since this morning.

➜  helm git:(develop) ✗ minikube start --vm-driver=kvm2 --cpus 6 --memory 16000 --disk-size=50GB
➜  helm git:(develop) ✗ ./deploy.sh
[...]
➜  helm git:(develop) ✗ kubectl get pods -n fadi --watch
NAME                                  READY   STATUS              RESTARTS   AGE
fadi-grafana-67d79bd495-m4m2t         0/1     Init:0/1            0          104s
fadi-minio-85dfbb4556-cdhx5           0/1     ContainerCreating   0          104s
fadi-nifi-0                           0/4     Init:0/1            0          104s
fadi-openldap-5bbd47899d-c99bq        0/1     Init:0/1            0          104s
fadi-pgadmin-8496f475fc-fhfks         0/1     ContainerCreating   0          104s
fadi-phpldapadmin-77bbb4f969-67j8m    1/1     Running             0          104s
fadi-postgresql-0                     0/1     Init:0/1            0          104s
fadi-spark-master-b4f7f688f-mzbc7     0/1     ContainerCreating   0          104s
fadi-spark-worker-675f86b497-4h6hl    0/1     ContainerCreating   0          104s
fadi-spark-worker-675f86b497-6jnx2    0/1     ContainerCreating   0          104s
fadi-spark-worker-675f86b497-h8bsd    0/1     ContainerCreating   0          104s
fadi-spark-zeppelin-8fd4986bc-kc8mm   0/1     ContainerCreating   0          104s
fadi-superset-7d4dff84df-mjvxs        0/1     ContainerCreating   0          103s
fadi-zookeeper-0                      0/1     ContainerCreating   0          104s
hub-64c6c4c654-nmzmg                  0/1     ContainerCreating   0          104s
pg-ldap-sync-1568110260-vb2vf         0/1     ContainerCreating   0          66s
pg-ldap-sync-1568110320-jjfg8         0/1     ContainerCreating   0          5s
proxy-76ccfdfdf7-wmmv6                1/1     Running             0          104s
fadi-spark-master-b4f7f688f-mzbc7     1/1     Running             0          108s
fadi-openldap-5bbd47899d-c99bq        0/1     PodInitializing     0          112s
fadi-spark-worker-675f86b497-4h6hl    1/1     Running             0          113s
fadi-spark-worker-675f86b497-6jnx2    1/1     Running             0          114s
fadi-spark-worker-675f86b497-h8bsd    1/1     Running             0          115s

➜  helm git:(develop) ✗ minikube logs
[...]
==> storage-provisioner <==
E0910 09:24:27.976273       1 controller.go:682] Error watching for provisioning success, can't provision for claim "fadi/fadi-grafana": events is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot list resource "events" in API group "" in the namespace "fadi"
[the same "events is forbidden" error is repeated for the other claims: pgadmin, superset, zookeeper-0, postgresql-0, ...]

Version info:

➜  helm git:(develop) ✗ helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Error: could not find tiller
➜  helm git:(develop) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"archive", BuildDate:"2019-08-29T18:43:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
➜  helm git:(develop) ✗ minikube version
minikube version: v1.3.1
commit: ca60a424ce69a4d79f502650199ca2b52f29e631
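
The "events is forbidden" messages in the storage-provisioner log point to missing RBAC permissions for the kube-system:storage-provisioner service account, which prevents the PVCs from being provisioned. A possible workaround, sketched here as an assumption rather than a confirmed fix, is to grant that service account broader permissions and let provisioning retry:

# grant the minikube storage-provisioner service account cluster-admin (coarse, but enough to rule out RBAC)
kubectl create clusterrolebinding storage-provisioner-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:storage-provisioner
# restart the provisioner and check that the claims get bound
kubectl -n kube-system delete pod storage-provisioner
kubectl get pvc -n fadi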

Add message broker component

Is your feature request related to a problem? Please describe.

The toolbox needs tools to facilitate streaming ingestion and processing.
This is currently handled by Apache NiFi; a more complete message queuing solution is needed.

Describe the solution you'd like

Add the Kafka message broker to the stack, document setup and usage instructions.

Describe alternatives you've considered

Apache Pulsar was considered but is currently on hold - WIP here:
https://github.com/cetic/helm-fadi/tree/feature/pulsar

Additional context

  • Kafka can be leveraged to add MQTT support down the line (#54)

Automate databases definition in pgadmin

Is your feature request related to a problem? Please describe.
We need to manually configure pgadmin.

Describe the solution you'd like
Automation of the databases configuration.

I added a way to enable the servers configuration in helm-pgadmin via the values.yaml file. If enabled, server definitions found in servers.config will be loaded at launch time.

The idea would be to set up this configuration in helm-fadi and update the documentation of this repo accordingly.
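
A hedged sketch of what such an override could look like in the FADI values.yaml; the serverDefinitions keys and the servers.config format are assumptions about helm-pgadmin, to be checked against that chart:

pgadmin:
  enabled: true
  serverDefinitions:
    # hypothetical flag: load the server definitions below into pgAdmin at launch time
    enabled: true
    servers: |-
      "1": {
        "Name": "fadi-postgresql",
        "Group": "Servers",
        "Host": "fadi-postgresql",
        "Port": 5432,
        "Username": "admin",
        "SSLMode": "prefer",
        "MaintenanceDB": "postgres"
      }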

Handle sensors disconnect

Is your feature request related to a problem? Please describe.

Sometimes, sensors get disconnected or just stop working.

Describe the solution you'd like

FADI should provide a way to detect such situations and react to them accordingly.

Describe alternatives you've considered

  • monitoring: twin devices pattern (heartbeat or cron check based on expected data reception frequency)
  • alerting (email, sms, MQ, ...)
  • remediation: degraded operation mode
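
For the cron-check alternative above, a minimal staleness check could look like the sketch below (assumptions: measurements land in a PostgreSQL table named measures with a timestamp column, and the expected reception interval is 5 minutes; the table name and interval are placeholders):

# staleness check to run from a cron job / Kubernetes CronJob
RECENT=$(psql -h fadi-postgresql -U admin -d postgres -t -A -c \
  "SELECT count(*) FROM measures WHERE timestamp > now() - interval '5 minutes';")
if [ "$RECENT" -eq 0 ]; then
  echo "No data received in the last 5 minutes, a sensor may be disconnected"
  # hook the alerting step here (email, SMS, MQ, ...)
fi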

No service accessible after a FADI installation on a generic Kubernetes cluster

Hello,

First, we tried to install the FADI framework on a generic Kubernetes cluster from scratch by following the installation procedure:
[https://github.com/cetic/fadi/blob/master/INSTALL.md]
We faced an issue due to the default storageClass not being automatically configured during the installation.
Amen, from CETIC, took over remotely and fixed the issue, thanks to him:
[https://github.com/cetic/helm-fadi/issues/15]

Now FADI seems to be installed, i.e. all the pods are running, but we are not able to access any application of the FADI framework via HTTP requests (pgAdmin, NiFi, PostgreSQL, Grafana, ...).

We are just trying to follow step 2 of the sample use case in the user guide:
[https://github.com/cetic/fadi/blob/master/USERGUIDE.md]

The Minikube command "minikube service -n fadi fadi-pgadmin" applies to a FADI installed on a Minikube VM, but we are on a generic Kubernetes cluster here, and the procedure does not explain how to access a FADI framework installed on a generic Kubernetes cluster like ours.

We have checked the settings with two commands:
kubectl cluster-info
kubectl get services

Based on these settings, we tried for example to access the pgAdmin entrypoint:
https://192.168.1.30:30374 or http://192.168.1.30:30374
We have also tried using the kube-dns REST API, but without success.
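
On a generic cluster, where minikube service is not available, two common ways to reach a service are sketched below (the service name is taken from the pgAdmin example above, and the target port 80 is an assumption; check both with kubectl get svc -n fadi):

# option 1: port-forward the service to the workstation (works with any service type)
kubectl port-forward -n fadi svc/fadi-pgadmin 8080:80
# then browse http://localhost:8080

# option 2: if the service is exposed as a NodePort, combine a node IP with the node port
kubectl get svc -n fadi fadi-pgadmin -o jsonpath='{.spec.ports[0].nodePort}'
kubectl get nodes -o wide    # use a node INTERNAL-IP (or EXTERNAL-IP) with the port above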

Could you help us with this? We have not yet received any answer to our last email of 24 October.

Best regards

[screenshot: no services]

Stéphane Rousseau
Safran Aero Boosters

Grafana deployment failed

Describe the bug

The Grafana service is not deploying anymore when running deploy.sh:

Error: release fadi-grafana failed: Deployment in version "v1beta2" cannot be handled as a Deployment: v1beta2.Deployment.Spec: v1beta2.DeploymentSpec.Strategy: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|trategy":"RollingUpd|..., bigger context ...|:"grafana","release":"fadi-grafana"}},"strategy":"RollingUpdate","template":{"metadata":{"annotation|...

Which chart:

Grafana

What happened:

Tried to deploy, Grafana deployment failed

How to reproduce it (as minimally and precisely as possible):

deploy.sh
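
The error message suggests that .spec.strategy is rendered as the plain string "RollingUpdate" while the apps/v1beta2 Deployment API expects an object. A hedged sketch of the values shape that satisfies the API, assuming the Grafana chart exposes a deploymentStrategy value (the exact key name should be checked against the chart version in use):

grafana:
  deploymentStrategy:
    type: RollingUpdate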

Jupyter service name

To launch the Jupyter service, we run minikube service -n fadi proxy-public.

Is proxy-public the appropriate name for this service?

Add support for Seldon

In order to improve the usability of FADI when deployed for Machine Learning / Data Science projects, a support for Seldon should be added.

Seldon is an open source project for packaging machine learning models and scoring them. It hides such models behind a REST API and builds Docker containers for deploying them.

From the user's point of view, Seldon is essentially a tool for converting source code into containers, making it "easier and faster to deploy your machine learning models and experiments at scale on Kubernetes". Related advanced features are also provided; in particular, performance metrics are generated by Seldon.

Is your feature request related to a problem? Please describe.
No, it's a suggestion for an extension improving the functional coverage of FADI instances.

Describe the solution you'd like
Helm charts should be added to FADI, in order to be able to deploy and exploit an instance of Seldon.

Describe alternatives you've considered
None.

Additional context
N/A

Service enabling override is not taken into account

Describe the bug

Modifications to the values.yml file in this repo are not taken into account

Environment:

  • latest Arch and OSX
  • latest Minikube
  • latest kvm/Virtualbox

What happened:

Tried to disable unused services (enabled: false) such as Spark, but the whole Spark cluster was still started by deploy.sh.

What you expected to happen:

disabled services in this repo's config file should not be started, overriding the default helm-fadi config

How to reproduce it (as minimally and precisely as possible):

disable e.g. Spark in this repo's values.yml, start a fresh minikube and deploy
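
One way to check whether the override is actually reaching Helm is to bypass deploy.sh once and pass the file and flag explicitly; this is a sketch that assumes the release is named fadi and that the chart exposes a spark.enabled condition:

helm upgrade --install fadi cetic/fadi -f values.yaml --set spark.enabled=false --namespace fadi
# show the values Helm actually used for the release
helm get values fadi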

Provide details on security aspects

We need to provide some information on the security aspects of the framework: what security mechanisms are in place by default, guidelines on how to secure a FADI installation, ...

This should take the form of a FAQ item, an update to the presentation slides and maybe a dedicated page.

pg-ldap-sync pods are not stable

The pg-ldap-sync pods have trouble reaching a stable state when we deploy FADI (even after changing the cron job).

NAME                                  READY   STATUS      RESTARTS   AGE
fadi-grafana-ddb678f48-gfh6w          1/1     Running     0          63m
fadi-nifi-0                           4/4     Running     0          63m
fadi-openldap-5bbd47899d-rltpk        1/1     Running     0          63m
fadi-pgadmin-8496f475fc-5ssck         1/1     Running     0          63m
fadi-postgresql-0                     1/1     Running     0          63m
fadi-spark-master-b4f7f688f-pw277     1/1     Running     0          63m
fadi-spark-worker-675f86b497-chht2    1/1     Running     0          63m
fadi-spark-worker-675f86b497-m77gr    1/1     Running     0          63m
fadi-spark-worker-675f86b497-r9ls5    1/1     Running     0          63m
fadi-spark-zeppelin-8fd4986bc-t4ns9   1/1     Running     0          63m
fadi-superset-7d4dff84df-nn6t6        1/1     Running     0          63m
fadi-zookeeper-0                      1/1     Running     0          63m
fadi-zookeeper-1                      1/1     Running     0          49m
fadi-zookeeper-2                      1/1     Running     0          49m
hub-5fd9f966c-sfc7c                   1/1     Running     0          63m
pg-ldap-sync-1568878800-6ncfb         0/1     Completed   0          47m
pg-ldap-sync-1568878800-crcg6         0/1     Error       0          59m
pg-ldap-sync-1568880000-tdxpd         0/1     Completed   0          39m
pg-ldap-sync-1568881200-fgcjm         0/1     Completed   0          19m
proxy-76ccfdfdf7-5jfj6                1/1     Running     0          63m

Is the cron job the appropriate solution?

To deal with the multiple pg-ldap-sync pods created simultaneously, an alternative solution is to change the cron job for example to 20 minutes. Good solution!

However, imagine that the last cron run was triggered at a given time t.

Just after, at time t+ε, a new user is created in the OpenLDAP service.
So, they will need to wait around 20 minutes to be able to access the PostgreSQL service.

Does an alternative to the cron job exist to deal with this kind of situation (a listener, for example)?

Best,
FZ.
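
Short of a real listener, one interim option (an assumption, not part of the original report) is to keep the cron job but trigger a manual run right after creating a user, so the new user does not have to wait for the next scheduled sync; this assumes the CronJob is named pg-ldap-sync:

kubectl -n fadi create job pg-ldap-sync-manual --from=cronjob/pg-ldap-sync
kubectl -n fadi logs -f job/pg-ldap-sync-manual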

Postgresql databases not persistent

Describe the bug
When stopping and restarting Minikube, all the databases and tables created with pgAdmin disappear.

Environment:

  • OS Microsoft Windows 10 Pro 10.0.17763 Build 17763
  • VM driver Virtualbox
  • Minikube version v1.5.2
  • Kubernetes version v1.15.4

What happened:
When stopping then restarting Minikube, all the databases and tables created with pgAdmin disappear.
Only the server connection created in pgAdmin is still present.

What you expected to happen:
When restarting Minikube, the tables and databases created are still present

How to reproduce it (as minimally and precisely as possible):

  1. Start Minikube
  2. Deploy fadi
  3. Add a server connected to fadi-postgresql
  4. Add databases and tables with pgAdmin
  5. Stop Minikube
  6. Start Minikube

Output of minikube logs (if applicable):
MinikubeLogs_191120.TXT
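
A first diagnostic, sketched here as a suggestion, is to check whether the PostgreSQL data actually lands on a PersistentVolume and where that volume lives; the PVC name data-fadi-postgresql-0 is an assumption based on the usual <release>-postgresql StatefulSet naming:

kubectl get pvc -n fadi
kubectl get pv
# inspect the backing volume; a hostPath volume under a path that Minikube wipes on restart would explain the data loss
kubectl describe pv $(kubectl get pvc -n fadi data-fadi-postgresql-0 -o jsonpath='{.spec.volumeName}')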

Parametrise the installation

Is your feature request related to a problem? Please describe.

When instantiating the stack, we need a way to enable/disable services (e.g. use Superset and not Grafana, or both, or none, ...), and maybe also provide custom configuration at the stack level (namespace, ...).

Describe the solution you'd like

A way to parametrise the stack: e.g. decide which services are active, add some configs, ...

Describe alternatives you've considered

Some ways to achieve that:

  1. iterate on the deploy.sh to add shell parameters, and logic inside -> quick and dirty
  2. translate deploy.sh into an Ansible script to be executed on the host -> forces users to install Ansible on top of helm and kubectl, adds a tech to the stack
  3. use an Ansible pod to take care of the helm and kubectl stuff (maybe this would remove the dependency on the helm client and kubectl on the host system?) -> same as 2. above but removes the requirement to install Ansible
  4. leverage Helm chart dependencies, maybe having one central "fadi" chart that references all the others -> this will not take care of the kubectl stuff, but may be sufficient to provide modularity/parameters (see the sketch after this list)
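
A sketch of option 4, assuming Helm v2 conventions: a central fadi chart lists the sub-charts in requirements.yaml and guards each one with a condition flag, so values.yaml can enable/disable services (chart names, versions and repository URLs below are placeholders):

# requirements.yaml of a central "fadi" chart
dependencies:
  - name: grafana
    version: "3.x.x"
    repository: "https://kubernetes-charts.storage.googleapis.com"
    condition: grafana.enabled
  - name: spark
    version: "1.x.x"
    repository: "https://kubernetes-charts.storage.googleapis.com"
    condition: spark.enabled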

Additional context

This would offer a nice way to provide examples (e.g. "minimal stack", or TensorFlow vs Spark stacks, ...).

JupyterHub seems not to be deployable on a Kubernetes > 1.15

I tried to deploy FADI on a cluster running Kubernetes 1.16. I am stuck on an issue...

Issue

I can see the JupyterHub web page and I can connect to it with the default OpenLDAP user/password. But when I try to choose a notebook, I come across this error:

[screenshot: error]

Diagnostic

When I go to the logs of the hub pod, I can see:

[E 2020-02-05 10:47:55.598 JupyterHub gen:593] Exception in Future <Task finished coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py:629> exception=TypeError("'<' not supported between instances of 'NoneType' and 'datetime.datetime'",)> after timeout
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 589, in error_callback
        future.result()
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py", line 636, in finish_user_spawn
        await spawn_future
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 489, in spawn
        raise e
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/user.py", line 409, in spawn
        url = await gen.with_timeout(timedelta(seconds=spawner.start_timeout), f)
      File "/usr/local/lib/python3.6/dist-packages/kubespawner/spawner.py", line 1636, in _start
        events = self.events
      File "/usr/local/lib/python3.6/dist-packages/kubespawner/spawner.py", line 1491, in events
        for event in self.event_reflector.events:
      File "/usr/local/lib/python3.6/dist-packages/kubespawner/spawner.py", line 72, in events
        key=lambda x: x.last_timestamp,
    TypeError: '<' not supported between instances of 'NoneType' and 'datetime.datetime'
    
[E 2020-02-05 10:47:55.603 JupyterHub pages:148] Failed to spawn single-user server with form
    Traceback (most recent call last):
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/pages.py", line 146, in post
        await self.spawn_single_user(user, options=options)
      File "/usr/local/lib/python3.6/dist-packages/jupyterhub/handlers/base.py", line 737, in spawn_single_user
        status, spawner._log_name))
    tornado.web.HTTPError: HTTP 500: Internal Server Error (Spawner failed to start [status=1]. The logs for admin may contain details.)

The problem comes from the format used by Kubernetes to send a time variable: the event's last_timestamp can be None, which breaks the sort in kubespawner (see the TypeError above).

How to quickly solve the problem

  • Stay on Kubernetes 1.15
    or
  • Disable the events by modifying the values.yaml file of FADI; add the following configuration:
  jupyterhub:
    enabled: true
    singleuser:
      events: false

How to definitively solve the problem

  • Update the version of kubespawner in the JupyterHub subchart. The current version is:
jupyterhub-kubespawner==0.10.1

Reference

Create a local version vs production

Is your feature request related to a problem? Please describe.

Provide a local version, to be able to run the FADI stack locally, as well as a production version. The stack currently uses too many resources to run locally.

Describe the solution you'd like

Provide two different values.yaml files, one for Minikube, one for production.

We could also pass an additional minikube.yaml values file that configures the resources used (RAM, CPU).

helm upgrade --install ${NAMESPACE} cetic/fadi -f ./values.yaml -f minikube.yaml --namespace ${NAMESPACE} --tiller-namespace tiller
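
A hedged sketch of what such a minikube.yaml could contain; the key names are assumptions about the sub-charts and should be checked against the actual values.yaml:

# minikube.yaml - reduced footprint for local runs
spark:
  worker:
    replicaCount: 1
zookeeper:
  replicaCount: 1
postgresql:
  resources:
    requests:
      memory: 256Mi
      cpu: 250m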

Describe alternatives you've considered

Explain in the documentation how to use less/more resources.

Data Catalog

Is your feature request related to a problem? Please describe.

  • data exchange format changes over time
  • various formats to describe the data (xls, doc, XML schema DTD, json schema, ...)

Describe the solution you'd like

A data catalog focuses on metadata management: data catalog tools centralize metadata in one location, helping to make data sources more discoverable for users.

Describe alternatives you've considered

Dremio, CKAN

Additional information

This issue is linked with

  • data loading considerations: loading from the data lake to the data warehouse would use information from the data catalog
  • data validation: data ingested (and produced) by an implementation of FADI should be validated according to the schema described in the data catalog

Enhancing the configuration of the time range in the user guide to explore data using Superset

Is your feature request related to a problem? Please describe.

When following the user guide to explore data using Superset, I can't display the chart. After investigation, I found the solution, which consists in customizing the time range from 2019-06-23 16:00:00 to 2019-06-28 16:00:00 (instead of choosing the default option, Last quarter).

Describe the solution you'd like
To fix this limitation, the documentation should be enhanced as follows (in the part of the user guide about creating a new chart):

  • In the Data tab
    • in Time section,
      • Time Grain: hour.
      • in Time range, press the custom tab
        • Start / end: copy/paste 2019-06-23 16:00:00 and 2019-06-28 16:00:00

[screenshot: 2020-01-21 11:06:15]

Postgresql secrets not found

What happened:

When I try to do the user guide, the command export POSTGRES_PASSWORD=$(kubectl get secret --namespace fadi fadi-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode) returns this error Error from server (NotFound): secrets "fadi-postgresql" not found.

Any idea how to resolve this issue?
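
A first step, suggested here as a sketch, is to list the secrets that actually exist in the namespace and re-run the export with the name found there (the PostgreSQL secret is usually named after the Helm release, e.g. <release>-postgresql):

kubectl get secrets -n fadi
export POSTGRES_PASSWORD=$(kubectl get secret --namespace fadi <secret-name> -o jsonpath="{.data.postgresql-password}" | base64 --decode)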

Add support for MLFlow

In order to improve the usability of FADI when deployed for Machine Learning / Data Science projects, a support for MLFlow should be added.

MLFlow is a relatively recent, open source project from Databricks for storing and managing metrics that relate to ML models. Due to its loose coupling, this tool can be used with a large set of ML libraries.

From the user's point of view, MLFlow is essentially a REST API for submitting quality metrics, plus a Web application for managing them.

Is your feature request related to a problem? Please describe.
No, it's a suggestion for an extension improving the functional coverage of FADI instances.

Describe the solution you'd like
Helm charts should be added to FADI, in order to be able to deploy and exploit an instance of MLFlow.

Describe alternatives you've considered
KubeFlow looks like a "natural" alternative, but it only focuses on the Tensorflow framework, which makes it more specific.

Additional context
N/A

Set HttpInvoke schedule to 120s

Update the user guide NiFi section to change the CSV download frequency to something other than 0 (a schedule of 0 puts a bit too much strain on Minikube by flooding the DB).

Launch Jupyter

When we launch the Jupyter service using minikube service -n fadi proxy-public, two tabs with two IP addresses are opened in the browser: one is responsive and the second one is unresponsive.

Is this the expected behavior?

Management of application logs

Is your feature request related to a problem? Please describe.

Easily monitor and consult the application logs (errors, warnings, debug), and trace the actions of users a posteriori (for example in case of an audit).
The goal is to set up a tool for log management to centralize logs of the FADI components and other system components.

Describe the solution you'd like

  • add Elastic-stack to the toolbox: ElasticSearch, Logstash, Kibana, ...
  • installation and usage documentation

Describe alternatives you've considered

Apache Solr

Windows support

Check and update the (Minikube) installation instructions for windows support:

  • deploy.sh adaptations (seems there is a Windows vs Linux file encoding issue)
  • can/should the WSL be used? If yes, how?

Easier installation of the FADI stack

The following block installs and configures the FADI cluster.

kubectl config set-context minikube
minikube addons enable ingress
cd helm
# you can edit values.yaml file to customise the stack
./deploy.sh
# see deploy.log for connection information to the various services
# specify the fadi namespace to see the different pods
kubectl config set-context minikube --namespace fadi

To make the installation of FADI easier, would it be possible to integrate the different commands (kubectl config ...) into the deploy.sh script?
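
A hedged sketch of how those commands could be folded into deploy.sh, guarding the Minikube-specific parts so the script still works on other clusters (the guard and ordering are assumptions, not the actual deploy.sh content):

#!/bin/bash
set -e
# Minikube-only preparation
if command -v minikube >/dev/null 2>&1; then
  kubectl config set-context minikube
  minikube addons enable ingress
fi
# ... existing deploy.sh logic (helm upgrade --install ..., deploy.log, ...) ...
# point subsequent kubectl commands at the fadi namespace
kubectl config set-context "$(kubectl config current-context)" --namespace fadi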

Add support for custom environment with Jupyter

Is your feature request related to a problem? Please describe.

No, it's a suggestion for improving the functional coverage of FADI.

Describe the solution you'd like

A data scientist can use JupyterHub to iteratively explore data sets and provide technical solutions to various problems.

In order to do so, she frequently has to change the Jupyter environment of her notebooks in order to include some specific package, to test alternative processing frameworks, etc. Typically, each project / use case can have one or many dedicated environments with daily or weekly undergoing changes.

FADI should support such dynamic adaptation to the data scientist's needs by providing a way to efficiently manage extra dependencies.

For instance, a Web application could be provided for specifying, adapting or copying the environment right before instantiating Jupyter Hub. An interesting feature would be the possibility to inherit environments, and to share them among stakeholders.

Describe alternatives you've considered

The currently recommended way to do this is to adapt the Helm values file of the underlying Kubernetes cluster and to restart the appropriate services. This is not really acceptable for an end user.

An alternative consists in specifying the additional dependencies in "conda install"-like commands at the beginning of the notebooks, but that makes these specifications notebook-specific. It also implies that the additional dependencies must be resolved each time the notebook is loaded, and environment variables/secrets must be set in the notebooks, which raises security issues.

Additional context

Please have a look at how Domino provides this feature. Basically, a Dockerfile can be edited by the end user to personalize the environment.

A nice optimization would consist in caching popular / recent / frequently used environments, so that notebooks using these environments would start faster.
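
For reference, the zero-to-jupyterhub chart already offers a coarse-grained version of this through single-user profiles; a hedged sketch of what exposing a few pre-built environments could look like in the FADI values.yaml (the image names are placeholders, not a recommendation):

jupyterhub:
  singleuser:
    profileList:
      - display_name: "Minimal environment"
        default: true
      - display_name: "Data science environment (pandas, scikit-learn, ...)"
        kubespawner_override:
          image: jupyter/datascience-notebook:latest
      - display_name: "Spark environment"
        kubespawner_override:
          image: jupyter/all-spark-notebook:latest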

Postgresql deployment failed

Describe the bug

PostgreSQL pod error in local setup with minikube

Version of Helm and Kubernetes:

Latest

What happened:

minikube start
cd helm
./deploy.sh

In the pod logs:

WARN  ==> Data directory is set with a legacy value, adapting POSTGRESQL_DATA_DIR...
WARN  ==> POSTGRESQL_DATA_DIR set to "/bitnami/postgresql/data"!!
Welcome to the Bitnami postgresql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
Send us your feedback at [email protected]
INFO  ==> ** Starting PostgreSQL setup **
INFO  ==> Validating settings in POSTGRESQL_* env vars..
INFO  ==> Initializing PostgreSQL database...
INFO  ==> postgresql.conf file not detected. Generating it...
INFO  ==> pg_hba.conf file not detected. Generating it...
INFO  ==> Deploying PostgreSQL with persisted data...
INFO  ==> Configuring replication parameters
INFO  ==> Loading custom scripts...
INFO  ==> Enabling remote connections
INFO  ==> Stopping PostgreSQL...
INFO  ==> ** PostgreSQL setup finished! **
INFO  ==> ** Starting PostgreSQL **
2019-06-18 08:34:25.737 GMT [1] LOG:  skipping missing configuration file "/bitnami/postgresql/data/postgresql.auto.conf"
2019-06-18 08:34:25.737 GMT [1] FATAL:  "/bitnami/postgresql/data" is not a valid data directory
2019-06-18 08:34:25.737 GMT [1] DETAIL:  File "/bitnami/postgresql/data/PG_VERSION" is missing.

What you expected to happen:

The stack to be available

How to reproduce it (as minimally and precisely as possible):

see above

Anything else we need to know:

Using KVM hypervisor

MQTT integration

The stack would benefit from MQTT support: the ability to ingest data sent via the MQTT protocol.

This means we would need to add an optional MQTT broker component to the architecture:

  • Helm chart if needed
  • add it to the FADI stack
  • update the doc
  • provide a usage example
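
A hedged sketch of the broker part, assuming a Mosquitto chart is added as an optional dependency of the FADI chart (the key names are assumptions); on the NiFi side, a ConsumeMQTT processor pointing at the broker service would then ingest the messages:

mosquitto:
  enabled: true
  service:
    type: ClusterIP
    port: 1883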

Add Documentation about Adminer, and deprecate pgAdmin

Is your feature request related to a problem? Please describe.

Adminer has been added to the FADI stack (See Issue #89). (https://www.adminer.org/ supports MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, SimpleDB, Elasticsearch, MongoDB)
Now we need to update the doc (install, userguide, slides) to deprecate pgadmin.

Checklist

  • Update the install script (update helm-fadi) and the related documentation.
  • Update the user guide.
  • Update the different slides and presentations (https://fadi.presentations.cetic.be , ...).

Grafana is readonly

Describe the bug: In a GCP Kubernetes deployment of FADI, Grafana is read-only (cannot create dashboards, playlists, ...)
Version of Helm and Kubernetes: latest on GCP
Which chart: Grafana
What happened: log in as admin/password1, dashboard creation UI elements are missing
What you expected to happen: the dashboard creation UI elements to be present (along with the other admin features for the admin user)
How to reproduce it: go to the bd@MA testbed (probably it can also be reproduced in the minikube dev env)
