Comments (10)
Dear @TheOneWhoKnocks96,
thanks for asking. I will try to explain as well as possible.
In the option `source`, do you use an HTTP GET request to the database, or is it there to take the JSON payload published on the topic and transform it in order to put it into InfluxDB?
The configuration object `[weewx.data-export]` referenced above is not about "utilizing" but rather about "defining". It is also not about "putting" data into InfluxDB but rather about "retrieving" data from InfluxDB, as it actually implements the data export functionality.
The `source` option defines an HTTP GET endpoint which, when requested, will run an InfluxDB query against the database defined by `target`, after the request has been transformed through the machinery defined by `transform`.
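To make this more tangible, a sketch of such a configuration object could look roughly like the following. The option names (`source`, `target`, `transform`) follow the discussion above, but the concrete URI templates and component paths are illustrative assumptions modeled after the Kotori documentation, not copied from a real deployment:

```ini
; Hypothetical sketch of a data export configuration object.
; URI templates and component references are illustrative only.
[weewx.data-export]
enable      = true
type        = application
realm       = weewx

; HTTP GET endpoint which triggers the export when requested
source      = http:/api/weewx/{address:.*}/{slice:.*}/data.{suffix} [GET]

; InfluxDB database/measurement the query will run against
target      = influxdb:/{database}?measurement={measurement}

; Machinery translating the incoming request into an InfluxDB query
transform   = kotori.daq.intercom.strategies:WanBusStrategy.topology_to_storage_location
```

With something like this in place, requesting the endpoint with a plain HTTP GET would deliver the exported data in the format designated by the URL suffix.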
I don't really understand the `transform` option; I just know you transform the JSON parameters into queries.
Exactly. All the heavy lifting happens under the hood and is implemented by the designated software components. The configuration object just ties things together.
Hope this helps.
With kind regards,
Andreas.
from kotori.
Ah okay, thank you for responding so quickly!
So the `transform` option is essentially responsible for posting data into InfluxDB? Is that the way you POST data into InfluxDB?
I understand the `source` and `target` options; they are for retrieving data from InfluxDB. But I don't understand why. Is it for Grafana?
Thanks a lot for the help.
Best Regards.
Data export
So the `transform` option is essentially responsible for requesting data from InfluxDB?
Right. The `transform` setting defines components which take the request data and build a sensible InfluxDB query and an appropriate request from it, in order to pull data from InfluxDB.
The `source` and `target` options I understand; they are for retrieving data from InfluxDB.
Yeah, they essentially connect a generic HTTP endpoint provided by Kotori semantically to InfluxDB, with some smart transformations in between.
Data acquisition
Is that the way you POST data into InfluxDB?
No, this is solely for getting data out of InfluxDB. The name `[weewx.data-export]` clearly designates this configuration object as export-related.
Grafana
But I don't understand why. Is it for Grafana?
Nothing involving Grafana is going on here. The export functionality is solely for requesting data from InfluxDB with added convenience; see also https://getkotori.org/docs/handbook/export/.
Grafana will not talk to Kotori at all; it will talk directly to InfluxDB to get the data to be displayed on dashboards. The export subsystem of Kotori outlined here is completely orthogonal to that.
With kind regards,
Andreas.
Data export
Okay, now I understand. But why is the export necessary if I can already see the data in Grafana?
Data acquisition
So if this is the way to retrieve data from InfluxDB, how is the data imported into InfluxDB in the first place? Is it through the HTTP POST endpoint, with the defined topic?
Thanks for the help!!
Best Regards.
Dear @TheOneWhoKnocks96,
Data export
Okay, now I understand. So this is the way to retrieve data from InfluxDB.
Right.
Grafana
But why is the export necessary if I can see data in Grafana already?
The export feature is a generic functionality and has nothing to do with the Grafana interconnect feature for instantly graphing measurement data. It can be used as a RESTful API for exporting per-channel data, with all sorts of convenient filtering, projection, and output rendering options.
Data acquisition through MQTT
How is the data imported to InfluxDB?
When coming from WeeWX, the data is ingested through MQTT [1]. However, ingesting data through HTTP [2] is also possible, using a configuration object similar to the one outlined above. To get an idea of how to do this, you might want to have a look at [3].
Data acquisition through HTTP
Is data acquisition also possible through HTTP POST requests to the defined topic / channel address?
Absolutely. As you can see from a respective configuration snippet like the one outlined in [4], the HTTP URI configured within the `source` address option is designated as an HTTP POST endpoint. This endpoint will forward HTTP POST requests to MQTT, using the topic defined within the `target` address option.
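For reference, a forwarder configuration in the spirit of [4] might look roughly like the following sketch. The section name and concrete URI templates are illustrative assumptions, not a verbatim copy of the linked example:

```ini
; Hypothetical sketch of an HTTP-to-MQTT forwarder configuration object,
; modeled after [4]. Concrete URIs are illustrative only.
[mqttkit-1.http-data-forwarder]
enable      = true
type        = application
realm       = mqttkit-1

; HTTP POST endpoint accepting measurement data
source      = http:/api/mqttkit-1/{address:.*}/{slice:.*}/data [POST]

; MQTT topic the payload gets republished to
target      = mqtt:/mqttkit-1/{address}/{slice}/data.json

app_factory = kotori.io.protocol.forwarder:boot
```

The essential idea is that `source` declares an HTTP POST endpoint, while `target` addresses an MQTT topic, so payloads posted via HTTP converge onto the regular MQTT acquisition path.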
With kind regards,
Andreas.
[1] https://getkotori.org/docs/handbook/acquisition/protocol/mqtt.html
[2] https://getkotori.org/docs/handbook/acquisition/protocol/http.html
[3] https://getkotori.org/docs/applications/forwarders/http-to-mqtt.html
[4] https://github.com/daq-tools/kotori/blob/0.22.7/etc/examples/forwarders/http-to-mqtt.ini#L34-L42
MQTT data acquisition by example
On [1], you have:

Define where to send data to:

```shell
export MQTT_BROKER=localhost
export MQTT_TOPIC=mqttkit-1/testdrive/area-42/node-1
```

And this:

```shell
# Publish single sensor readings
mosquitto_pub -h $MQTT_BROKER -t $MQTT_TOPIC/data/temperature -m '42.84'
```
This is used to publish measurement data to the MQTT broker and not directly into the InfluxDB database, right?
Recap
Let me get this straight by reiterating it once more:
- First, I used an MQTT plugin for WeeWX to publish data to the MQTT broker with a specific topic.
- Then, with Kotori, I subscribed to that topic, as specified in the configuration file `weewx.ini`.
- By subscribing to that topic, I now have the JSON payload, and I want to ingest that data into InfluxDB accordingly. I don't want to publish the data to the broker again like in [1]. Are you sure the data is ingested into InfluxDB through MQTT, without using HTTP POST requests?

I am sorry if I am bothering you in any way. I just want to understand how the ingestion of data into InfluxDB is performed.
Best Regards.
[1] https://getkotori.org/docs/handbook/acquisition/protocol/mqtt.html
I think I got it:
In the `weewx.ini` file you have:

```ini
; -----------------------------
; Data acquisition through MQTT
; -----------------------------
[weewx]
enable      = true
type        = application
realm       = weewx
mqtt_topics = weewx/#
app_factory = kotori.daq.application.mqttkit:mqttkit_application
```
So you use the mqttkit application to ingest data into InfluxDB? What does the `app_factory` option mean?
Best Regards.
Dear @TheOneWhoKnocks96,
Introduction
I am sorry if I am bothering you in any way. I just want to understand how the ingestion of data into InfluxDB is performed.
No worries, just keep asking.
The command line example outlined above is used to publish measurement data to the MQTT broker and not directly into the InfluxDB database, right?
Right. By publishing measurement data to the MQTT bus and configuring Kotori appropriately, it will converge the data into the InfluxDB time-series database without further ado.
First, I used an MQTT plugin for WeeWX to publish data to the MQTT broker with a specific topic. Then, with Kotori, I subscribed to that topic, as specified in the configuration file `weewx.ini`.
Exactly, that's how it works.
The main per-realm configuration object
The core of the data acquisition path through MQTT is exactly the configuration object `[weewx]` you referenced above.

- This object establishes a realm called `weewx` inside Kotori, which designates the tenant/vendor from the perspective of a multi-tenant system.
- Kotori then subscribes to the MQTT topic `weewx/#` in order to see everything that is going on in this space. The service then just sits there and waits for things to happen.
- When something is published to an MQTT topic this application is subscribed to, the payload will be evaluated through the machinery defined by the software component `kotori.daq.application.mqttkit`, which in turn is assembled from several baseline software components from the core of Kotori.
Details
The main workhorse here is definitely `MqttInfluxGrafanaService`, which is spun up through https://github.com/daq-tools/kotori/blob/0.22.7/kotori/daq/application/mqttkit.py#L26-L32, essentially tying everything together.
One of the conventions of the MQTTKit communication flavor is that it uses several MQTT topic segments, implementing the »quadruple hierarchy strategy« in a wide-network addressing scheme.
This is implemented through the `WanBusStrategy` component used within `MqttInfluxGrafanaService`, which takes responsibility for properly receiving and decoding message payloads arriving on the MQTT bus, according to the specific conventions implemented there.
The MQTTKit communication flavor is also used as a blueprint for more vendor-specific implementations, like the one operated within the environmental sensor network of the Hiveeyes project. After reading through the inline documentation of `WanBusStrategy`, the Hiveeyes data acquisition channel addressing will feel familiar.
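To illustrate the »quadruple hierarchy« idea, here is a minimal sketch of decoding a topic like `mqttkit-1/testdrive/area-42/node-1/data/temperature` into its address segments. This is not the actual `WanBusStrategy` implementation; the segment names (realm, network, gateway, node) and the helper function are assumptions for illustration only:

```python
# Illustrative sketch only: split an MQTTKit-style topic into the four
# hierarchy segments plus a trailing suffix. The real decoding lives in
# Kotori's WanBusStrategy component and is more elaborate.

def decode_topic(topic: str) -> dict:
    """Split "<realm>/<network>/<gateway>/<node>/<suffix...>" into parts."""
    segments = topic.split("/")
    if len(segments) < 5:
        raise ValueError(f"Topic does not match the quadruple hierarchy: {topic}")
    realm, network, gateway, node = segments[:4]
    return {
        "realm": realm,      # tenant/vendor, e.g. "mqttkit-1" or "weewx"
        "network": network,  # e.g. "testdrive"
        "gateway": gateway,  # e.g. "area-42"
        "node": node,        # e.g. "node-1"
        "suffix": "/".join(segments[4:]),  # e.g. "data/temperature"
    }

address = decode_topic("mqttkit-1/testdrive/area-42/node-1/data/temperature")
print(address)
```

The decoded address segments are what ultimately map a payload to its storage location in InfluxDB.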
We hope this sheds more light on how data flows from MQTT through Kotori into InfluxDB. Feel free to ask about more specific things which might still be unclear.
With kind regards,
Andreas.
P.S.: Just for the sake of completeness, to answer all your questions:
By subscribing to that topic, I now have the JSON payload, and I want to ingest that data into InfluxDB accordingly. I don't want to publish the data to the broker again like in [1]. Are you sure the data is ingested into InfluxDB through MQTT, without using HTTP POST requests?
This is exactly what Kotori is doing here for you. Additionally, it communicates with Grafana in order to kickstart some instant dashboards reflecting the telemetry fields of measurement data ingested through the process outlined above.
P.P.S.: All of these mechanisms and subsystems have been designed to a) get you started with data acquisition and graphing instantly, and b) give you the freedom to establish any number of data acquisition channels without having to provision them explicitly beforehand. The motto here is: just throw a bunch of JSON at the system and watch the data flowing into Grafana.
Thanks for the clarification!!
Best Regards.
Dear @RuiPinto96,
thank you very much for the conversation we had about the internals of Kotori the other day. I have finally been able to refactor the contents into a FAQ section within the documentation; see [1-3]. If you feel this doesn't answer your questions yet, don't hesitate to reopen.
With kind regards,
Andreas.
[1] https://getkotori.org/docs/faq/acquisition.html
[2] https://getkotori.org/docs/faq/weewx.html
[3] https://getkotori.org/docs/faq/grafana-vs-export.html