Clojure MQTT client
I see that Machine Head has been in "beta" for quite a while. Given that and the low number of issues here, is it perhaps ready for a 1.0.0 release? If not, what is missing or needed?
Thanks!
I think the domain name has expired; I can't seem to get to the docs website, and it appears to show a Namecheap parking page instead.
Worth moving this to GitHub Pages or similar, using a github.io domain, to avoid having to worry about it?
Raised a PR to address this: #29
Hi,
I'm trying to use clean session false. But since subscribing is only possible after connecting, I see that messages go to :on-unhandled-message.
So now I have to route the messages coming in via :on-unhandled-message myself, then subscribe, with some logic to only subscribe after dealing with most of the :on-unhandled-message traffic.
I thought that subscribing before connecting could avoid this (and I tried), but it didn't work (using mosquitto, and mostly the same code as the machine_head client source).
Is it possible to handle clean session false so that subscribing picks these messages up and receives them all in broker order (re-publishing messages from :on-unhandled-message should be avoided)?
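One pattern worth trying (a sketch, not the library's documented answer): with :clean-session false, subscribe from the :on-connect-complete callback so the handler is registered the moment the connection finishes. The assumption here, based on the callback arity seen in other reports, is that the callback's first argument is the connection itself.

```clojure
(ns example.durable
  (:require [clojurewerkz.machine-head.client :as mh]))

(defn handle [^String topic _meta ^bytes payload]
  (println topic "->" (String. payload "UTF-8")))

(defn connect-durable []
  (mh/connect "tcp://127.0.0.1:1883"
              {:client-id "my-durable-client"    ;; must stay stable across runs
               :opts {:clean-session false}
               :on-connect-complete
               (fn [conn _reconnection? _server-uri]
                 ;; Subscribe as soon as the (re)connection completes.
                 ;; QoS must be >= 1 for the broker to queue messages while
                 ;; the client is offline; on a durable session the broker
                 ;; then delivers them in its own order, rather than them
                 ;; falling through to :on-unhandled-message.
                 (mh/subscribe conn {"my/topic" 1} handle))}))
```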
ps: related to this.
Just saw that generate-id can generate ids over the allowed 23 bytes if the host name is rather long.
I guess the best thing is to take the last 23 bytes?
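A minimal sketch of that idea, assuming the generated id is ASCII (so characters and bytes coincide): keep the trailing 23 characters, since the unique suffix matters more than the shared host-name prefix.

```clojure
;; MQTT 3.1 caps client ids at 23 bytes. Truncating to the *last*
;; 23 characters preserves the unique tail of a generated id.
(defn truncate-client-id [^String id]
  (subs id (max 0 (- (count id) 23))))

;; (truncate-client-id "a-very-long-host-name.example.com-12345")
;; keeps only the trailing 23 characters
```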
It's been nearly 6 years since the latest release (1.0.0). In the meantime even Paho has released several new versions with some fixes, and there are already a couple of commits exposing new options for the connect operation.
Do you think it would be a good time to do a 1.1.0 release with all those changes?
This example code:
(mh/subscribe conn ["hello"] (fn [^String topic _ ^bytes payload]
                               (println (String. payload "UTF-8"))))
Does this:
java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map$Entry
at clojure.lang.APersistentMap$KeySeq.first (APersistentMap.java:166)
clojure.lang.RT.seqToTypedArray (RT.java:1719)
clojure.core$into_array.invoke (core.clj:3321)
clojurewerkz.machine_head.conversion$__GT_topic_array.invoke (form-init299278996084011721.clj:6)
clojurewerkz.machine_head.client$subscribe.invoke (client.clj:113)
clojurewerkz.machine_head.client$subscribe.invoke (client.clj:101)
strophetest.core$eval12913.invoke (form-init299278996084011721.clj:1)
clojure.lang.Compiler.eval (Compiler.java:6782)
clojure.lang.Compiler.eval (Compiler.java:6745)
clojure.core$eval.invoke (core.clj:3081)
clojure.main$repl$read_eval_print__7099$fn__7102.invoke (main.clj:240)
clojure.main$repl$read_eval_print__7099.invoke (main.clj:240)
clojure.main$repl$fn__7108.invoke (main.clj:258)
clojure.main$repl.doInvoke (main.clj:258)
clojure.lang.RestFn.invoke (RestFn.java:1523)
clojure.tools.nrepl.middleware.interruptible_eval$evaluate$fn__623.invoke (interruptible_eval.clj:58)
clojure.lang.AFn.applyToHelper (AFn.java:152)
clojure.lang.AFn.applyTo (AFn.java:144)
clojure.core$apply.invoke (core.clj:630)
clojure.core$with_bindings_STAR_.doInvoke (core.clj:1868)
clojure.lang.RestFn.invoke (RestFn.java:425)
clojure.tools.nrepl.middleware.interruptible_eval$evaluate.invoke (interruptible_eval.clj:56)
clojure.tools.nrepl.middleware.interruptible_eval$interruptible_eval$fn__665$fn__668.invoke (interruptible_eval.clj:191)
clojure.tools.nrepl.middleware.interruptible_eval$run_next$fn__660.invoke (interruptible_eval.clj:159)
clojure.lang.AFn.run (AFn.java:22)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:617)
java.lang.Thread.run (Thread.java:745)
The solution is to supply the topic as {"hello" 0} not ["hello"].
The documentation is probably out of date.
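Following the workaround above, a working call supplies topics as a map of topic string to QoS level:

```clojure
;; Topics go in as a topic->QoS map, not a vector; the vector form
;; from the docs triggers the ClassCastException shown above.
(mh/subscribe conn {"hello" 0}
              (fn [^String topic _meta ^bytes payload]
                (println (String. payload "UTF-8"))))
```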
I don't know if this is an issue, but I am accumulating lots of directories with long names, starting with the number 13, in the current working directory whenever I use Machine Head. Perhaps this is a Paho thing; if so, please close this.
MQTT clients can use write-ahead log implementations (persisters). MH needs to provide a way to specify one when creating a client. Paho Java client provides two default implementations (in RAM and file-based).
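Until Machine Head exposes such an option, the Paho client can be constructed directly via interop; the sketch below shows Paho's two stock persistence implementations (this bypasses mh/connect, so it is a workaround, not the library's API).

```clojure
(ns example.persist
  (:import [org.eclipse.paho.client.mqttv3 MqttClient]
           [org.eclipse.paho.client.mqttv3.persist MemoryPersistence
                                                   MqttDefaultFilePersistence]))

(defn client-with-ram-persistence [uri id]
  ;; in-RAM write-ahead log: nothing written to disk,
  ;; but in-flight messages are lost on crash
  (MqttClient. uri id (MemoryPersistence.)))

(defn client-with-file-persistence [uri id dir]
  ;; file-based write-ahead log rooted at dir,
  ;; instead of the current working directory
  (MqttClient. uri id (MqttDefaultFilePersistence. dir)))
```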
The broker I want to connect to needs a username and a password. I don't find anything about this in the docs. Is there a way to specify them? Thank you in advance!
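A sketch under an assumption: that Machine Head forwards :username and :password from :opts to Paho's MqttConnectOptions (which does have setUserName/setPassword). The exact option keys are worth confirming against clojurewerkz.machine-head.conversion.

```clojure
;; :username / :password key names are assumed here, not confirmed
;; against the library's option-conversion code.
(mh/connect "tcp://broker.example.com:1883"
            {:client-id (mh/generate-id)
             :opts {:username "alice"
                    :password "s3cret"}})
```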
For machine_head 1.0.0 with AWS IoT MQTT.
I am using connect options as below:
{:on-connect-complete (fn [_ reconnection? server-uri]
                        (log/info (if reconnection? "reconnected" "connected") "to" server-uri))
 :on-connection-lost (fn [reason]
                       (log/warn "connection lost:" reason))
 :client-id (str "client-" (UUID/randomUUID))
 :opts {:socket-factory socket-factory
        :auto-reconnect true
        :clean-session false}}
In the AWS IoT logs I can see this disconnect event logged:
{
"timestamp": "2022-01-26 14:55:18.467",
"logLevel": "INFO",
"traceId": "e9c8a069-700b-1b9b-9288-c15fe84fc522",
"accountId": "123456789012",
"status": "Success",
"eventType": "Disconnect",
"protocol": "MQTT",
"clientId": "client-9b3fe2f7-f962-4e03-892c-d6a661d0d3e1",
"principalId": "9ba1cd8b6a7925833e460593461b7597581ddead9a065abf35346f8e537dffce",
"sourceIp": "21.356.16.092",
"sourcePort": 36532,
"disconnectReason": "MQTT_KEEP_ALIVE_TIMEOUT"
}
Even with auto-reconnect set to true, the client did not reconnect and resume processing.
How should I handle this scenario using machine_head?
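Two hedged ideas that may help here, neither confirmed against this exact setup: shorten the keep-alive so the client pings before the broker's timeout (:keep-alive-interval is an assumed option key, mirroring Paho's MqttConnectOptions.setKeepAliveInterval, in seconds), and re-subscribe from :on-connect-complete, since Paho's auto-reconnect restores the connection but not necessarily the application's subscriptions.

```clojure
;; Assumptions: :keep-alive-interval is forwarded to Paho, and the first
;; argument to :on-connect-complete is the connection itself.
{:client-id (str "client-" (UUID/randomUUID))
 :opts {:socket-factory socket-factory
        :auto-reconnect true
        :clean-session false
        :keep-alive-interval 30}   ;; ping well inside the broker's timeout
 :on-connect-complete
 (fn [conn reconnection? server-uri]
   (log/info (if reconnection? "reconnected" "connected") "to" server-uri)
   ;; re-establish subscriptions on every (re)connect
   (mh/subscribe conn {"my/topic" 1} handle-message))}
```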
The documentation suggests it should be possible to subscribe to more than one topic on the same connection, which is sort of true; however, all of the subscriptions end up pointing at the last function, meaning it's impossible to send different topics to different functions without writing custom dispatch code.
In the following I would expect messages to test.one to print "1: my message", but it actually prints "3: my message", as do messages to test.two and test.thr:
(defn testsubs []
  (doto (mh/connect "tcp://127.0.0.1:1883" (mh/generate-id))
    (mh/subscribe ["test.one"] (fn [_ _ data] (println "1: " data)))
    (mh/subscribe ["test.two"] (fn [_ _ data] (println "2: " data)))
    (mh/subscribe ["test.thr"] (fn [_ _ data] (println "3: " data)))))
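Given that behaviour, a minimal custom-dispatch workaround is to register one subscription with one handler that routes by topic (topics given as a topic->QoS map, per the ClassCastException fix above):

```clojure
;; Workaround sketch: a single handler that dispatches on topic,
;; since later subscribe calls replace the earlier callback.
(def handlers
  {"test.one" (fn [data] (println "1: " data))
   "test.two" (fn [data] (println "2: " data))
   "test.thr" (fn [data] (println "3: " data))})

(defn dispatching-handler [^String topic _meta payload]
  (when-let [h (get handlers topic)]
    (h payload)))

(defn testsubs []
  (doto (mh/connect "tcp://127.0.0.1:1883" (mh/generate-id))
    (mh/subscribe {"test.one" 0, "test.two" 0, "test.thr" 0}
                  dispatching-handler)))
```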
We need to add some integration tests that actually demonstrate how QoS works as opposed to simply specifying a value to various functions but never asserting on QoS-related behaviour.
I have been starting to use machine-head in a project, and I find myself craving a core.async-based API, more along the lines of https://github.com/bguthrie/async-sockets
Would you want something like that included in machine-head? Or would it complect it unnecessarily?
No big deal if not, I'll probably write my own library that either depends on machine-head or forks it.
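For reference, a thin wrapper along those lines can be quite small: push incoming messages onto a channel from the subscribe callback. mqtt-chan below is a hypothetical helper, not part of machine-head.

```clojure
(ns example.async
  (:require [clojure.core.async :as a]
            [clojurewerkz.machine-head.client :as mh]))

(defn mqtt-chan
  "Returns a channel receiving {:topic t :payload bytes} maps for the
   given topic->QoS map."
  [conn topics]
  (let [ch (a/chan (a/sliding-buffer 64))]   ;; drop oldest under backpressure
    (mh/subscribe conn topics
                  (fn [topic _meta payload]
                    (a/put! ch {:topic topic :payload payload})))
    ch))

;; usage:
;; (let [ch (mqtt-chan conn {"sensors/#" 1})]
;;   (a/go-loop []
;;     (when-let [{:keys [topic payload]} (a/<! ch)]
;;       (println topic (String. ^bytes payload "UTF-8"))
;;       (recur))))
```

The sliding buffer is one design choice among several; a fixed buffer with a/>! would instead apply backpressure to Paho's delivery thread.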
QoS levels 0 and 1 work as expected, and QoS level 3 is rejected as expected.
Publishing at QoS level 2 gives the following error:
EOFException java.io.DataInputStream.readByte (DataInputStream.java:267)