
Comments (54)

chrigi avatar chrigi commented on May 18, 2024 10

That is absolutely fine, and I didn't want to put any pressure/blame on regular contributors. From the issue and PR counts, as well as the created/updated dates on issues and milestones, it currently looks to me like Huginn does not get much attention anymore.

If so many pull requests and issues are stale, it would benefit the project (imho) to close the old ones. I've seen there are already quite a few labels set up, and even very well made milestones, but they seem to have gone unused for the last 2-3 years.

I think a good cleanup of all the old/stale issues & PRs (& milestones), as well as triaging what remains with labels & milestones, would make it a lot clearer what is being worked on and what is going on with the project. Maybe even creating issue & PR templates (and a stale bot) could help keep this organized in the future.

from huginn.

alrik11es avatar alrik11es commented on May 18, 2024 6

I've been testing OpenFaaS since this morning. Wow, it is really easy to set up on my shabby VPS server. It works flawlessly with PHP, so I'm really happy with it.
With solutions like this, this protocol interface is clearly not needed.
The only complaint I have with OpenFaaS is that the interface is awful and short on options... but I will survive with it anyway.

Thanks guys.


chrigi avatar chrigi commented on May 18, 2024 3

How active is this project? It seems like a lot of Pull Requests and Issues have amassed so I was wondering if Huginn is still being maintained or even developed further. What's the plan?


virtadpt avatar virtadpt commented on May 18, 2024 2

What about extending the Scheduler and Agent registry inside Huginn (I don't know what else to call the subsystem where Huginn keeps a list of the Agent types it supports) so that a user could upload custom agents written in other languages (PHP, Python, shell scripts, etc.), tell Huginn that they exist (so that they're available as options when setting up a new Agent), and run them alongside all the others?

An interface would need to be defined to pass events to those user-defined agents the same way they are passed to the internal ones, and another to pass events from those user-defined agents back to everything else.
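One minimal way to define such an interface, sketched here under the assumption that user-defined agents are standalone executables exchanging JSON on stdin/stdout (a convention invented for illustration; Huginn provides no such mechanism):

```python
import json
import subprocess
import sys

def run_external_agent(command, event):
    """Pass a Huginn-style event to a user-defined agent as JSON on stdin
    and read the events it emits back, one JSON object per stdout line.
    (A hypothetical convention for illustration, not an existing Huginn API.)"""
    proc = subprocess.run(command, input=json.dumps(event),
                          capture_output=True, text=True, check=True)
    return [json.loads(line) for line in proc.stdout.splitlines() if line.strip()]

# Example "agent" in another language -- here just an inline Python one-liner
# that echoes the payload back as a single new event.
echo_agent = [sys.executable, "-c",
              "import sys, json; e = json.load(sys.stdin); "
              "print(json.dumps({'echo': e}))"]
events = run_external_agent(echo_agent, {"text": "hello"})
print(events)  # [{'echo': {'text': 'hello'}}]
```

Because the contract is just "JSON in, JSON lines out", the same wrapper would work for PHP, Perl, or shell-script agents without Huginn knowing anything about their language.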

Heck, I'd kick some money in on Bountysource for this kind of functionality now that I think about it a little. Can we crowdfund features on Bountysource?


0xdevalias avatar 0xdevalias commented on May 18, 2024 2


alrik11es avatar alrik11es commented on May 18, 2024 2

Ok, ok, I'm going to try OpenFaaS; if you all say that is what I need, I trust you. I don't want to reinvent the wheel, obviously. If the wheel you suggest works easily on a cheap VPS, I'm fine with that. Let me learn and work with it over the next few days so I can form an opinion based on something real.


0xdevalias avatar 0xdevalias commented on May 18, 2024 1

I don't really have the capacity to explore/implement it right now (maybe one day...), but my feeling is that it wouldn't be all that hard to create an OpenFaaS Huginn Agent to allow it to be plugged easily into Huginn workflows, and then everyone could get the best of both worlds!

https://docs.openfaas.com/architecture/gateway/#explore-or-update-the-swagger-api-documentation


ohingardail avatar ohingardail commented on May 18, 2024 1

Acknowledged. I wasn't sure how important this Ubuntu EOL issue was, since it's some way off and there may be technical details and mitigations the community knows about that I don't. However, a new issue report may be the best way to find out!


virtadpt avatar virtadpt commented on May 18, 2024 1

On the subject of the future of Huginn: it appears that the current version of Huginn doesn't work on the latest version of Ubuntu (22.04). There's no reason why it should; the Huginn installation notes only list Ubuntu 14.04 and 18.04 as supported.

I'm running Huginn on Ubuntu Server 22.04 LTS and have been since a month after it came out. One thing I'm doing is using Ruby installed with rvm, not the Ubuntu Ruby packages.


biznickman avatar biznickman commented on May 18, 2024

Ultimately I think the #1 step is creating a standard deployment approach. After working on the Heroku side of things, I suddenly realized how each new platform you can deploy to adds complexity (in the sense that we'll now need people to maintain multiple forks of the project). Whether it's run on Heroku, AWS, Linode, or something else... I think there should be a standardized deployment approach first and foremost. Thoughts?

from huginn.

progrium avatar progrium commented on May 18, 2024

Interesting that was brought up. I've been thinking about a Homebrew-like system for Heroku, DotCloud, maybe EC2 for deploying specific apps. This is because I've made a few apps that "deploy themselves" and I'm realizing it would probably be better to put that logic somewhere else.


cantino avatar cantino commented on May 18, 2024

If the 'deployments' branch looks good to you both, I'll merge it. I agree that is foremost short-term.


biznickman avatar biznickman commented on May 18, 2024

@cantino what are the changes on the deployments branch? Just an FYI, that's one major reason I've yet to integrate the Heroku branch: it really only takes 30 minutes for someone to switch it to a Heroku configuration. However, it would be best if they did that with the latest version rather than trying to maintain a separate branch. Would be curious to hear your thoughts on that.


baldown avatar baldown commented on May 18, 2024

Certainly, while "installability" is important, I think some architecture points are really important as well to make Huginn as flexible as possible. Frankly, the more flexible and easy it is to produce or consume data (i.e. to write agents), the more of it will happen; the more it happens, the more powerful and useful Huginn becomes, and the more the developer and user base grows.

There are 2 key pieces in my mind to making that happen.

  1. Putting together a protocol/communications spec. This is an important aspect of standardizing and opening up development of the platform in many directions. Most importantly, I think it enables #2. Below are also two ideas for a communications architecture.
  2. Enable agents to be language agnostic. Personally, I'm not a Ruby dev nor do I care to be. However, if I could write Perl-based agents, I'd be putting many together in short order. In fact, I'd probably write some code to make an agent dead simple and release it on CPAN, enabling more developers. Now, this isn't Perl specific, as I'm sure the same is true for other devs in other languages. Regardless, the end goal is the same: more agents faster.

Protocol options:

  1. One really awesome option given the type of communications we're talking about would be a standard JSON or the like format passed through an MQ. You can then do single subscribers, broadcasts, etc, all fairly dynamically.
  2. Another option to consider, mostly because it already exists as a standard, is XMPP. It already gracefully handles things like:
  • dispersed servers with differing sets of clients that can all stand alone or intercommunicate.
  • one-to-one r/w communications channels
  • multi-consumer/multi-producer communications channels
  • subscription tracking for all of the above

The only real question is how well XMPP (or particular XMPP libraries) would scale if volume goes way up. Outside of this, XMPP could be a bridge to significantly simplify some of the interactions between processes.

These of course are just a couple ideas, but ones that I think would be significant steps forward in building a community and architecture for development. Thoughts?


progrium avatar progrium commented on May 18, 2024

Regarding protocol, I'd look into building on our work on HTTP Subscriptions, which is based on webhooks. It's HTTP, which makes it very simple. XMPP is a bit of a hassle and a completely different stack; it's not worth the complexity it introduces to the project.


robertjwhitney avatar robertjwhitney commented on May 18, 2024

Don't overload the project with new features, concentrate on making it dirt easy to spin this up locally, and simple to deploy. Minimize configuration.


jmartelletti avatar jmartelletti commented on May 18, 2024

I agree with @progrium, agent subscriptions should be handled via the Pubsubhubbub protocol, documented at pubsubhubbub.googlecode.com. Subscriptions can be requested with GET parameters, which also allows for strings/hashes/arrays to be used for configuration options or queries.

HTTP should be preferred over XMPP due to simplicity and familiarity, but also things like proxy support and integration with other web services.


progrium avatar progrium commented on May 18, 2024

I don't think PubSubHubbub is actually appropriate. I think HTTP Subscriptions (a simpler version of PSHB) is good enough: https://github.com/progrium/http-subscriptions


fizx avatar fizx commented on May 18, 2024

I feel like services that always jump straight to HTTP for computer-to-computer communication are doing it wrong. There's an opportunity for a simpler, more stream-friendly implementation (e.g. JSON objects, one per line, over a simple (web)socket, pipelined).

Alternately, maybe we just need to agree on what messages look like; whether a message is wrapped in an HTTP request or sent over a socket doesn't really matter much.
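A sketch of that stream-friendly framing, one JSON object per line (newline-delimited JSON), independent of whether the transport is a raw socket or a websocket; the message fields here are invented examples:

```python
import json

def encode_messages(messages):
    """Frame messages as newline-delimited JSON: one object per line,
    the stream-friendly framing suggested above."""
    return "".join(json.dumps(m, separators=(",", ":")) + "\n" for m in messages)

def decode_messages(buffer):
    """Decode a possibly partial NDJSON buffer. Returns the complete parsed
    messages plus any trailing incomplete line to prepend to the next read."""
    lines = buffer.split("\n")
    remainder = lines.pop()  # "" if the buffer ended cleanly on a newline
    return [json.loads(line) for line in lines if line], remainder

wire = encode_messages([{"event": "rain", "zip": "94110"}, {"event": "sun"}])
msgs, rest = decode_messages(wire + '{"event": "sn')  # last message cut mid-stream
print(msgs)  # [{'event': 'rain', 'zip': '94110'}, {'event': 'sun'}]
print(rest)  # {"event": "sn
```

The framing itself carries no request/response overhead, which is the pipelining advantage being argued for; the trade-off is that you lose HTTP's status codes, headers, and existing tooling.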


jmartelletti avatar jmartelletti commented on May 18, 2024

@progrium what reason do you see for using HTTP Subscriptions over PubSubHubbub? I appreciate that it's a simpler specification, but it also seems incomplete when standing next to the PSHB spec; I think agents could make good use of the PSHB discovery features too.

@fizx I think there's good reason to stick with what works, especially when you're talking about "web-scale" protocols; what better is there to leverage than HTTP, which is also perfectly stream-friendly? There are plenty of protocols out there suitable for subscription and messaging, whether XMPP, SIP, HTTP, or AMQP, but the advantage HTTP has over all of them is its ubiquity and the availability of simple tools and libraries, not to mention the greatest benefit of all: it is already well supported and equipped to pass through corporate networks and firewalls, which is always a major sticking point with alternative protocols.


progrium avatar progrium commented on May 18, 2024

Because it solves the specific problem at hand, not a bunch of other problems it doesn't have. Ideally it would be nice if this played well with WebPipes, which is based on HTTP Subscriptions.


jmartelletti avatar jmartelletti commented on May 18, 2024

@progrium just took a look at your background; you've clearly thought more about this issue than I have! I really like what you've done so far with the whole WebPipes idea and I can see exactly where you're coming from. I'm just wondering why you chose to define your own spec instead of using PSHB to begin with? Was PSHB really that much more complex?


progrium avatar progrium commented on May 18, 2024

Yeah, they really had a specific problem to solve (feeds) even though they had something much more generally useful. A few of us have been pushing to generalize/simplify it; however, momentum has pretty much stalled. My goal was to solve my problem (which is actually many, many people's problem... much more so than PSHB's) in a way that would be compatible with PSHB requirements in the long term, so that HTTP Subscriptions and a couple of other specs could be combined to form a PSHB 2.0.


fizx avatar fizx commented on May 18, 2024

@jmartelletti What features of HTTP do you find useful in this problem domain? You mentioned firewall piercing (which I don't find useful in a server-to-server context, but I realize others might want it). Any others?


progrium avatar progrium commented on May 18, 2024

http://timothyfitz.com/2009/02/12/why-http/


jmartelletti avatar jmartelletti commented on May 18, 2024

@fizx the fact that there are already fairly established protocols to take care of this problem of subscribing to/notifying remote agents, whether it's PSHB or HTTP Subscriptions as @progrium suggests. It's a format everyone is familiar with, and there are tools for developing, debugging, load balancing, proxying, etc.

One of the immediate advantages of Huginn agents I thought of is that you could run one from within a corporate network and still integrate it with a service outside the network, and this is why I think it should definitely be HTTP based.


cantino avatar cantino commented on May 18, 2024

Hey guys, I just want to say thanks for this excellent discussion. I'll be responding in more depth this weekend, but please keep talking!


fizx avatar fizx commented on May 18, 2024

@progrium That article is mostly FUD. Debugging the memcached or redis text protocol in telnet is easier than looking up HTTP headers in Chrome devtools or getting the flags right in curl. All of the tools cited work fine with raw TCP.

@jmartelletti I'm following you on the corporate network argument. I also see an argument for Heroku deploys.

More generally, HTTP has a sweet spot as a protocol for serving persistent, named nouns. You can use it as a message-passing protocol, but to the extent that your messages are unnamed, transient, and verb-like, HTTP starts to introduce impedance mismatch and overhead.

To the extent we create the ability to represent agent output as meaningful, long-lived URLs (e.g. /weather/94110/2013/12/25.json), I'm excited about HTTP. I'm more skeptical about using HTTP for real-time notifications and message passing among nodes that are all Huginn.


jmartelletti avatar jmartelletti commented on May 18, 2024

I disagree that it's mostly FUD; it's clearly skewed towards the pro-HTTP side, but I think all of the points are perfectly valid. You're not wrong about debugging those simpler text-based protocols, but I bet you're quite comfortable with HTTP packets too. The only real advantage I see a custom lower-level protocol providing would be performance, and I reckon HTTP has that covered! Besides, with a project like this the real focus should be on interoperability.

Then there's also the browser; I don't think you could overstate the value of being able to integrate with browsers.


progrium avatar progrium commented on May 18, 2024

Timothy's article was an honest defense of an opinion that many people share. I've worked for companies, and have friends, that believe it more strongly than both Timothy and I do now.

I'm amused by this argument because I've been having it since 2008. I'm tired of it. I understand both sides. Honestly, I don't care anymore. In general, though, I feel that if you have a REST API and you want notifications or callbacks, use webhooks and HTTP to remain consistent with your own API. If you want to play well with HTTP and Evented Web infrastructure, obviously use HTTP. But if you want to do whatever you want, do whatever you want. There is no right answer.

In this case, Huginn can do whatever it wants because, for the most part, the problem it solves is in the process of being solved for me by WebPipes, which will use HTTP because it serves a particular role as part of a bigger picture. Huginn can do whatever it wants because of this... I will still get what I want. I just thought it would be nice if it aligned itself with our work, but I'm not going to push hard because I'm sick of these sorts of politics.


meesterdude avatar meesterdude commented on May 18, 2024

Maybe relevant: the network monitoring system Nagios supports plugins that can be written in any language. They're just scripts that have to run, return some data, and exit with a status code, which makes for easy interfacing and extension. Distant checks can be submitted via HTTP POST, or called via a variety of other methods.

My point here is that Nagios can do whatever you want with data in / data out. Such flexibility and simplicity is the reason it beat out everything else.
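For reference, the Nagios plugin contract really is that small: a plugin is any executable whose one-line stdout and exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) convey the whole result. A minimal harness for invoking such scripts might look like this (the inline example "plugin" is invented for illustration):

```python
import subprocess
import sys

# Nagios only cares about two things from a plugin: its stdout line and its
# exit code. The exit-code meanings below are the standard Nagios states.
NAGIOS_STATES = {0: "OK", 1: "WARNING", 2: "CRITICAL", 3: "UNKNOWN"}

def run_check(command):
    proc = subprocess.run(command, capture_output=True, text=True)
    state = NAGIOS_STATES.get(proc.returncode, "UNKNOWN")
    return state, proc.stdout.strip()

# A trivial "plugin", here an inline Python one-liner reporting a warning.
state, output = run_check(
    [sys.executable, "-c",
     "print('LOAD WARNING - load average 3.2'); raise SystemExit(1)"]
)
print(state, "-", output)  # WARNING - LOAD WARNING - load average 3.2
```

Because the contract is language-agnostic, a Huginn agent written this way could be a Perl script, a PHP file, or a compiled binary, which is exactly the flexibility being praised here.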


cantino avatar cantino commented on May 18, 2024

Do you think that a good model is "arbitrary scripts inside of Docker"? On the other hand, I want to make sure that Agents have a rich API available to them, and pushing them into a sandbox may make that a bit more difficult.


meesterdude avatar meesterdude commented on May 18, 2024

The benefit of Nagios was its simplicity. Rich APIs can end up as complex APIs, so I would say keeping plugins arbitrary like that helped maintain clear lines and strict formatting while allowing just enough flexibility to do what you need.

So, it depends. :)


fizx avatar fizx commented on May 18, 2024

In, say, Linux, pipes and signals are an orthogonal concern to, say, cgroups. I don't think specifying Docker helps; you want to think about how the agents communicate.


fizx avatar fizx commented on May 18, 2024

Though I do like the idea of docker+mesos+chronos(ish)+huginn.


meesterdude avatar meesterdude commented on May 18, 2024

@fizx, ELI5


ClashTheBunny avatar ClashTheBunny commented on May 18, 2024

Consul has some great thoughts on scalability and protocol:

http://www.consul.io/intro/vs/nagios-sensu.html


alrik11es avatar alrik11es commented on May 18, 2024

If you don't mind, I would like to revive this thread to talk a little about the possibilities of interfacing other programming languages with Huginn.

I think the best bet is to create an interface that somehow allows those languages to communicate with Huginn.

It can be approached in different ways. For example, one option would be to create a small GUI that allows creating files that will later become agents; once saved, these files would be executed by the interpreter of the selected language.

For example, PHP, the language I'm used to, only requires one of its versions to be installed on the server.

Once such a file is created, suppose the list of agents shown when creating a new agent is able to find it, and that Huginn is somehow able to run the interpreter and call a first method responsible for building the agent's GUI. Suppose it returns XML/HTML that Huginn can interpret to create the fields.

When viewing the GUI of the specified agent, we would then see the defined fields and configuration.

I've created a simple example class in PHP that implements a simple Telegram Agent. As you can see, Huginn would need to interface with this class in order to pass in the $options variable and the $event.

<?php
namespace App\Agents;

use App\AgentInterface;
use \React\EventLoop\Factory;
use \unreal4u\TelegramAPI\HttpClientRequestHandler;
use \unreal4u\TelegramAPI\TgLog;
use \unreal4u\TelegramAPI\Telegram\Methods\SendMessage;

class Telegram implements AgentInterface
{
    public $name = 'Telegram agent';
    public $description = 'The Telegram Agent receives and collects events and sends them via Telegram.';

    public function process($options, $event)
    {
        $loop = Factory::create();
        $tgLog = new TgLog($options->token, new HttpClientRequestHandler($loop));
        $sendMessage = new SendMessage();
        $sendMessage->chat_id = $options->chat_id;
        $sendMessage->text = $event->text;
        $tgLog->performApiRequest($sendMessage);
        $loop->run();
    }

    public function gui()
    {
        return '
        <input name="token" type="text">
        <input name="chat_id" type="text">
        ';
    }
}

If direct interfacing is not possible, then we can define a communications protocol to make this work via an API or something like that.

What do you think?


0xdevalias avatar 0xdevalias commented on May 18, 2024

Not sure if I've mentioned it here before, but these days a lot of my thinking has tended more towards AWS Lambda and similar 'serverless' technologies in place of what Huginn would have solved for me in the past.

One 'run it yourself' serverless tech that would marry well with this concept of 'agents in any language' is OpenFaaS (https://github.com/openfaas/faas).

While I'm sure you could approximate an integration with the web agent, I think a more directly supported bridge, or maybe even a howto, could be a really powerful addition, opening up Huginn to a much larger pool of potential agents.

I feel like Huginn has the 'workflow' and UI angle down pretty solidly, where OpenFaaS/Lambda/etc. could provide nicer agent plugin potential.


alrik11es avatar alrik11es commented on May 18, 2024

Could that be a solution for someone like me, with a simple Huginn server on a 3€/month VPS?

I don't know if we use Huginn the same way, but at least for me, the most condensed solution in one location is best. Spreading all my flows across other platforms is a possibility, but it also adds another mesh of costs and complexity. My server running Huginn is always dancing free of load and could be doing this work.

While I agree with you, I feel that Huginn needs this flexibility to become a finished product with its full potential.

In fact, you gave me the idea of making a simple platform that collects events from Huginn over the web or a shell and processes them using functions defined by the user; but as I said, I would prefer this to be integrated into Huginn rather than be a separate application.


alrik11es avatar alrik11es commented on May 18, 2024

This is definitely the most desirable option. I don't know if it's possible without a major refactoring; shame on me that I'm not a Ruby developer.

But meanwhile, unless a solution for this need appears, I'm slowly creating a port of Huginn to Laravel/PHP, keeping all these requirements in mind.


dsander avatar dsander commented on May 18, 2024

If direct interfacing is not possible, then we can define a communications protocol to make this work via an API or something like that.

We have the ShellCommandAgent, which can execute any command installed on the Huginn server (granted, that only really works when Huginn is not installed using Docker). The fact that the needed interpreter has to be present limits the potential: I don't see us adding every interpreter on the planet to the Docker image, and on top of that, dependency management for the other languages could be annoying.

I feel like Huginn has the 'workflow' and UI angle down pretty solidly, where OpenFaaS/Lambda/etc. could provide nicer agent plugin potential.

I think I agree; OpenFaaS and co. are nice because they already have packaging and deployment figured out.

I believe the universal way to interact with other languages would be some sort of HTTP interface. It might be possible to have an Agent in Huginn that, once its endpoint is configured, shows the Agent description, validates the options, and runs the Agent by making HTTP calls to the "agent server", which is written in a different language and deployed somewhere (Docker, locally, OpenFaaS, Lambda).


alrik11es avatar alrik11es commented on May 18, 2024

So lets imagine this HTTP API:

Somewhere in Huginn you define a new remote Agent and set the URL.
After setting the URL, Huginn performs a GET somewhere.com/my-agent/get-info against it, which returns:

  • A definition of the agent (Name, description ...)
  • A definition of the GUI for that agent (in XML, JSON, ...) that will be rendered for users in Huginn.

The user defines the name, config, sources, and receivers as with a typical agent, then saves.

When an event is received by the newly defined agent:

  • Huginn sends a request to the remote agent: POST somewhere.com/my-agent/run with the event payload.
  • The remote agent processes the payload and generates a response.
  • Huginn parses the response to obtain the new event and re-emits it internally to the receivers.

If visualizations are needed in the agent's summary view, then you could add another URL:

  • GET somewhere.com/my-agent/summary

I'm also thinking of processes that cannot respond instantly or take a while to complete. That kind of process should POST back to Huginn somehow. The same mechanism could also be used for event-streaming-only agents.

What do you think?
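To make the proposal concrete, here is a minimal sketch of the proposed protocol, with the two endpoints modeled as plain functions instead of HTTP handlers so the example stays self-contained. All names here (UppercaseAgent, message_key, the response shape) are illustrative assumptions, not an existing Huginn API.

```python
# Hypothetical sketch of the proposed remote-agent protocol.
import json

def get_info():
    """GET /my-agent/get-info: describe the agent and its option form."""
    return {
        "name": "UppercaseAgent",
        "description": "Uppercases the 'message' field of each event",
        "options_form": [
            {"field": "message_key", "type": "string", "default": "message"},
        ],
    }

def run(event_payload, options):
    """POST /my-agent/run: process one event, return the event(s) to re-emit."""
    key = options.get("message_key", "message")
    out = dict(event_payload)
    out[key] = str(event_payload.get(key, "")).upper()
    return {"events": [out]}

# The Huginn side would serialize the event to JSON and POST it:
request_body = json.dumps({"event": {"message": "hello"}, "options": {}})
parsed = json.loads(request_body)
response = run(parsed["event"], parsed["options"])
# response["events"] holds the new events Huginn re-emits to receivers.
```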

from huginn.

alrik11es avatar alrik11es commented on May 18, 2024

Maybe by writing the .yml and creating a function inside something like a "functions" directory through the API and the file system, and redirecting the events through HTTP. I mean Huginn plus a UI that allows the user to create new functions, including the source, with the ability to build and deploy them.
The OpenFaaS API does not allow creating a function from source code, so you need to create the code on the server first. Or at least that's what I've found so far.
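For reference, the usual way to define a function outside the API is a stack file consumed by faas-cli; a minimal sketch (function name, handler path, and image are illustrative placeholders):

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  my-func:
    lang: python3
    handler: ./my-func
    image: my-registry/my-func:latest
```

Something like `faas-cli up -f my-func.yml` would then build, push, and deploy it, which is the step a Huginn UI would have to drive if it wanted to create functions from source.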

I think that I would be more than happy with something that fixes that.

It doesn't seem too hard for me to implement that in PHP instead of continuing to port Huginn, which is really a project I started just for fun some time ago. This should be both simpler and more useful than a Huginn port to PHP.

from huginn.

0xdevalias avatar 0xdevalias commented on May 18, 2024

@alrik11es If you follow the link I pasted above through to http://editor.swagger.io/ and import the swagger URL provided (that my link above tells you to import into it), you can see that a new OpenFaaS function can be deployed with POST /system/functions.

OpenFaaS is built around docker containers, so you're basically telling it where to get the pre-made function docker container from.


Though when I said "create an OpenFaaS Huginn Agent" above, I meant implementing the OpenFaaS API on the Huginn side. Initially I wouldn't even care much about creating/deploying new functions; the tools OpenFaaS provides (Web UI, CLI, etc.) are already perfectly good at that. What I would see value in implementing is the ability for Huginn to invoke OpenFaaS functions and retrieve the result.

The swagger docs linked above show that this would be done with POST /function/{functionName} or POST /async-function/{functionName} (though for the latter, [you would need to have a webhook agent in Huginn to get the result](https://docs.openfaas.com/reference/async/#how-it-works) I believe). So basically you could already connect these really easily using the Huginn Web Agent.
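As a sketch of what the Huginn side of that call could look like, the snippet below builds (but does not send) the invocation request. The gateway URL and function name are placeholders; only the /function/{name} and /async-function/{name} paths come from the OpenFaaS gateway API.

```python
# Hedged sketch: invoking an OpenFaaS function the way a Huginn agent could.
import json
import urllib.request

GATEWAY = "http://gateway.example.com:8080"  # hypothetical gateway URL

def build_invoke_request(function_name, payload, asynchronous=False):
    """Build (but do not send) the POST that triggers a function."""
    prefix = "async-function" if asynchronous else "function"
    return urllib.request.Request(
        url=f"{GATEWAY}/{prefix}/{function_name}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invoke_request("nodeinfo", {"verbose": True})
# Sending would be urllib.request.urlopen(req); with an async invocation
# the result arrives later via a callback/webhook instead of the response.
```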

Where I would see more value in a 'deeper' integration is to automate some of the setup/selection of functions. For example, you could use GET /system/functions and GET /system/function/{functionName} to list the available OpenFaaS functions/details about them, and then build some UI showing them within Huginn as 'external agents'. You could connect Huginn's concept of secrets with OpenFaaS's concept of secrets, managed by the /system/secrets API endpoint.
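A rough sketch of that 'external agents' idea, assuming the gateway's listing endpoint returns a JSON array of function records (the field names follow the OpenFaaS gateway API; the Huginn-side entry shape is purely hypothetical):

```python
# Hedged sketch: mapping the gateway's function list to entries a Huginn
# UI could show as 'external agents'.
import json
import urllib.request

GATEWAY = "http://gateway.example.com:8080"  # hypothetical gateway URL

def build_list_request():
    """Build the GET that lists deployed functions."""
    return urllib.request.Request(f"{GATEWAY}/system/functions", method="GET")

def functions_to_agent_entries(listing_json):
    """Turn the gateway's JSON listing into hypothetical Huginn UI entries."""
    entries = []
    for fn in json.loads(listing_json):
        entries.append({
            "agent_name": fn["name"],
            "image": fn.get("image", ""),
            "invocation_count": fn.get("invocationCount", 0),
        })
    return entries

# Simulated gateway response (shape based on the OpenFaaS listing API):
sample = json.dumps([{"name": "nodeinfo", "image": "functions/nodeinfo:latest"}])
entries = functions_to_agent_entries(sample)
```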

Those are the sorts of things I envisage when I think about a 'rich' Huginn/OpenFaaS integration.

from huginn.

alrik11es avatar alrik11es commented on May 18, 2024

I'm a little embarrassed for not expressing myself perfectly. Sometimes expressing myself correctly in English is really difficult for me. It happens sometimes to me in Spanish which is my native language so...

The swagger docs linked above show that this would be done with POST /function/{functionName} or POST /async-function/{functionName} (though for the latter, you would need to have a webhook agent in Huginn to get the result I believe). So basically you could already connect these really easily using the Huginn Web Agent.

Yes it works like that, I've tested it.

{
  "service": "nodeinfo",
  "network": "func_functions",
  "image": "functions/nodeinfo:latest",
  "envProcess": "node main.js",
  "envVars": {
    "additionalProp1": "string",
    "additionalProp2": "string",
    "additionalProp3": "string"
  },
  "constraints": [
    "node.platform.os == linux"
  ],
  "labels": {
    "foo": "bar"
  },
  "annotations": {
    "topics": "awesome-kafka-topic",
    "foo": "bar"
  },
  "secrets": [
    "secret-name-1"
  ],
  "registryAuth": "dXNlcjpwYXNzd29yZA==",
  "limits": {
    "memory": "128M",
    "cpu": "0.01"
  },
  "requests": {
    "memory": "128M",
    "cpu": "0.01"
  },
  "readOnlyRootFilesystem": true
}

As far as I've seen, POST /system/functions does not allow creating new code, but it does deploy an existing image from a Docker registry.
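In other words, the deploy call ships a reference to a pre-built image, not source code. A hedged sketch of building that request (the gateway URL is a placeholder; the spec fields are a subset of the body shown above):

```python
# Hedged sketch: deploying a pre-built image as an OpenFaaS function.
import json
import urllib.request

GATEWAY = "http://gateway.example.com:8080"  # hypothetical gateway URL

spec = {
    "service": "nodeinfo",
    "image": "functions/nodeinfo:latest",
    "envProcess": "node main.js",
}

deploy_req = urllib.request.Request(
    url=f"{GATEWAY}/system/functions",
    data=json.dumps(spec).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(deploy_req) would send it; the gateway then pulls
# the image from the registry rather than accepting raw source code.
```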

In my other comment, I was thinking as if I were implementing this kind of technology for my company's non-technical users. They are not ops people, not even specialist developers, so they need something that makes their lives easier, not something that overcomplicates them. Of course you can connect through the CLI, create a new function, and so on, but I don't want my users to have to. It's a matter of efficiency: what is efficient for them is efficient for me as well.

Your latest answer is a good opinion; I respect it and mostly agree with it. But in my view, the value is in having everything that lets someone create new agents/functions condensed in one place, or at least in the same application.

It's just a different way of thinking. Just my opinion; I'm not trying to dismiss other opinions or solutions.

[screenshot]

This is just an example I've made in my Huginn port to show what I mean.

Anyway, I really have to thank you for pointing me towards new architectures that make my life easier. I'm keen to implement this for lots of minor tasks at my company when I get in on Monday. Thank you very much.

from huginn.

virtadpt avatar virtadpt commented on May 18, 2024

I've been playing around a little with OpenFaaS, and I think a couple of POST Agents would be ideal for triggering functions (though probably not for writing functions). A POST Agent would also be ideal for calling the OpenFaaS API endpoint that updates its local directory of ready-to-run function images.

from huginn.

dsander avatar dsander commented on May 18, 2024

Huginn is OSS; everyone is welcome to contribute. As far as I know, all pull requests that are still open are either stale because requested changes were not made, or never got to a mergeable state because of disagreements.

I don't think we ever had a collaborative plan for Huginn. Contributors added the features and Agents (for that we now have huginn-agent) they wanted/needed.

from huginn.

pkrolkgp avatar pkrolkgp commented on May 18, 2024

It would be nice if we could have some translation service, so anyone can contribute translations.

from huginn.

smithjoelt avatar smithjoelt commented on May 18, 2024

Is this issue currently being worked on? Should we close it?

from huginn.

cantino avatar cantino commented on May 18, 2024

I am not pursuing this at the moment. It was more of a vision statement.

from huginn.

smithjoelt avatar smithjoelt commented on May 18, 2024

Thanks. Just running some triage and wanting to close up old issues. 🙂

from huginn.

ohingardail avatar ohingardail commented on May 18, 2024

On the subject of the future of Huginn: it appears that the current version of Huginn doesn't work on the latest version of Ubuntu (v22.04). There's no reason why it should; the Huginn installation notes only specify Ubuntu v14 and v18 as supported.

Even so, I have managed to get Huginn running on Ubuntu v20 and v21 without much trouble.

One reason Ubuntu v22 doesn't play well with Huginn appears to be that the elderly versions of Ruby that Huginn relies on strictly require OpenSSL v1, while Ubuntu v22.04 installs OpenSSL v3. Downgrading OpenSSL, or reconfiguring Ruby v2.6.9 to support OpenSSL v3 (or possibly some version of LibreSSL), is a fiddly task with potential security issues that complicates future upgrade paths; it's a non-starter for most users, and I would be disinclined to try it if I wanted a reliable instance of Huginn. And there may be other problems as Huginn and its prerequisites become increasingly aged...

Ubuntu v21 was an interim release that has been EOL'ed: there are no further updates to it, you can't install a new instance of Ubuntu v21 because the installer can't find said updates, and you wouldn't be able to install Huginn's OS prerequisites on it anyway.

This leaves Ubuntu v20.04 LTS as the longest-lived option known to work with Huginn; it will receive maintenance updates until 2025 and security updates until 2030. Ubuntu v18.04, the latest version specified as supported by Huginn, will have maintenance updates until 2023 and security updates until 2028.

from huginn.

0xdevalias avatar 0xdevalias commented on May 18, 2024

@ohingardail That seems like something that would be worth a new issue for visibility/tracking, as I suspect it will get lost/buried tacked onto the end of this now-closed issue.

from huginn.
