
Comments (9)

calmofthestorm commented on July 28, 2024

To make sure we're talking about the same thing, could you verify this still happens if you set connect_retry_cooldown to something really big (like 600)? Then your first command will be slow, but no other commands should be slow for 5 mins. (This is just to confirm, NOT a proposed solution:-) )

Interesting. This does not happen for me. When I point the server at a non-open port and dictate local commands, there is a slight slowdown every now and then (at most every 5s, when it retries the connection), and it is definitely noticeable, but never more than a second delay.

One possibility might be how the connection is refused -- if the other end gives "Connection refused" immediately (what occurs in my setup), the behavior may be different than if the connection times out (what many firewalls do).

I'm not sure it's possible for local keys to be completely unimpacted, since Dragon has to query all contexts every time you start to speak, and that has to (potentially) time out. One option would be to add an (on by default) config setting to disable the proxy whenever a connection is refused, requiring users to re-enable it explicitly. When the server is up I hardly ever get connection failures, so I'd be OK with this. We could also raise the current 5s auto-retry timeout to something bigger, or add a backoff.

One solution might be to add a retry thread that retries the connection asynchronously and notifies the main thread once it's active. I'm hesitant to add such complexity however, and there's still the issue of the first command being slow.

Of these I'd most favor the auto disable (along with a warning and/or a notification) after failure.
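The auto-disable idea could look something like this minimal sketch. The class and attribute names here are hypothetical illustrations, not aenea's actual config keys:

```python
import socket

class ProxyState(object):
    """Sketch of 'disable the proxy on connection refusal' (hypothetical
    names, not aenea's real config). Once a connect fails, the proxy stays
    off until the user explicitly re-enables it."""

    def __init__(self, host, port, timeout=0.1):
        self.host, self.port, self.timeout = host, port, timeout
        self.proxy_active = True  # user must re-enable after a failure

    def try_connect(self):
        if not self.proxy_active:
            return None  # skip the network entirely; local keys stay fast
        try:
            return socket.create_connection((self.host, self.port),
                                            timeout=self.timeout)
        except socket.error:
            # Connection refused or timed out: disable until re-enabled.
            self.proxy_active = False
            return None
```

With this shape, a down server costs exactly one failed connect, after which every command goes straight to the local path.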

from aenea.

calmofthestorm commented on July 28, 2024

Hmm, another option might be to mess with select() if Windows has it, or something like it. We could also try connecting with a very low timeout; since the typical use case is local, if we can't connect quickly it's not going to connect.
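The low-timeout idea can be sketched portably with `socket.create_connection` (which works on Windows too, without touching select() directly); `server_reachable` is an illustrative name, not an existing aenea function:

```python
import socket

def server_reachable(host, port, timeout=0.1):
    """Cheap reachability probe (illustrative, not aenea's actual API).
    If a local server can't accept a TCP connection within `timeout`
    seconds, assume it's down rather than letting the whole command
    block on a slow connect."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except socket.error:
        return False
```

On a LAN the handshake either completes almost instantly or not at all, so a 0.1s timeout is generous.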

Regarding the default setting for host/port -- they match the install instructions in the README; the VirtualBox host-only adapter defaults to 192.168.56.1.


nirvdrum commented on July 28, 2024

Well, this is fun. If I bump the number to 600, there's a small, but acceptable delay, as you indicate. If I change it back to 5, I'm back to ~12s delay.

FWIW, my use case is I use Dragon on a laptop so I can dictate on the go if needed. My primary work is on a desktop, however. So most of the time I'm using aenea to connect to the desktop. But, sometimes I just want to use the thing as a laptop. At least now I can say "disable proxy server" and get where I need to. The other use case is I open the laptop before I turn on the desktop. The current retry logic seems to handle that case very well. So thanks for that!


calmofthestorm commented on July 28, 2024

I was able to reproduce this (I think) by messing with iptables. Can you try this branch and let me know if it solves this particular pain point: https://github.com/calmofthestorm/aenea/tree/better_connection_failure_behavior? You may want to set socket_timeout to a different value; I defaulted to 0.1s. I added some logic: if it's been more than 10s since the last successful connect to the CURRENT server, we do a quick connect via socket with a low timeout to see if it's up, and only attempt the blocking op if it is. It's not perfect (in particular, you'll still have one slow command if a server crashes mid-use), but I'm not seeing an easy way to customize jsonrpclib's socket behavior. I'll keep thinking about it; the current logic's complexity makes me uneasy. Communications and config could both use some TLC in terms of design, once we get the bugs worked out :-)
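As I understand it, the branch's gating logic is roughly the following. This is a sketch with made-up names, not the actual code from the branch:

```python
import socket
import time

class GatedConnector(object):
    """Sketch of the gating idea (illustrative names): if we connected
    successfully within the last `grace` seconds, go straight to the
    blocking RPC; otherwise do a quick low-timeout probe first and skip
    the RPC entirely when the server looks down."""

    def __init__(self, host, port, socket_timeout=0.1, grace=10.0):
        self.host, self.port = host, port
        self.socket_timeout, self.grace = socket_timeout, grace
        self.last_success = 0.0  # epoch time of last successful connect

    def should_attempt_rpc(self):
        if time.time() - self.last_success < self.grace:
            return True  # recently fine; trust it and do the RPC
        try:
            sock = socket.create_connection(
                (self.host, self.port), timeout=self.socket_timeout)
            sock.close()
            self.last_success = time.time()
            return True
        except socket.error:
            return False  # server looks down; skip the blocking RPC
```

The one slow command after a mid-use crash comes from the grace window: within `grace` seconds of a success, the probe is skipped and the blocking RPC hits the dead server directly.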

A side note -- remember we don't authenticate or encrypt or anything. Be sure you trust your network:-) I guess if people are using this over non-lo we should probably add that functionality.

I appreciate these issues, I really do. Your use case is different from mine but it's one that definitely should work, and that I want to see work well.


nirvdrum commented on July 28, 2024

I'll take a look. The simple reproduction case for me is to just change the host IP to an unroutable IP that isn't on the current subnet.


nirvdrum commented on July 28, 2024

Also, I bind my aenea server to a private IP. If anyone got on my home network, sending random voice commands would be the least of my worries :-)


calmofthestorm commented on July 28, 2024

Would you consider being able to set the timeout in a non-hackish way, combined with the retry logic, a sufficient solution to this problem? So, say, you can set the timeout to 0.1s and retry at most once every 5s. Then, at most once per 5s (if the server is down) you get a 0.1s delay. Both numbers are user-configurable. Take a look at https://github.com/calmofthestorm/aenea/tree/configurable_timeout: it solves the problem for me and the code is far less hacky.
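A rough sketch of that timeout-plus-cooldown behavior (names and structure are illustrative, not the branch's actual code):

```python
import socket
import time

class CooldownProbe(object):
    """Illustrative sketch: with timeout=0.1 and cooldown=5.0, a down
    server costs at most one 0.1s probe every 5 seconds; commands in
    between skip the network entirely."""

    def __init__(self, host, port, timeout=0.1, cooldown=5.0):
        self.host, self.port = host, port
        self.timeout, self.cooldown = timeout, cooldown
        self.next_retry = 0.0  # earliest time we may probe again

    def server_up(self):
        now = time.time()
        if now < self.next_retry:
            return False  # still cooling down from the last failure
        try:
            sock = socket.create_connection((self.host, self.port),
                                            timeout=self.timeout)
            sock.close()
            return True
        except socket.error:
            self.next_retry = now + self.cooldown
            return False
```

The worst case per 5-second window is one 0.1s stall, which matches the behavior described above.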


nirvdrum commented on July 28, 2024

This new branch is a remarkable improvement. In general, I'm of the mindset that having configuration available for advanced tuning is great, but the out-of-the-box experience should work well in most cases. I didn't require any tuning here, so that looks like a great start to me :-)

I'll try to take a look at the networking stuff soon, myself. Out of curiosity, I was wondering if something like msgpack would gain us anything or not. But in any case, I'm going to file issues as I find them. Please don't take them as a todo list (although great if you want to ;-)). I plan on circling back around on them, but am trying to get my feet wet with the new aenea first.


calmofthestorm commented on July 28, 2024

Glad it helps! We can mess with the settings; 0.1s timeout for local network is a LOT longer than it should take.

I'd be extremely surprised if JSON serialization performance mattered at all in terms of latency next to the xdotool timing, speech recognition, etc. YouCompleteMe uses it for as-you-type completions with no issues at all. That said, try msgpack out if you like and time RPC execution time; if it does make a difference we can add it.

An easy thing you can do is install cjson. jsonrpclib tries it first, then falls back to the built-in json module for serialization. If serialization is indeed a bottleneck, installing cjson (on both client and server) should help.
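That preference order can be sketched as a try/except import fallback. This mirrors the idea rather than jsonrpclib's exact internals; cjson's `encode`/`decode` calls are the old python-cjson API:

```python
# Prefer the faster C implementation when present, fall back to the
# standard library. A sketch of the fallback pattern, not jsonrpclib's
# actual code.
try:
    import cjson

    def dumps(obj):
        return cjson.encode(obj)

    def loads(s):
        return cjson.decode(s)
except ImportError:
    import json

    def dumps(obj):
        return json.dumps(obj)

    def loads(s):
        return json.loads(s)
```

The nice property is that callers see one `dumps`/`loads` pair either way, so installing cjson is a drop-in speedup with no code changes.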

