
Comments (28)

calmofthestorm commented on July 28, 2024

@grayjay

from aenea.

poppe1219 commented on July 28, 2024

I wasn't aware of the problem with nested grammars. I guess my grammars haven't reached that level of complexity. Since my focus right now is on making use of your server-client solution to control Linux, with a multi-screen mousegrid, it's not very likely I will bump into these problems anytime soon.

But it will be very interesting to see what conclusions you come to and what solutions you find.


poppe1219 commented on July 28, 2024

Apart from the problem with Dragonfly's limitations on nested grammars, I haven't grasped what it is you want to achieve. And due to lack of time and my example-driven brain, I haven't understood your verbal_emacs scripts yet.
Maybe it's a pain to ask, but perhaps if you could give a usage example of how you would like the solution to be used, I would understand the problem better? A usage example that causes nesting of the kind that made you reach Dragonfly's limits?


calmofthestorm commented on July 28, 2024

I can in fact give two real-world examples I ran into when writing verbal_emacs. First, let me say that it is not just about depth, but more generally about "complexity" as defined by Dragon (so far the only way I've found to measure it is binary search -- either Dragon refuses to load the grammar or it doesn't).

The first thing I ran into was how I handled letters of the alphabet, digits, etc. Initially, I wanted to have a sub-mode for spelling -- so there would be a command "letters", after which you could say any number of letters, followed by "done" (or some such). This would be a single atom in the chaining loop -- meaning I could give a couple of movement commands, enter some letters, and then keep giving commands, all without pausing. Note that this is actually a fairly simple grammar -- inside spelling mode, Dragon need only track something like 52 literals.

The problem was that dragonfly simply threw an error when I tried to write this grammar, saying it was too complex. No explanation, and nothing in the dragonfly source to explain it either. What I ended up going with was making each letter of the alphabet an atom. This sucks for several reasons -- recognition is slowed and accuracy decreased because Dragon has 52 more atoms to track in the main chain. Likewise, it means that you can only enter max_chain_length letters in one go, which brings me to the second problem: chain length.

In multiedit you can speak up to 16 atoms in one command. This sounds like a lot, and it is if each atom is a variable name, a looping construct, a movement command, etc. In verbal_emacs, you are limited to 10 atoms. Usually this is still enough, but suppose you want to spell a bunch of letters, combined with a movement command or two. You can run up against this limit without realizing it, especially as you gain practice with using it and tend to speak more complex phrases. In practice I find that I have to interrupt my flow frequently to avoid losing things -- if you go for longer than 10 atoms, it can lose the entire sequence.

Note that nothing about this issue would address this problem, as far as I know. Short of digging into Natlink (I already understand Dragonfly reasonably well) and getting lucky, I am not sure whether this can be fixed. It is quite possible/likely that this constraint comes from Dragon.

Rather, I mentioned the problem here because it is likely to be a big limit on what I am able to do.


calmofthestorm commented on July 28, 2024

Why I want this architecture:

  • Make it easier to write grammars like verbal_emacs and multiedit. I would like grammars like that to be within the reach of beginners.
  • Share as much code as possible between these grammars (verbal_emacs and multiedit have a lot of redundant code besides the formatting functions related to the chain loop).
  • Open the possibility of having one chain loop that can have atoms dynamically activated or deactivated based on what program you are in, where you are in it, etc.
  • Open the possibility of chaining together commands from different grammars/programs. (I am not sure this is something I personally would want enabled, but I could see it making sense for some use cases.)
  • Make it easy for grammars to have plugins and be more configurable. For example, I, like many vim users, have custom plug-ins, key bindings, etc. I would like to pull out all or most of the me-specific stuff to make it easy to adapt the grammar to your setup. It is possible that this would only be necessary for vim, but I could see it mattering for the shell as well -- not everyone uses bash.

A big part of this project for me is enabling people to bring voice to their current setup. Obviously the weirder your setup the more adaptation you're going to have to do yourself:-).


poppe1219 commented on July 28, 2024

Well I don't have any useful ideas on the architecture at this point.

But when it comes to the particular problem of spelling a sequence as a single command, my thoughts immediately go to trying to reuse Dragon's built-in Spell mode. But I haven't found a way to trigger the different modes. I found this in natlinkmain:

DNSmode = 0 # can be changed in grammarX by the setMode command to
# 1 dictate, 2 command, 3 numbers, 4 spell
# commands currently from _general7,
# is reset temporarily in DisplayMessage function.
# it is only safe when changing modes is performed through
# this setMode function

But I haven't found a way to actually use this and switch between these modes.
I have tried to use Mimic, Mimic("Start spell mode"), but I have never gotten Mimic to do anything sensible for me (probably because I haven't understood it). And even if I did get that to work, there should be a much more direct way of switching between the modes.
If it could be done, it's quite possible that a command for spelling an arbitrary number of letters/characters could be built and is still treated as a single atom, where Dragon itself does all the heavy lifting.


calmofthestorm commented on July 28, 2024

The problem is that that would require a pause between modes -- at least using DNSmode would. If we are willing to tolerate a pause between modes, I can think of a number of ways to solve this problem. In particular, one thing I did in multiedit was to have "finishing rules" -- once you say "letters", the rest of the chain will be interpreted as a sequence of letters (this is similar to how "literal" works for entering reserved words). Unfortunately, once you enter this mode you must pause before saying something that is not a letter. This makes sense for "literal", since it is intended to allow any word to be typed literally, but I would like to be able to say "end letters" and keep going.
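A finishing rule can be modeled outside Dragonfly with a toy chain parser (plain Python; `parse_chain` and the command names are made up for illustration, this is not aenea's API):

```python
def parse_chain(words):
    """Toy model of a "finishing rule": once "letters" is spoken, every
    remaining word in the chain is consumed as a letter to type, so no
    further commands can follow without pausing."""
    actions = []
    it = iter(words)
    for word in it:
        if word == "letters":
            # The finishing rule swallows the rest of the chain.
            actions.append(("type", "".join(it)))
        else:
            actions.append(("command", word))
    return actions

parse_chain(["up", "down", "letters", "a", "b", "c"])
# -> [('command', 'up'), ('command', 'down'), ('type', 'abc')]
```

The "end letters" wish above amounts to wanting the `for` loop to resume after the spelling run instead of the rule consuming everything to the end of the phrase.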

In general, the problem I have found with relying on built-in Dragon capabilities is that their design takes an emphasis on discoverability and memorability over usability to an extreme. This means it has a much less steep learning curve, which is great for casual users, but severely limits what power users can do. (Think about how slow Dragon's built-in editing commands are, since you must pause after each one.)

Perhaps I should put off designing the architecture until I have thoroughly studied Natlink to see just what is possible. Another thing I would like to do is integrate a test architecture for grammars, so that we can write automated unit tests for them. I believe mimic is the way to do this but like you I have not been able to get it to work.

One thing you mentioned to me that I appreciated is how to disable built-in Dragon commands so that I can appropriate those words for my own use.


poppe1219 commented on July 28, 2024

I was actually thinking that, if the mode was changed directly by using Natlink, there would be no pause. But that probably wouldn't matter anyway because the entire spoken sequence would be interpreted before the mode switch would be executed.


calmofthestorm commented on July 28, 2024

Yeah, that seems equivalent to the finishing-rule behavior. I am actually kind of surprised that the hackery for chaining is necessary; I would expect Dragon to natively support stringing commands together. Given that this doesn't seem to work for any built-in commands and you cannot program it in Professional Edition, I am pessimistic about avoiding the manual chaining.

What might be possible is completely rethinking how Natlink discovers and loads grammars to be friendlier to modules and plug-ins.


calmofthestorm commented on July 28, 2024

The new dynamic vocabulary system does 90% of this in a very simple way, and the dynamics from dragonfly-scripts can do something similar, though not integrating into other grammars. I can't think of any user stories for the more general form, and I'm not even sure NatLink could handle them, so I'm closing this.


sboosali commented on July 28, 2024
  1. Can you explain what the new dynamic vocabulary system handles, and how? And thanks for all your work -- I was trying to set this up myself :-) but I got sad when I read this about nested grammars :-( because I had this whole awesome complex grammar worked out.
  2. I'm new to NatLink, but if I understand correctly that the problem is grammar depth, could automatic "inlining" of finite rules help?

e.g. the nice modular grammar:

<command> exported
 = <number> <command>
 | <action> <region>
;
<number> = ([<tens>] <ones>) | <special_number>;
<special_number> = zero | eleven | twelve | ...;
<ones> = one | two | three | ...;
<tens> = twenty | thirty | ...;
<action> = prev | next | del | ...;
<region> = char | word | line | ...;

becomes (inlining <number>'s children):

<command> exported
 = <number> <command>
 | <action> <region>
;
<number> = one | two | three | ... | eleven | twelve | ... | twenty one | twenty two | ... | thirty one | ...;
<action> = prev | next | del | ...;
<region> = char | word | line | ...;

becomes (inlining <action> and <region>):

<command> exported
 = <number> <command>
 | prev char
 | next char
 | del char
 ...
 | del line
 ...
;
<number> = ...;

becomes (eliminating recursion):

<command> exported = <number> <command_>
<command_> = ...;

becomes (inlining <number> and <command_>):

<command>
 = prev char
 | del line 
 ...
 | one prev char
 ...
 | ninety nine del line
 ...
;

with some loss of generality in the recursion. You would then need to parse the output yourself, without relying on rule callbacks.
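The lossless-rewrite idea can be sketched with a toy inliner (plain Python; names are illustrative, and it only terminates for non-recursive rules, which is exactly the "loss of generality" above):

```python
def inline(rules, name):
    """Expand every <ref> token in the named rule's alternatives into
    flat alternatives (a cross product over the referenced rules).
    `rules` maps rule names to lists of alternatives; each alternative
    is a space-separated string of terminals and <refs>."""
    flat = []
    for alternative in rules[name]:
        expansions = [[]]
        for token in alternative.split():
            if token.startswith("<"):
                options = inline(rules, token.strip("<>"))
            else:
                options = [token]
            expansions = [e + [o] for e in expansions for o in options]
        flat.extend(" ".join(e) for e in expansions)
    return flat

rules = {
    "command": ["<action> <region>"],
    "action": ["prev", "next", "del"],
    "region": ["char", "word", "line"],
}
inline(rules, "command")
# -> 9 flat variants: 'prev char', 'prev word', ..., 'del line'
```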

Or is breadth as well as depth a problem?

Given 100 <number>s, 20 <action>s, and 20 <region>s, that's ~40,000 variants. We could try to "balance" the tree (well, DAG) if we knew the breadth/depth constraints. If we use this grammar:

<command> exported = [<number>] <editing>;
<number> = zero | ... | ninety nine;
<editing> = prev char | ... | del line;

we have a breadth of 400 and a depth of 2.
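The counts above check out directly (plain Python; the 100/20/20 figures are the hypothetical rule sizes from the text):

```python
numbers, actions, regions = 100, 20, 20   # hypothetical rule sizes

# Fully inlined <command> = <number> <action> <region>:
variants = numbers * actions * regions
print(variants)          # 40000 -- the "~40,000 variants"

# Restructured grammar: <editing> flattens <action> <region> into one
# rule, so the widest single alternative set is 20 * 20 = 400 entries,
# reached at depth 2.
editing_breadth = actions * regions
print(editing_breadth)   # 400
```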

thoughts?


sboosali commented on July 28, 2024

Initially, I wanted to have a sub mode for spelling -- so there would be a command "letters", after which you could say any number of letters, followed by "done" (or somesuch).

my first plan. drat.

I found this in natlinkmain:

DNSmode = 0 # can be changed in grammarX by the setMode command to
# 1 dictate, 2 command, 3 numbers, 4 spell
# commands currently from _general7,
# is reset temporarily in DisplayMessage function.
# it is only safe when changing modes is performed through
# this setMode function

my second plan, in case the first plan didn't work. double drat.

I feel like I'm three steps behind!

My question: with the update, is there a way to nest something like letters {letter}+ done inside other commands?


sboosali commented on July 28, 2024

if you go for longer than 10 atoms, it can lose the entire sequence.

also, maybe you could recover the first 10 atoms (what exactly do you mean by "atom"?) by saving hypotheses. I haven't been able to trigger gotHypothesis yet; you enable triggering the callback with load(hypothesis=1).


calmofthestorm commented on July 28, 2024

The main thing the vocabulary system handles is decoupling vocabulary (things that are important words for a particular language/user/etc. Examples include Eclipse shortcuts, Python keywords, even common variable names in a project) from how to type them (in VIM, for example, we must enter insert mode, type the word, then return to normal mode). This decoupling should ensure that people can add new grammars that will work with existing vocabularies (whether user-custom or included with the project), and (especially) that people can add new vocabularies that will then work with existing (or new) grammars, even if they don't know about them.

It also allows the user to enable and disable different vocabularies as appropriate (so, eg, you don't get Python keywords when you're working in C++ or whatnot), though I'd consider this a less crucial feature. This is the reason for static vs dynamic grammars -- static ones are a tiny bit more powerful but require reloading the grammar to update. Dynamic ones can be switched on and off at will. The distinction between the two is not super important for understanding the high level need for the feature.

For a simple example, consider Python keywords "lambda" and "def". I want to be able to use them in any Python file, regardless of whether I'm using VIM or multiedit. VIM and multiedit both have ways of entering text. Multiedit just has a "loop" of commands, and any command can be to enter a keyword, which it does by typing it. VIM is a bit more complex -- it also has the loop, but there's also the mode switch.

If you envision a grammar as a tree, vocabularies are a way for grammar authors to define places in the tree that users can add custom vocabulary. VIM exposes a tag for arbitrary keywords it should recognize.

Now suppose I decide I want to write Ruby. I can just write a vocabulary file with the language's keywords, and it will automagically work with existing grammars. Likewise, I could write a new editing grammar (emacs, for example) that if written against the vocabulary system would work with any custom vocabularies someone may write.
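A minimal sketch of that decoupling (plain Python; the names and the spoken forms are made up for illustration, not aenea's actual API): a vocabulary maps spoken forms to text, and each editing grammar supplies its own "how to type it" strategy.

```python
# Vocabulary: spoken form -> written form. Shared by every grammar.
python_vocab = {"lambda": "lambda", "deaf": "def"}

# Each editing grammar supplies its own typing strategy.
def type_in_multiedit(text):
    return text                    # multiedit: just emit the keys

def type_in_vim(text):
    return "i" + text + "<esc>"    # vim: insert mode, type, back to normal

def speak(word, vocab, typer):
    """Look up the spoken word and render it via the grammar's typer."""
    return typer(vocab[word])

speak("deaf", python_vocab, type_in_vim)  # -> 'idef<esc>'
```

Adding a Ruby vocabulary is then just another dict; adding an emacs grammar is just another typer, and each works with all of the other side's entries.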


calmofthestorm commented on July 28, 2024

Inlining -- if I'm understanding you correctly, there are basically two ways of doing inlining. One is a more or less lossless grammar rewrite (transforming the grammar's structure but not the language it recognizes). The second is to change (probably enlarge) the language recognized by simplifying the grammar, then do post-processing to determine the appropriate action.

The first form could at least in principle help, but I suspect I'd need better understanding of Natlink/Dragon internals, and it's also worth pointing out that Dragon is in the best position to do such automated simplification.

I don't like the second form because Dragon uses the structure of the grammar to determine which words were said (or to make an analogy, lexing takes the grammar into account, rather than the two being independent passes). By allowing the grammar to recognize phrases we don't want, we will hurt recognition performance and accuracy, and also sometimes find ourselves with a phrase we're not sure what to do with.

The main obstacle to my understanding is that I would expect performance (both speed and accuracy) of processing to be a function of the size of the language -- the more possible valid phrases you can say, the harder it is to tell them apart. Dragon seems to hurt more from deep grammars than it does from size of the language.

To be clear, the issue with a spelling mode is a purely performance driven one -- indeed, for a fairly simple editing grammar you could probably have a mode just as you describe that would work without issues. The problem is that Dragon will reject grammars that are too "complex". Increasing max sequence increases complexity, and having a spelling mode dramatically increases complexity.


calmofthestorm commented on July 28, 2024

Why long sequences drop: remember that the way grammars like my vim grammar, multiedit, etc. work is that every phrase you say is one giant command parsed all at once. You may think of a sequence as a list of commands, but Dragonfly, NatLink, and Dragon see it as one phrase. Thus, if your phrase is too long, it simply fails to recognize.

As a simple example, consider a grammar that recognizes any one, two, or three digit number. If you give it a four digit number, it won't enter the first three digits -- it will fail to recognize the phrase.
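That all-or-nothing behavior can be imitated with a toy recognizer (an anchored regex in plain Python, not NatLink): the whole phrase either matches the grammar or nothing at all is recognized.

```python
import re

DIGIT = "|".join("zero one two three four five six seven eight nine".split())
# Grammar: one, two, or three digit words. Anchored at both ends, so a
# four-digit phrase doesn't yield a partial match -- it yields nothing.
PHRASE = re.compile(r"^(?:%s)(?: (?:%s)){0,2}$" % (DIGIT, DIGIT))

def recognize(phrase):
    return PHRASE.match(phrase) is not None

recognize("four two one")            # True: within the grammar
recognize("nine eight seven six")    # False: not even the first three digits
```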

Maybe you could do something with saving hypotheses? That seems really hacky, and of dubious benefit to me. I'm not really seeing the angle here.


calmofthestorm commented on July 28, 2024

(by atom I mean what you think of as a single command. Multiedit etc commands are a sequence of atoms. Examples of atoms include "up 10", "with statement" (if using Python vocabulary), and "score hello world").


sboosali commented on July 28, 2024

Thanks!

"and it's also worth pointing out that Dragon is in the best position to do such automated simplification."

I was thinking that myself, but Dragon would also be in the best position to document their API, or to open-source their engine :p

also, are static vocabularies Natlink rules and dynamic vocabularies Natlink lists?

I've been doing my own experiments, and I haven't been able to trigger a "BadGrammar: too complex" error. Could you send me either the high-level grammar that triggered the error for you, or even the low-level NatLink grammar it was compiled down to?

e.g. this grammar was not rejected for complexity and did successfully recognize long chains.

<dgndictation> imported;

<command> exported
 = <phrase_9>
 ;

<phrase_9> = <phrase_cons> <phrase_8> | <phrase_0>;
<phrase_8> = <phrase_cons> <phrase_7> | <phrase_0>;
<phrase_7> = <phrase_cons> <phrase_6> | <phrase_0>;
<phrase_6> = <phrase_cons> <phrase_5> | <phrase_0>;
<phrase_5> = <phrase_cons> <phrase_4> | <phrase_0>;
<phrase_4> = <phrase_cons> <phrase_3> | <phrase_0>;
<phrase_3> = <phrase_cons> <phrase_2> | <phrase_0>;
<phrase_2> = <phrase_cons> <phrase_1> | <phrase_0>;
<phrase_1> = <phrase_cons> <phrase_0> | <phrase_0>;
<phrase_0> = <dgndictation>;

<phrase_cons>
 = <casing>
 | <joiner>
 | <surround>
 | <letter>+
 ;

<casing>
 = lower
 | upper
 | capper
 ;

<joiner>
 = camel
 | class
 | file
 | snake
 | list
 | dash
 | squeeze
 ;

<surround>
 = string
 | circle
 | square
 | braced
 | diamond
 | spaced
 ;

<letter>
 = ay
 | bee
 | sea
 | dee
 | ee
 | eff
 | gee
 | aych
 | i
 | jay
 | kay
 | el
 | em
 | en
 | oh
 | pea
 | Q
 | are
 | ess
 | tea
 | you
 | vee
 | dub
 | ex
 | why
 | zee
 ;

"lexing takes the grammar into account, rather than the two being independent passes"

that's a great point.


calmofthestorm commented on July 28, 2024

I'm currently in the process of moving and don't have any microphones with me at the moment, and Dragon won't even start without one :/ That said, try taking multiedit from https://github.com/dictation-toolbox/aenea-grammars/blob/master/_multiedit/_multiedit.py and changing max=16 on line 238 to something larger. IIRC 32 is enough to trigger it, but try higher and come down.

If this doesn't work, I'll see if I can produce an example once I get my mic set up again.


calmofthestorm commented on July 28, 2024

I was thinking that myself, but Dragon would also be in the best position to document their API, or to open-source their engine :p

Full disclosure: I haven't dug too much into the lower levels of the Dragon -> Natlink -> Dragonfly stack. I've studied Dragonfly a bit, but I mostly treat Natlink as a black box. It's entirely possible that limitations I think are present aren't actually there, so don't take my "I don't think it can be done"s too seriously :-) The main reason I haven't looked deeper is that aenea already does basically everything I want, and I stopped finding reverse-engineering low-level binary formats fun about 15 years ago :-)

also, are static vocabularies Natlink rules and dynamic vocabularies Natlink lists?

Exactly.


calmofthestorm commented on July 28, 2024

I just attempted to reproduce the "grammar too complex" issue but was unable to do so. By increasing repeat counts, etc, I did notice a decrease in recognition accuracy and speed, but not a hard fail as I recalled.

The last time I encountered this issue it was about a year ago when I was working on my VIM bindings. Initially I envisioned a deeper grammar with verbal modes and cancellations ("enter letters mode a b c leave letters mode" or whatnot), and ran into it then. Unfortunately I don't seem to have committed the grammars that caused the issue.

Based on this, I guess I understand this issue even less well than I previously did. Sorry I couldn't be more helpful.


sboosali commented on July 28, 2024

Thanks for the follow-up.

Deeply nested grammars sound like the right way to make grammars composable. If you do implement something supporting:

"enter letters mode a b c leave letters mode"

in the framework, let me know!


jgarvin commented on July 28, 2024

I've been rolling my own server/client to use Dragon on Linux (I started my project before I knew about aenea) and stumbled on this thread trying to find out if anyone knew the exact conditions that trigger natlink.BadGrammar complaining about the grammar being too complex. AFAICT it is a matter of raw size, not just nesting. I had a grammar that was working fine until I added two new entries; now it's too big. Try repetitions of large mapping rules. I have a set of voice commands for emacs in python mode and one for lisp mode, where the only difference in the grammar is the number of language keywords supported; 4 phrases are added to the mapping rule for every keyword. The python one works fine, the lisp one is now too big. They have the same level of nesting.


sboosali commented on July 28, 2024

that's interesting. can you send a link to your two grammars?



jgarvin commented on July 28, 2024

Here's the 128 keyword list for the lisp mode:
https://bpaste.net/show/8d97200e37f9

Note that some elements are lists of two strings rather than a string --
this is for when the spoken and written form should differ.

From the list I would generate a mapping rule that had 4 entries for each keyword in the list: "future <keyword> [<n>]", "prior <keyword> [<n>]", "key <keyword>", "new <keyword>". So there would be 4 * 128 entries -- which, being a power of 2, makes sense as a limit; could be it always dies at 512. It died when I added the future/prior commands. That was too big, apparently.

Moving the keyword list into its own rule and the 4 possible commands into their own rule, making the new grammar just "<command> <keyword> [<n>]", made the error go away -- which ironically allows more possibilities, because before, key and new didn't have a number after them and now they do.

My guess is Dragon barfs if you try to make an alternative with 512 entries, since I assume that's what MappingRule builds for you.
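The refactor can be checked with a rough count of the largest alternative set Dragon would see in each formulation (plain Python; the keyword list here is a stand-in for the real 128-entry lisp list):

```python
keywords = ["kw%d" % i for i in range(128)]  # stand-in for the lisp keywords
commands = ["future", "prior", "key", "new"]

# Flat MappingRule: one top-level alternative per (command, keyword) pair.
flat_alternatives = len(commands) * len(keywords)
print(flat_alternatives)   # 512 -- the size at which it died

# Factored "<command> <keyword> [<n>]": each sub-rule stays small, so the
# widest single alternative set is just the keyword list itself.
factored = max(len(commands), len(keywords))
print(factored)            # 128
```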



jgarvin commented on July 28, 2024

Actually, I forgot to subtract out the leading lines: there are only 116 keywords; that the source file containing them was 128 lines is pure luck :p Still, adding future/prior would have pushed it over 256, which could be a limit.



sboosali commented on July 28, 2024

I tried a list of 1000 terminals (http://simple.wikipedia.org/wiki/Wikipedia:List_of_1000_basic_words) in a rule; it did not throw any BadGrammar errors and successfully recognized them when spoken.

Can you reproduce this:

https://bpaste.net/show/aa6c2117c91e

If you drop this file into your MacroSystem folder (that's a NatLink thing, not an aenea thing), it should disable the other grammars and activate itself. There are other commented-out grammars that should all work too, if you're interested. I think the only problem I had was with a recursive grammar, which didn't throw a BadGrammar error but just crashed Dragon. lol.

The limit might also depend on the system and version and stuff. Like RAM? I'm running Dragon 13 in a VM that has two CPUs and 4 GB of RAM, fwiw.


sboosali commented on July 28, 2024

@poppe1219

this worked for me:

        natlink.recognitionMimic(["Start","spell","mode"])

since it takes a list, not a string: https://github.com/sboosali/NatLink/blob/9545436181f23652224041afa2035f12fa60d949/NatlinkSource/natlink.txt#L209

curse you dynamic types!

;)

you might need to deactivate your own grammars:

 # natlink, not dragonfly
 self.activateSet([], exclusive=0)

or explicitly handle the recognition yourself with:

# natlink, not dragonfly
def initialize(self):
    self.load(self.gramSpec, allResults=1)
    self.activateSet(["..."], exclusive=1)
    ...

def gotResultsObject(self, recognitionType, resultsObject):
    # 'other' means a different grammar recognized the utterance,
    # e.g. Dragon's built-in spelling grammar
    if recognitionType == 'other':
        ...

(and of course, you still can't embed this in the middle of another rule)

