
AnnA : Anki neuronal Appendix

Tired of having to deal with anki flashcards that are too similar when grinding through your backlog? This python script creates a filtered deck in an optimal review order. It uses Machine Learning / AI to keep semantically linked cards far from one another.

One sentence summaries

Here are different ways of looking at what AnnA can do for you in a few words:

  • When you don't have the time to complete all your daily reviews, use this to create a special filtered deck that makes sure you will only review the cards that are most different from the rest of your reviews.
  • When you have too many learning cards and fear that some of them are too similar, use this to automatically review a subset of them.
  • AnnA helps to avoid reviewing similar cards on the same day.
  • AnnA lets you reduce the number of daily reviews while increasing (not merely maintaining) retention.
  • AnnA can automatically write into each note which other notes are its semantic neighbours.

Note to readers

  1. The dev branch is usually less outdated than the main branch.
  2. I would really like to integrate this into anki somehow but am not knowledgeable enough about how to do it, how to manage anki versions, how to handle different platforms etc. All help is much appreciated!
  3. This project communicates with anki using a fork of the addon AnkiConnect called AnnA-companion. Note that AnnA-companion was tested from anki 2.1.44 to 2.1.54 only.
  4. Although I've been using it daily for months, I am still changing the code base almost every day. If you tried AnnA and were disappointed, try it again later: major improvements are regularly made.
  5. In the past I implemented several vectorization methods. I now keep only subword TF_IDF and sentence embeddings. TF_IDF is known to be reliable, fast and very general (it assumes nothing about your cards and will work for just about any language, format or phrasing). TF_IDF works very well if you have a large number of cards. Sentence embeddings are more AI than ML in a way: aligned vectors are supposed to have a similar meaning / topic, but they take far longer to compute than TF_IDF, so I implemented a caching mechanism that checks whether a note has changed since the last run and avoids unnecessary recomputation (see the sketch after this list).
  6. If you want to know how I'm using this, take a look at authors_routine.md
  7. If you like this, another project of mine can be used to do semantic search on your anki collection using AI : I call it Anki SemSearch
  8. Another project you might be interested in is anki_PrioriTag. It automatically creates a filtered deck to review the most forgotten tags.
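
Here is a minimal sketch of the caching idea from note 5 (not AnnA's actual code; the embed callable stands in for whatever sentence-embedding model is used):

    import hashlib
    import pickle
    from pathlib import Path

    CACHE_FILE = Path("embedding_cache.pkl")
    cache = pickle.loads(CACHE_FILE.read_bytes()) if CACHE_FILE.exists() else {}

    def cached_embedding(note_text, embed):
        # the key changes whenever the note text changes, so edited notes
        # get recomputed while unchanged notes are served from the cache
        key = hashlib.sha256(note_text.encode("utf-8")).hexdigest()
        if key not in cache:
            cache[key] = embed(note_text)
            CACHE_FILE.write_bytes(pickle.dumps(cache))
        return cache[key]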

Other features

  • The code is PEP8 compliant, dynamically typed, and all the functions have detailed docstrings. Contributions are welcome; opening issues is encouraged and appreciated.
  • Keeps the OCR data of pictures in your cards, if you analyzed them beforehand using AnkiOCR.
  • Can automatically replace acronyms in your cards (e.g. 'PNO' can be replaced with "pneumothorax" if you tell it to); regexps are supported (if you want access to my homemade list of french medical acronyms, open an issue). A sketch of this feature follows this list.
  • Can attribute more importance to some fields of some cards if needed.
  • Can be used to optimize the order of a deck of new cards, read this thread to know more
  • Previous features like clustering, plotting and searching have been removed, as I don't expect them to be useful to users. But the code was clean, so if you need them for some reason don't hesitate to open an issue.
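
To illustrate the acronym feature, here is a sketch (the dictionary entries below are made up; the real format lives in utils/acronym_example.py):

    import re

    # hypothetical entries: keys are regexps, values are the expanded text
    acronyms = {
        r"\bPNO\b": "PNO pneumothorax",
        r"\bHTA\b": "HTA hypertension arterielle",
    }

    def expand_acronyms(text):
        # expand each acronym so the vectorizer sees the full term as well
        for pattern, replacement in acronyms.items():
            text = re.sub(pattern, replacement, text)
        return text

    print(expand_acronyms("risk of PNO after central line"))
    # -> risk of PNO pneumothorax after central line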

FAQ

  • This looks awesome! Are you working on other things like that? Yes and thank you! Another project of mine aims to do semantic search on your anki collection using AI : I call it Anki SemSearch

  • How does it work? (Layman version) AnnA connects to its companion addon to access your anki collection. This allows it to use any python library without the trouble of packaging those libraries into an addon. It uses a vectorizer to assign numbers (vectors) to each card. If the numbers of two cards are very close, then the cards have similar content and should not be reviewed too close to each other.
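
A toy illustration of that idea, using scikit-learn directly (this is a sketch of the principle, not AnnA's actual pipeline):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    cards = [
        "signs of a tension pneumothorax",
        "tension pneumothorax: first line management",
        "France: capital city?",
    ]
    vectors = TfidfVectorizer().fit_transform(cards)
    print(cosine_similarity(vectors).round(2))
    # the first two cards score well above 0 with each other and 0 with the
    # third, so AnnA would push the first two far apart in the review queue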

  • And in more detail? The vectorization method AnnA uses is either sentence-transformers or subword TF_IDF. The latter is a way to count words in a document (anki cards in this case) and figure out which ones are more important. "Subword" here means that I used LLM tokenization (i.e. splitting "hypernatremia" into "hyper na tremia", which can then be linked to cards dealing with "Na Cl" for example). TF_IDF is generally considered part of machine learning, and sentence-transformers part of AI. AnnA leverages this information to make sure you won't review cards on the same day that are too similar. This is very useful when you have too many cards to review in a day. I initially tried to combine several vectorizers into a common scoring, but it proved unreliable. I also experimented with fastText (only one language at a time, can't be packaged, large RAM usage) and sentence-BERT (too large and too dependent on phrasing to be reliable), so I decided to keep it simple and provide only TF_IDF. The goal is to review only the most useful cards, kinda like a pareto distribution (i.e. review fewer cards, but review the right ones, and you should be able to keep afloat in med school). The code is mainly composed of a python class called AnnA. When you instantiate this class, you have to supply the name of the deck you want to filter. It will then automatically fetch the cards from your collection, use TF_IDF to assign vectors to each card, compute the distance matrix of the cards and create a filtered deck containing the cards in the optimal order (or bury the cards you don't have to review today). Note that cards of the same deck rated in the last X days will be used as a reference, to avoid cards that are too similar to yesterday's reviews as well. If you want to know more, either open an issue or read the docstrings in the code.
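
To see what "subword" means in practice, here is a quick sketch using a multilingual BERT-style tokenizer from the transformers library (not necessarily the exact model AnnA uses):

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
    print(tok.tokenize("hypernatremia"))
    # prints subword pieces, something like ['hyper', '##nat', '##remia'],
    # which can overlap with the pieces of semantically related cards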

  • Will this change anything in my anki collection? Apart from adding tags or filling the field "KNN_neighbours", AnnA should not modify anything; if you delete the filtered deck, everything will be back to normal. That being said, the project is young and errors might still be present.

  • What is the KNN_neighbours field? A field that, if manually added to the notetype, will contain a query that, when entered in the anki browser, shows you which cards of the deck are deemed most similar by AnnA. The number of neighbours is by default 0.1% of the size of the deck, with a minimum of 5.
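
In sketch form, assuming deck_size is the number of cards considered:

    n_neighbours = max(5, int(deck_size * 0.001))  # 0.1% of the deck, at least 5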

  • Does it work if I have learning steps over multiple days? Yes, that's my use case. Depending on the chosen task, AnnA either deals only with the review queue and ignores learning and new cards, or only buries the part of your learning cards that are too similar (ignoring due cards). You can use both one after the other every morning like I do. If you have learning cards in your filtered deck, it's because you lapsed those cards yesterday.

  • Does this only work for English cards? No! You can use either a multilingual sentence-transformer or TF_IDF, which uses multilingual LLM tokenization, so it should work on most languages (even if you have several different languages in the same deck).

  • Can I use this if I don't know python? Yes! Installing the thing might not be easy but it's absolutely doable. And you don't need to know python to run AnnA. I tried my best to make it accessible and help is welcome.

  • What do you call "optimal review order"? The order that minimizes the chance of reviewing similar cards on the same day. You see, anki has no knowledge of the content of cards and only cares about their interval and ease. Its built-in system of "siblings" is useful, but I think we can do better. AnnA was made to create filtered decks sorted by "relative_overdueness" (or another order) BUT in a way that keeps semantic siblings far from each other.

  • When should I use this? It seems good for dealing with the huge backlog you get in medical school, or just every day to reduce the workload. If you have 2 000 reviews to do but can only do 500 in a day, AnnA makes sure that you will get the most out of those 500.

  • How do you use this? I described my routine in a separate file called docs/authors_routine.md.

  • Can I use AnnA to optimize a list of new cards? I never did it personally but helped a user do it: see the related thread.

  • What are the power requirements to run this? I wanted to make it as efficient as possible, but am still improving the code. Computing the distance matrix can take a while on a very large number of cards, but this step is done in parallel on all cores, so it should not be the bottleneck. Let me know if some steps are unusually slow and I will try to optimize them. There are ways to make it much easier on the CPU; see the arguments low_power_mode and TFIDF_dim.

  • How long does it take to run? Is it fast? It takes about 1 min to run for a deck of less than 500 due cards, I've seen it take as long as 4 minutes on a 10 000 due cards deck. This is on a powerful laptop with no GPU on Linux.

  • Why is creating the queue taking longer and longer? Each card is added to the queue after having been compared to the rated cards of the last few days and to the current queue. As the queue grows, more computations have to be done. If this is an issue, consider creating decks that are only as big as you think you can review in a day. With recent improvements in the code, speed should really not be an issue.

  • Does this work only on Linux? It should work on all platforms, provided you have anki installed and AnnA-companion enabled. But the script (not the addon) uses libraries that might only work on some CPU architectures, so I'm guessing ARM systems would not work, but please tell me if you have tried.

  • What is the current status of this project? I use it daily but am still working on the code. You can expect breaking changes. I intend to keep developing until I have no free time left. Take a look at the TODO list of the dev branch if you want to know what's in the works. When in doubt, open an issue.

  • Do you mind if I open an issue? Not at all! It is very much encouraged, even just for a typo. That will at least make me happy. Of course, PRs are always welcome too.

  • Can this be made into an anki addon instead of a python script? I have never packaged things into anki addons, so I'm not sure. I heard that packaging complex modules into anki is a pain, and cross-platform support will be an issue. If you'd like to make this a reality, show yourself by opening an issue! I would really like this to be implemented into anki, and the search function would be pretty nice :)

  • What version of anki does this run on? I've been using AnnA from anki 2.1.44 and am currently on 2.1.54. Compatibility relies heavily on anki-connect. Please tell me if you run into issues.

  • If I create a filtered deck using AnnA, how can I rebuild it? You can't rebuild it or empty it through anki directly, as that would leave you with anki's order and not AnnA's. You have to delete the filtered deck, then run the script again. Hence, I suggest creating large filtered decks in advance.

  • I don't think the reviews I do on AnnA's filtered decks are saved, wtf? It might be because you're using multiple devices and are deleting the filtered deck on one of them without syncing first.

  • What is subword TF_IDF? Short for "subword term frequency–inverse document frequency". It's a clever way to split words into subparts then count the parts to figure out which cards are related.

  • Does it work with images? Not currently but sBERT can be used with CLIP models so I could pretty simply implement this. If you think you would find it useful I can implement it :).

  • What are the supported languages using TF_IDF? TF_IDF is language agnostic, but the language model used to split the words was trained on the 102 largest wikipedia corpora. If your language is very unusual, non-Unicode or non-standard in some way, you might have issues; don't hesitate to open an issue, as I would gladly take a look.

  • What is the field_mapping.py file? It's a file with a single python dictionary used by AnnA to figure out which fields of which notetype to keep. Using it is optional. By default, only the first field of each notetype is kept, but with this file you can keep multiple fields. Due to how TF_IDF works, you can add a field multiple times to give it more importance relative to other fields.

  • What is "XXX - AnnA Optideck"? The default name for the filtered deck created by AnnA. It contains the reviews in the best order for you.

  • Why are there only reviews and no learning cards in the filtered decks? When task is set to filter_review_cards, AnnA will fetch only review cards and not deal with learning cards. This is because I'm afraid of the weird behavior that could arise if I changed the due order of learning cards, whereas I can change it just fine using review cards.

  • Why doesn't the progress bar of "Computing optimal review order" always start at 0? It's just cosmetic. At each step of the loop, the next card to review is chosen by comparing it against the previously added cards and the cards rated in the last few days. This means that each iteration is slower than the last. So the progress bar begins at the number of cards rated in the past to indicate that. I took great care to optimize the loop, so it should not really be an issue.
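
Schematically, the loop looks like this sketch (simplified: AnnA's real scoring also mixes in the reference order, such as relative overdueness):

    import numpy as np

    def build_queue(dist, due_idx, rated_idx):
        # dist: pairwise distance matrix; due_idx / rated_idx: indices of due
        # cards and of cards rated in the last few days
        queue, remaining = [], list(due_idx)
        while remaining:
            seen = list(rated_idx) + queue
            if not seen:  # nothing to compare against yet
                queue.append(remaining.pop(0))
                continue
            # for each candidate, distance to its closest already-seen card;
            # 'seen' grows each turn, which is why later iterations are slower
            scores = [dist[c, seen].min() for c in remaining]
            queue.append(remaining.pop(int(np.argmax(scores))))
        return queue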

  • Does AnnA take into account my cards' tags? If you set the argument append_tags to True, the 2 deepest levels of each tag are appended at the end of the text and used as if they were part of the content of the card. Note that "_", "-" and "/" are replaced by a space. For example: "a::b::c::d some::thing" will be turned into "c d some thing".
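
For example, the transformation described above boils down to something like:

    import re

    def tag_to_text(tag):
        # keep the 2 deepest levels and replace "_", "-" and "/" with spaces
        levels = tag.split("::")[-2:]
        return " ".join(re.sub(r"[_\-/]", " ", level) for level in levels)

    print(tag_to_text("a::b::c::d"), tag_to_text("some::thing"))
    # -> c d some thing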

  • Why does the task "bury_excess_learning_cards" seem to ignore a few cards? I decided to have this task ignore cards that were failed today or yesterday: those are usually top priority, and I don't want AnnA to bury them.

  • How can I know AnnA is actually working and not burying random cards? Good question. I implemented a metric called the improvement ratio, which displays a score at the end of the run. It's roughly the sum of the distances between all cards in the new queue over the sum of the distances between all cards in what would have been the queue. It's not really indicative of anything beyond whether it's over 1 (= it helped) or under 1 (= it was detrimental). Getting it right is hard because it depends on the reliability of the reference_order itself. I am especially doubtful of the meaning of the ratio when dealing with learning cards, but so far it seems to work OK. Consider it a work in progress.
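
In sketch form, using the distance between consecutive cards of each queue (the real metric may be computed slightly differently):

    def improvement_ratio(dist, new_order, old_order):
        # ratio > 1: the new queue spreads similar cards further apart
        # ratio < 1: the new queue was detrimental
        def spread(order):
            return sum(dist[a][b] for a, b in zip(order, order[1:]))
        return spread(new_order) / spread(old_order)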

  • I have underlined or put in bold the most important parts of my cards, does it matter? Words that are in bold or underlined are actually duplicated in the text, which gives them twice as much importance.
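
Schematically (a sketch; the real text formatting handles more HTML variants):

    import re

    def emphasize(html):
        # duplicate bold/underlined spans so they weigh twice in TF_IDF
        return re.sub(r"<(b|u)>(.*?)</\1>", r"\2 \2", html)

    print(emphasize("a <b>tension pneumothorax</b> is an emergency"))
    # -> a tension pneumothorax tension pneumothorax is an emergency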

  • What scheduler should I use with AnnA? Can I use the V3 scheduler? I strongly suggest moving away from V1, as it messes with learning cards quite a lot. I personally use V2 and developed AnnA while using it. I don't fully understand the changes in V3 and am not certain that it works correctly with AnnA, so I can't really say. I plan to investigate switching to V3 around October 2022.

Getting started

  • First, read this page in its entirety; this is a complicated piece of software and you don't want to use it irresponsibly on your cards. The usage section is especially useful.
  • Install the addon AnnA Companion (Anki neuronal Appendix) - do LESS reviews with MORE retention!
  • Clone this repository (for example with git clone https://github.com/thiswillbeyourgithub/AnnA_Anki_neuronal_Appendix)
  • Install the required python libraries : pip install -r requirements.txt (in case of errors, try using python 3.9)
  • Edit the file utils/field_mapping.py: it contains a dictionary where the keys are notetypes and the values are lists of which fields to take into account (a sketch of the format follows this list).
  • Edit the file utils/acronym_example.py: it contains dictionaries where the keys are words to replace and the values are what the words should be replaced with.
  • There are two ways to run AnnA:
    • Either in a Python console : from AnnA import * ; AnnA(YOUR_ARGUMENTS)
    • Or directly in the terminal : python3 AnnA.py --help (note that the terminal mode was added after the python console mode and might still contain errors when parsing arguments)
  • If you want to run AnnA on several decks in a row like I do, edit the file utils/autorun_example.py, move it one folder up then execute it with python3 ./autorun.py
  • Open an issue telling me your remarks and suggestions
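
Here is a sketch of what utils/field_mapping.py could look like (the variable name, notetypes and field names below are hypothetical; check the shipped file for the real format):

    field_dic = {
        "Basic": ["Front"],
        "Cloze": ["Text", "Extra"],
        # repeating a field gives it more weight in the TF_IDF vectorization:
        "Medicine": ["Question", "Question", "Extra"],
    }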

Usage and arguments

AnnA was made with customizability in mind. All the settings you might want to edit are arguments of the AnnA class. Don't be frightened: many of those settings are rarely used, and the default values should be good for almost anyone. Here's a link to the --help page with detailed explanations: usage page. Otherwise you can display it yourself with python ./AnnA.py --help.
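
For example, a typical console call might look like this (the argument names below appear elsewhere on this page; adjust the deck name to your collection):

    from AnnA import *

    # bury the part of today's learning cards that are too similar:
    AnnA(deckname="MyDeck", task="bury_excess_learning_cards")

    # or create a filtered deck of review cards in optimal order:
    AnnA(deckname="MyDeck",
         task="filter_review_cards",
         reference_order="relative_overdueness",
         append_tags=True)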

AnnA includes built-in methods you can run after instantiating the class. Note that methods beginning with a "_" are not supposed to be called by the user and are reserved for backend use. Here's a list of sometimes useful methods:

note: to call a python method after running AnnA outside of a console, use the argument --keep_console_open and access the variable anna, for example anna.display_best_review_order()

  • display_best_review_order is used as a debugging tool: it only displays the order, which lets you check whether it seems correct without having to create a filtered deck.
  • save_df saves the dataframe containing the cards and all other info needed by AnnA as a pickle file. Used mainly for debugging. Files are saved to the folder DF_backups.

TODO

  • see dev branch

Credits and links that were helpful

Crazy ideas

The following is kept as legacy; it was made while working on the ancestor of AnnA, so don't judge please.

Disclaimer : I'm a medical student extremely interested in AI but who has trouble devoting time to this passion. This project is a way to get to know machine learning tools, but it can have practical applications. I like to have crazy ideas to motivate my projects, and they are listed below. Don't laugh. Don't hesitate to contribute either.

  • Scheduling: Using AnnA to do more varied reviews might, somehow, increase the likelihood of eureka moments where your brain creates new neural paths. That would supposedly help to remember and master a field.
  • Optimize learning via cues: A weird idea of mine is based on the notion that hearing a tone while you're learning something will increase your recall if the same tone plays while you take the test. So maybe it would be useful to play a tone that increases in pitch as you advance through the queue of due cards? I told you these were crazy ideas... The tone assigned to a card's cluster could also play when doing random reviews.
  • Mental Walk (credit goes to a discussion with Paul Bricman): integrating AnnA with VR by letting the user walk in an imaginary, procedurally generated world with the reviews materialized at precise locations. The user would have to walk to the flashcard's location, then do the review. The cards would be automatically grouped by cluster into rooms or forests or buildings etc., so you would not have to create the mental palace yourself, only the flashcards.
    • a possibility would be to do a dimension reduction to 5 dimensions. Use the first 2 to get the spatial location where the cluster would be placed. The cards would then be spread across the room, and the 3 remaining dimensions could be used to derive something about the room, like wall whiteness, floor whiteness and the tempo of a music playing in the background.
    • we could ensure that the clusters always stay in the same room, even after adding many cards or even across users, by querying a large language model for the vectors associated with the main descriptors of the cluster.
    • Put it on street view for automatic mental palace?
  • Source Walking: it would be interesting to do the anki reviews in VR where the floor would be your source (pdf, epub, ppt, whatever). The cards would be spread over the document, approximately located above the actual source sentence, hence leveraging the power of the mental palace while doing your reviews: accessing the big picture AND the small picture.

Contributors

aleksejrs, thiswillbeyourgithub

Issues

Link cards with same keyword?

Hi! Hope you're doing okay :)

So what I’ve been wondering is:
If I have a card which contains a word in the Expression field, for example Eat.
Is it possible to show all cards which contain this word in their field and parse/link them into another field in the card where Eat is?
For example:
FieldCard1–>Expression: Eat
FieldCard2–>Expression: I like to eat apples
FieldCard3–>Expression: He doesn’t eat apples

FieldCard1–>Sentences: I like to eat apples | He doesn’t eat apples

I have a special field which shows me the focused word of each note, so there are many Eats, drinks and go's. Sorting and learning is impractical because of the sheer amount of notes that would make me have to learn....

This is especially useful if one has many sentences which are correlated and make sense to learn. So one could just see many examples in one note and super-expose oneself to the keyword.

If this is possible I’d greatly appreciate your help! Thanks

Exception: 'Scheduler' object has no attribute 'rebuild_filtered_deck' (Anki 2.1.22)

Improvement ratio:
       _  ___   __    ___ ______  __
   _  / |/ _ \ / /_  ( _ )___ (_)/ /
 _| |_| | | | | '_ \ / _ \ |_ \ / / 
|_   _| | |_| | (_) | (_) |__) / /_ 
  |_| |_|\___(_)___/ \___/____/_/(_)
                                    

Creating deck containing the cards to review: /::s - AnnA Optideck
Traceback (most recent call last):
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/./autorun.py", line 15, in <module>
    AnnA(deckname="/::s",
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 316, in __init__
    self._bury_or_create_filtered()
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 1318, in _bury_or_create_filtered
    self._call_anki(action="createFilteredDeck",
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 345, in _call_anki
    raise Exception(response['error'])
Exception: 'Scheduler' object has no attribute 'rebuild_filtered_deck'

tqdm.std.TqdmKeyError: "Unknown argument(s): {'delay': 2}"

Traceback (most recent call last):
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/./autorun.py", line 15, in <module>
    AnnA(deckname="/::s",
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 307, in __init__
    self._init_dataFrame()
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 505, in _init_dataFrame
    self._getCardsInfo(
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 386, in _getCardsInfo
    with tqdm(total=target_thread_n,
  File "/home/fc/.local/lib/python3.9/site-packages/tqdm/std.py", line 1002, in __init__
    raise (
tqdm.std.TqdmKeyError: "Unknown argument(s): {'delay': 2}"
$ pip show tqdm
Name: tqdm
Version: 4.55.1
Summary: Fast, Extensible Progress Meter
Home-page: https://github.com/tqdm/tqdm
Author: None
Author-email: None
License: MPLv2.0, MIT Licences
Location: /home/fc/.local/lib/python3.9/site-packages
Requires: 
Required-by: nltk

Move the tag ignoring settings to a user-customizable place?

Anki adds "leech" to difficult cards.
MorphMan adds lots of tags like "mm_comprehension" or "mm_fresh".
Tags like "relation-blabla": "relation" needs to be ignored.
I have some custom tags like "important", "urgent", "morphman_important" or "morphman_urgent".

Doesn't create logs.txt, and therefore crashes

If logs.txt doesn't exist when I start the program, it fails with

$ python3 ./autorun.py 
Traceback (most recent call last):
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/./autorun.py", line 7, in <module>
    from AnnA import *
  File "/mnt/addons/AnnA_Anki_neuronal_Appendix/AnnA.py", line 41, in <module>
    Path("logs.txt").read_text().split("\n")[-5000:]))
  File "/usr/lib/python3.9/pathlib.py", line 1255, in read_text
    with self.open(mode='r', encoding=encoding, errors=errors) as f:
  File "/usr/lib/python3.9/pathlib.py", line 1241, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/lib/python3.9/pathlib.py", line 1109, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'logs.txt'

commit 5d74d65

Do not use collections in Trash

I added the cache path fix and noticed this:

Ankipandas will use anki collection found at /home/fc/.local/share/Trash/files/gfgsgf/collection.anki2

Note that vifm's trash directory is ~/.local/share/vifm/Trash

Optimize new queue?

Hey! Hope you're doing fine! :)
I was wondering if it's possible to optimize new cards through their content based on what one already learned (is:review).
If one learns sentences and they're all pretty similar in context one ends up learning too much of the same...
Just an idea!
Have a great day

FSRS-AnnA Integration

Just inquiring if there's a reason pandas is locked at this version, my machine is having issues installing this specific version. I figure it might be due to ankipandas requiring it, but didn't see any info about this after a cursory search.

FWIW, I found this addon via a discussion in the FSRS github centered around improving the accuracy of the FSRS algorithm by identifying "conceptual" or (you might call them) "semantic" siblings. So I wanted to try this out, as it seemed a very promising direction of research.

Usability in language learning?

Hey!
This project is just awesome 🥇
Wondering if this can be used for language learning or will be in the future?
Would be cool if you considered it. I'm currently using the MorphMan addon but it lacks a lot of maintenance and isn't really good for some things.
Anyways, congrats for this awesome idea!

Error with Mandarin Chinese

I get this error when trying to run the script with Chinese. Works perfectly with other languages.
First I thought it was due to the fact that there aren't many due cards, but I tried yesterday with more than 200 cards and it dropped the same error.

python3.9 autorun.py                                                          ✔ 
    _                   _    
   / \   _ __  _ __    / \   
  / _ \ | '_ \| '_ \  / _ \  
 / ___ \| | | | | | |/ ___ \ 
/_/   \_\_| |_|_| |_/_/   \_\
                             

(Anki neuronal Appendix)


Couldn't find this deck.
Enter the name of the deck:
>Mandarin
Selected deck: Mandarin

Task : created filtered deck containing review cards
Getting due card list...
 >  '"deck:Mandarin" is:due is:review -is:learn -is:suspended -is:buried -is:new -rated:1'
Found 15 reviews...

Getting cards that where rated in the last 3 days...
 >  '"deck:Mandarin" rated:3 -is:suspended -is:buried'
Found 49 cards...

Asking Anki for information about 64 cards...
(Large number of cards to retrieve: creating 10 threads of size 15)

Done threads: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:02<00:00,  2.00thread/s]
Got all infos in 2 seconds.

Combining relevant fields: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:00<00:00, 961.35 card/s]
Formating text: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:00<00:00, 911.42 card/s]


Printing 2 random samples of your formated text, to help adjust formating issues:
 * 1631661855764 : 开口: 開口 word OM ts list
 * 1631661854970 : 上去 word fresh ts list


Looking for acronyms that perhaps should be in 'acronym_file'...
No acronym found in those cards.
Vectorizing text using TFIDF: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 64/64 [00:00<00:00, 6736.32it/s]

Reducing dimensions to 100 using SVD... Traceback (most recent call last):
  File "/home/yaoberh/.local/share/Anki2/addons21/AnnA_Anki_neuronal_Appendix/autorun.py", line 16, in <module>
    AnnA(deckname="Some_Deck",
  File "/home/yaoberh/.local/share/Anki2/addons21/AnnA_Anki_neuronal_Appendix/AnnA.py", line 321, in __init__
    self._compute_card_vectors()
  File "/home/yaoberh/.local/share/Anki2/addons21/AnnA_Anki_neuronal_Appendix/AnnA.py", line 853, in _compute_card_vectors
    t_red = svd.fit_transform(t_vec)
  File "/home/yaoberh/.local/lib/python3.9/site-packages/sklearn/decomposition/_truncated_svd.py", line 207, in fit_transform
    raise ValueError(
ValueError: n_components must be < n_features; got 76 >= 76

Several notetypes match, wrong one selected

Several notetypes match 'GEOG-country-flag'. Selecting 'GEOG-country'
Several notetypes match 'C-Bas-hidefrontFwd'. Selecting 'C-Bas'
Several notetypes match 'C-Clz-pron'. Selecting 'C-Clz'

Why? Those are all full names. Does it try fields from the correct matches too?

Anki 2.1.22 (I suppose it kind of works after some tweaking, but I am not sure, because it can't rebuild the filtered deck), AnnA commit e2c353e.

Use "127.0.0.1" instead of "localhost"?

My ISP had a problem, and DNS stopped working. Pinging "localhost" and using nmap on the Anki add-on port worked, but urllib used by AnnA couldn't resolve "localhost" for some reason.

Never mind, it was a simple misconfiguration in my local DNS caching server, that I already corrected.

Numpy wheels Python 3.10

The necessary wheels for numpy on python 3.10 fail to install when trying to install with pip install -r requirements.txt and also, on Manjaro/Arch the package pdb isn't available 😢

Implementing a frequency list for language learning

Hey!
Maybe this is out of your interest but, as a language learner it is very important to acquire the most used words/vocabulary of a language and build from that point on by learning from a set of frequency words. These words are the most used in a language and can be found on wiktionary:
https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists

When I start learning a language I first learn these words and then move on to sentences which contain these words.
Wondering if this can somehow be implemented in this addon by supplying a list of "must contain these words" list as a text file, to tell the addon "hey, this is priority number one" and check which of my cards contain these words. They're ordered from 1->most frequent to for example 2000-> least used.

Maybe this is of no interest to you, but I know a few people that learn languages this way and it would be a game changer!
Anyways, if you're interested I could send you a frequency.txt file in a few languages for you to try out or you could just check out that website.
Have a great day and thanks for reading!

Crash AnnA.py(1517)_compute_opti_rev_order() ro = -1 * (df.loc[due, "interval"].values +

Computing relative overdueness...
NOTIF: This should probably not happen.
> /AnnA_Anki_neuronal_Appendix/AnnA.py(1517)_compute_opti_rev_order()
-> ro = -1 * (df.loc[due, "interval"].values +
(Pdb) bt
  /AnnA_Anki_neuronal_Appendix/autorun.py(19)<module>()
-> AnnA(deckname="/::s",
  /AnnA_Anki_neuronal_Appendix/AnnA.py(525)__init__()
-> self._compute_opti_rev_order()
> /AnnA_Anki_neuronal_Appendix/AnnA.py(1517)_compute_opti_rev_order()
-> ro = -1 * (df.loc[due, "interval"].values +
(Pdb)

[Anki 23.10 beta 6] No module named 'PyQt5'

The Qt5 compatibility code is now off by default, which may break some add-ons that haven't been updated to support Qt6 in the last 2 years. If you find any have broken, please report the issues to the add-on author, and start Anki after setting the env var ENABLE_QT5_COMPAT to 1 to temporarily restore the compatibility code.

Error
An add-on you have installed failed to load. If problems persist, please go to Tools > Add-ons menu or disable this add-on.

While loading 'AnnA Companion Anki neuronal Appendix - do LESS reviews with MORE retention':
⁨Traceback (most recent call last):
  File "aqt.addons", line 239, in loadAddons
  File "C:\Users\Khonkhortisan\AppData\Roaming\Anki2\addons21\447942356\__init__.py", line 19, in <module>
    from PyQt5 import QtCore
ModuleNotFoundError: No module named 'PyQt5'

447942356\__init__.py

#from PyQt5 import QtCore
#from PyQt5.QtCore import QTimer
#from PyQt5.QtWidgets import QMessageBox
try:
    from PyQt6 import QtCore
    from PyQt6.QtCore import QTimer
    from PyQt6.QtWidgets import QMessageBox
except ImportError:
    from PyQt5 import QtCore
    from PyQt5.QtCore import QTimer
    from PyQt5.QtWidgets import QMessageBox
