daniel-hauser / moneyman
Automatically save transactions from all major Israeli banks and credit card companies, using GitHub Actions (or a self-hosted Docker image).
When using docker run,
I get:
2022-05-07T15:50:49.602Z moneyman:notifier ❌ Error: error:1E08010C:DECODER routines::unsupported
I've investigated, and it happens because the .env file parsing fails:
...
GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----****\n-----END PRIVATE KEY-----\n"
...
How would you structure the RSA key for GOOGLE_SERVICE_ACCOUNT_PRIVATE_KEY?
What I've tried:
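One common cause, assuming the value is stored as in the snippet above with literal \n escapes: the two-character sequences must be turned back into real newlines before the key reaches Node's crypto layer. A sketch:

```typescript
// Sketch: .env parsers often deliver the PEM with literal "\n" two-character
// sequences rather than real newlines, which makes Node's key decoder fail.
const raw =
  "-----BEGIN PRIVATE KEY-----\\n****\\n-----END PRIVATE KEY-----\\n";

// Convert each literal backslash-n into a real newline before using the key.
const privateKey = raw.replace(/\\n/g, "\n");
```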
Hi Daniel, and thank you for this project.
I'm using your Docker image to easily export my transactions to JSON.
A few weeks ago it stopped working for Cal due to their site updating.
I've opened an issue on the main project, and they fixed it.
Could you update your project to use that project's latest version?
npm install fails on Apple silicon systems with the following error:
npm ERR! The chromium binary is not available for arm64.
I suggest adding this to the README:
If you're using an Apple silicon Mac, install Chromium manually and add the following env variables:
brew install chromium --no-quarantine
# env
export PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
export PUPPETEER_EXECUTABLE_PATH=`which chromium`
I get only the transactions
The current hash algorithm was designed to be compatible with the caspion hash, but that hash causes duplications for subsequent transactions with the same date, price, and description.
Since most scrapers (at least the ones I use) have an identifier, the new hash will be "date_companyId_accountNumber_chargedAmount_identifier". If identifier is falsy, "description_memo" will be used as a fallback.
The new hash will be available in the uniqueId field of the transaction object.
This change might require manual deduping of transactions, therefore the new hash will be opt-in via a TRANSACTION_HASH_TYPE env var with the value "moneyman".
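The proposed scheme can be sketched as follows; the Txn shape mirrors israeli-bank-scrapers transactions, and the helper below is an illustration of the description above, not the actual implementation:

```typescript
// Illustration of the proposed "moneyman" hash described above.
interface Txn {
  date: string;
  chargedAmount: number;
  description: string;
  memo?: string;
  identifier?: string | number;
}

function moneymanHash(txn: Txn, companyId: string, accountNumber: string): string {
  // Fall back to "description_memo" when identifier is falsy.
  const suffix = txn.identifier
    ? String(txn.identifier)
    : `${txn.description}_${txn.memo ?? ""}`;
  return `${txn.date}_${companyId}_${accountNumber}_${txn.chargedAmount}_${suffix}`;
}
```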
Since the default scrape window is 10 days, we will add a deprecation message to the sent messages with a link to this issue.
After at least 30 days, the default hash will be changed to the new hash.
The scrapers that currently use the old hash are those that use the hash field as the import_id.
Hi,
During the scrape I would like to see the days to scrape.
I know this existed in the past in the Telegram logs.
The documentation defines ACCOUNTS_TO_SCRAPE as "A comma separated list of providers to take from ACCOUNTS_JSON. if empty, all accounts will be used", with a default value of [].
With the current code (https://github.com/daniel-hauser/moneyman/blob/main/src/config.ts#L33), only setting the env variable in the following way works:
ACCOUNTS_TO_SCRAPE=max,isracard
Setting it any other way, for example =["max", "isracard"] or even "max", "isracard",
does not work, and no account is scraped.
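A minimal sketch of why only the plain form works, assuming the config simply splits the raw value on commas:

```typescript
// Sketch of comma-separated parsing, matching the documented format.
// "max,isracard" works; JSON-array syntax does not, because the brackets
// and quotes become part of the company ids and match nothing.
const rawEnv = "max,isracard"; // stand-in for process.env.ACCOUNTS_TO_SCRAPE

const accountsToScrape = (rawEnv ?? "")
  .split(",")
  .filter(Boolean)
  .map((id) => id.trim());
```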
Hi @daniel-hauser,
I got an error after changing to a Telegram group chat.
I ran with the debug flag set; the relevant error is attached:
moneyman:scrape ended +0ms
moneyman:data scraping took 6.6s +7s
uncaughtException, sending error
moneyman:notifier ❌
moneyman:notifier Caught exception: Error: 429: Too Many Requests: retry after 31
moneyman:notifier Exception origin: unhandledRejection +16s
uncaughtException, sending error
moneyman:notifier ❌
moneyman:notifier Caught exception: Error: 429: Too Many Requests: retry after 31
moneyman:notifier Exception origin: unhandledRejection +19ms
moneyman:main Error: 429: Too Many Requests: retry after 31
moneyman:main at Telegram.callApi (/Users//repos/moneyman/node_modules/telegraf/lib/core/network/client.js:315:19)
moneyman:main at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
moneyman:main at async editMessage (file:///Users//repos/moneyman/dst/notifier.js:19:13)
moneyman:main at async file:///Users//repos/moneyman/dst/index.js:30:21
moneyman:main at async file:///Users//repos/moneyman/dst/data/index.js:19:13
moneyman:main at async scrapeAccount (file:///Users//repos/moneyman/dst/data/index.js:41:5)
moneyman:main at async scrapeAccounts (file:///Users//repos/moneyman/dst/data/index.js:17:24)
moneyman:main at async Promise.all (index 0)
moneyman:main at async run (file:///Users//repos/moneyman/dst/index.js:27:31)
moneyman:main at async file:///Users//repos/moneyman/dst/index.js:14:1 +29s
moneyman:notifier ❌ 429: Too Many Requests: retry after 31
moneyman:notifier Error: 429: Too Many Requests: retry after 31
moneyman:notifier at Telegram.callApi (/Users//repos/moneyman/node_modules/telegraf/lib/core/network/client.js:315:19)
I guess the issue is that the edit-message call happens frequently and causes Telegram to rate-limit the requests.
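If that's the cause, one mitigation is to honor the "retry after N" hint Telegram returns. A generic sketch (not moneyman's actual notifier code):

```typescript
// Generic sketch of honoring Telegram's "retry after N" hint before
// retrying a rate-limited API call.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Telegram 429 errors embed the wait time in the message, e.g.
      // "429: Too Many Requests: retry after 31".
      const match = /retry after (\d+)/.exec(String((err as Error).message ?? err));
      if (!match || attempt >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, Number(match[1]) * 1000));
    }
  }
}
```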
Hi Daniel, and thank you for this project.
I've been using it for a while, and recently updated to a newer version.
I'm having trouble with exporting to a local JSON file.
I have LOCAL_JSON_STORAGE set to true in the .env file, but no file is created.
When running with Docker, it says that the file was successfully created, but no actual file exists.
I tried running with npm directly, but got another error while creating the file: ENOENT: no such file or directory.
I tried this on another machine I have, with the same result.
BTW, it worked fine before pulling the latest version from the repo.
What do you think?
Hi,
israeli-bank-scrapers has the option to configure a custom defaultTimeout for the scraping timeout.
Is it possible to configure it for moneyman as well?
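A sketch of how this could be wired through an env var; the variable name SCRAPER_TIMEOUT is an assumption, not an existing moneyman option:

```typescript
// Sketch only: SCRAPER_TIMEOUT is a hypothetical env var, with a 30s
// fallback when it is unset.
const defaultTimeout = Number(process.env.SCRAPER_TIMEOUT ?? 30_000);

// defaultTimeout would then be forwarded inside the options object that
// moneyman already builds per account.
const scraperOptions = { defaultTimeout };
```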
Hi,
When I fork to my other repo, the Docker image can't be pulled.
This is because the Docker image name should match your repo name and not another repo's. See below:
IMAGE_NAME: ghcr.io/${{ github.repository }}:latest
Or am I confused, and should the Docker image be built in my ECR repository?
Hi,
When I start the scraper manually, I see duplicates in the transactions.
When I trigger the action again, all the transactions are written several times.
Note: the hash field exists and I don't touch it.
Hi,
Thanks!
I am using the json output.
I get this message:
Saved to:
📝 LocalJsonStorage (/app/output/2022-11-30T07:23:38.285Z.json)
But there is no dir '/app/output/'.
Also, there is no file named '2022-11-30T07:23:38.285Z.json' anywhere on my computer.
So the question is: where is the JSON saved, and how can I change it?
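For what it's worth, when running in Docker the path /app/output is inside the container, so the file only exists there unless a host directory is mounted. A sketch (the host ./output path is just an example):

```shell
# The container writes to /app/output inside the container; mount a host
# directory there to get the files on the host. Paths here are examples.
mkdir -p ./output
docker run --rm \
  --env-file .env \
  -v "$(pwd)/output:/app/output" \
  ghcr.io/daniel-hauser/moneyman:latest
# The JSON files should then appear in ./output on the host.
```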
In Caspion, we have had a plan to implement a CLI for a long time, because running it automatically is a required feature.
I hope we will manage to implement it someday, maybe with the help of other contributors.
As mentioned in eshaham/israeli-bank-scrapers#620, some transactions abroad are dated 2 or 3 months in the future.
For this reason, futureMonthsToScrape was added to the ScraperOptions:
https://github.com/eshaham/israeli-bank-scrapers/blob/master/src/scrapers/interface.ts#L54
I suggest adding a new environment variable to be able to set this parameter.
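A sketch of the suggested wiring; the env var name FUTURE_MONTHS_TO_SCRAPE is an assumption:

```typescript
// Sketch of the suggested env var; FUTURE_MONTHS_TO_SCRAPE is an assumed
// name, and the option is only included when the variable is set.
const futureMonthsToScrape = process.env.FUTURE_MONTHS_TO_SCRAPE
  ? Number(process.env.FUTURE_MONTHS_TO_SCRAPE)
  : undefined;

const scraperOptions = {
  ...(futureMonthsToScrape !== undefined && { futureMonthsToScrape }),
};
```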
Hi again,
2 issues encountered after the update:
Caught exception: Error: 400: Bad Request: message is not modified: specified new message content and reply markup are exactly the same as a current content and reply markup of the message
Exception origin: unhandledRejection
❌
Caught exception: Error: 400: Bad Request: message is not modified: specified new message content and reply markup are exactly the same as a current content and reply markup of the message
Exception origin: unhandledRejection
Parsing config
Telegram logger initialized, status and errors will be sent
Scraping started
uncaughtException, sending error
uncaughtException, sending error
uncaughtException, sending error
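One way to avoid the "message is not modified" 400 is to skip edits whose text is identical to the previous one. A sketch (not the actual notifier code):

```typescript
// Sketch of skipping no-op edits, which trigger Telegram's
// "message is not modified" 400.
let lastText = "";

async function editIfChanged(
  edit: (text: string) => Promise<void>,
  text: string,
): Promise<void> {
  if (text === lastText) return; // identical content would be rejected anyway
  lastText = text;
  await edit(text);
}
```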
I have been using your project for some time now and find it to be a great tool. However, I was wondering if there are any plans to support automatic category classification similar to the caspion project?
If not, do you have any suggestions for alternative solutions that could achieve similar results?
Automatic category classification would greatly enhance the usability of the project, as it would allow users to organize and sort their content easily. It would also save time and effort that would otherwise be spent manually categorizing items.
Hi,
I have an issue with MAX. I checked the password and reset it, and still get the same issue.
Does anyone have any idea why?
Orenbot, [7 Dec 2023 at 12:40:25]:
Config:
Worksheet name: _moneyman
Start Date: 2023-11-27T10:40:24.620Z (10 days back)
TZ: Asia/Jerusalem
[visaCal] END_SCRAPING, took 17.3s
[visaCal] END_SCRAPING, took 15.8s
[max] END_SCRAPING, took 30.2s
total time: 64.1s
Accounts updated:
✔️ [visaCal] 1080: 17
✔️ [visaCal] 2380: 19
✔️ [visaCal] 1564: 7
✔️ [visaCal] 1696: 0
✔️ [visaCal] 1098: 0
✔️ [visaCal] 8011: 0
❌ [max] GENERIC
Navigation timeout of 30000 ms exceeded
Saved to:
📝 Google Sheets (_moneyman)
7 added, 36 skipped
(36 existing, 0 pending)
Hi,
I am experiencing a general error when scraping data from Leumi and Mizrahi banks. I have tried both locally and via a GitHub workflow.
I am receiving the following error message:
0 transactions scraped.
Accounts updated:
❌ [leumi] GENERIC
waiting for selector #enter_your_account a failed: timeout 30000ms exceeded
❌ [mizrahi] GENERIC
Cannot read properties of undefined (reading 'getProperty')
Saved to:
😶 None
-------
Pending txns:
😶 None
Do you have any idea what could be causing this issue?
Hi
I experience a general error scraping from Max.
Hi, thanks for your work :-D
When I run the code locally I get a timeout. Log below:
npm run start
> [email protected] start /home/ws/moneyman
> node dst/index.js
Parsing config
Telegram logger initialized, status and errors will be sent
Scraping started
moneyman:main Scraping started +0ms
moneyman:notifier
moneyman:notifier Config:
moneyman:notifier Worksheet name: _moneyman
moneyman:notifier Start Date: 2022-11-19T14:35:27.333Z (10 days back)
moneyman:notifier TZ: Asia/Jerusalem
moneyman:notifier +0ms
moneyman:notifier Starting... +554ms
moneyman:data scraping 1 accounts +0ms
moneyman:data start date 2022-11-19T14:35:27.333Z +1ms
moneyman:data scraping account #0 (type=yahav) +0ms
moneyman:scrape started +0ms
moneyman:scrape [yahav] START_SCRAPING +3ms
moneyman:scrape [yahav] INITIALIZING +1ms
moneyman:LocalJsonStorage init +0ms
I am using WSL2.
If I run the code with Docker (on a different computer, but with the same .env file), it works.
Can you help me please?
Hi,
Thanks for sharing your project.
I'm trying to get it to work for the first time.
Here is the output I'm receiving:
> [email protected] start
> node dst/index.js
Parsing config
Telegram logger initialized, status and errors will be sent
Scraping started
uncaughtException, sending error
npm notice
npm notice New minor version of npm available! 9.6.4 -> 9.7.2
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v9.7.2>
npm notice Run `npm install -g [email protected]` to update!
npm notice
But I'm not receiving the error log.
Here is my .env file:
ACCOUNTS_JSON=[{"companyId":"visaCal","username":"my_username","password":"my_password"}]
LOCAL_JSON_STORAGE=TRUE
TELEGRAM_API_KEY="my_string"
TELEGRAM_CHAT_ID="my_string_or_number_tried_both_ways"
Here is the "main.py" Python file which I'm running on my Ubuntu PC:
import subprocess

command = [
    "docker", "run",
    "--rm",
    "-v", "my_folder_path:/some_string_path",
    "--env-file", ".env",
    "ghcr.io/daniel-hauser/moneyman:latest",
]

# Execute the Docker run command
subprocess.run(command)
Can you please help me debug this?
Thanks a lot,
much appreciated
Hi @daniel-hauser ,
Can you please add an example of ACCOUNTS_JSON?
It would help me and others implement it faster.
Thanks
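For reference, a hypothetical .env example based on the israeli-bank-scrapers credential fields; check that project's README for the exact fields each company requires:

```shell
# Hypothetical .env example. Field names follow the israeli-bank-scrapers
# credential interfaces; the value must be a single line of valid JSON.
ACCOUNTS_JSON=[{"companyId":"hapoalim","userCode":"AB1234","password":"***"},{"companyId":"visaCal","username":"user","password":"***"},{"companyId":"isracard","id":"123456789","card6Digits":"123456","password":"***"}]
```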
When several accounts are indicated, there is no way to tell in a transaction where the transactions came from.
For example, a single Isracard account can return several credit cards, and, in this case, the 'account' field will be the last 4 digits of the credit card.
If several accounts are requested, there is no way (other than looking in the hash field) to know the companyId.
{
  "account": "123456789",
  "chargedAmount": 2683,
  "date": "2023-07-23T21:00:00.000Z",
  "description": "blah",
  "hash": "date+chargedAmount+description+memo+companyId+accountNumber",
  "identifier": 12799000803480,
  "memo": "",
  "originalAmount": 2683,
  "originalCurrency": "ILS",
  "processedDate": "2023-07-23T21:00:00.000Z",
  "status": "completed",
  "type": "normal"
},
Just that... what about YNAB support?
It's not a lot of work: a couple of env vars for the YNAB token, budget, and accounts, plus converting the transactions from the moneyman format to the YNAB format.
@daniel-hauser WDYT about a pull request to support it?
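A rough sketch of the conversion; the output field names (account_id, milliunit amount, 36-char import_id) follow YNAB's public API, while the ScrapedTxn shape and the accountId mapping are assumptions for illustration:

```typescript
// Rough sketch of mapping a scraped transaction to a YNAB transaction.
interface ScrapedTxn {
  date: string; // ISO timestamp
  chargedAmount: number; // in the account currency (ILS)
  description: string;
  hash: string;
}

function toYnab(txn: ScrapedTxn, accountId: string) {
  return {
    account_id: accountId,
    date: txn.date.slice(0, 10), // YNAB expects YYYY-MM-DD
    amount: Math.round(txn.chargedAmount * 1000), // YNAB uses milliunits
    payee_name: txn.description,
    import_id: txn.hash.slice(0, 36), // import_id is limited to 36 chars
  };
}
```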
Hi,
An issue with the CAL login was fixed in the main project 2 days ago.
Could you please update the code in this project to pull those changes and fix it?
(Also update the Docker image and other dependents.)
Thank you!