dipdup-io / dipdup
Modular framework for creating selective indexers and featureful backends for dapps
Home Page: https://dipdup.io
License: MIT License
Hey there,
I wanted to update dipdup from version 3.0.4 to 3.1.1.
Now I'm wondering if there is a way to check whether reindexing is required, or to re-enable reindexing.
For automatic deployment I don't want to have to run the schema wipe
with every deployment just in case I might have to reindex...
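One thing you can inspect yourself: the dipdup_schema table (its layout appears in a schema export later in this section) has a reindex column that is set when DipDup has recorded a pending reindexing reason. A minimal sketch of checking it before deployment, not an official API, assuming a Postgres database and placeholder credentials:

```python
import asyncio

import asyncpg


async def reindex_required() -> bool:
    # DSN is a placeholder; point it at your indexer's database
    conn = await asyncpg.connect(dsn='postgres://user:pass@localhost/dipdup')
    try:
        # `reindex` stays NULL unless DipDup has recorded a reindexing reason
        row = await conn.fetchrow('SELECT reindex FROM dipdup_schema')
        return row is not None and row['reindex'] is not None
    finally:
        await conn.close()


print(asyncio.run(reindex_required()))
```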
General guidelines for developers.
dipdup is trying to query public_dipdup_state;
as far as I could investigate, the correct query might be dipdup_state,
if I'm not mistaken...
File "/usr/local/lib/python3.9/site-packages/dipdup/hasura.py", line 309, in _get_fields raise HasuraError(f'Unknown table `{name}`') from edipdup.hasura.HasuraError: Unknown table `public_dipdup_state`
relevant line(s) of code:
(Using Hasura 1.3.3 with dipdup 2.0.5 works.)
Run manually and on release
What feature would you like to see in DipDup?
Dump a long realtime queue to disk to make the --early-realtime
flag permanent.
Why do you need this feature, what's the use case?
--early-realtime
plus a giant index currently equals OOM.
Is there a workaround currently?
Initial sync without the flag.
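To make the idea concrete, here is a minimal sketch, not DipDup's implementation: spill realtime messages to SQLite so the buffer no longer lives on the Python heap during a long initial sync. All names are illustrative.

```python
import json
import sqlite3
from typing import Optional


class DiskQueue:
    """FIFO queue persisted in SQLite instead of process memory."""

    def __init__(self, path: str = 'realtime_queue.sqlite3') -> None:
        self._db = sqlite3.connect(path)
        self._db.execute(
            'CREATE TABLE IF NOT EXISTS queue ('
            'id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL)'
        )

    def push(self, message: dict) -> None:
        # Serialize the realtime message and append it to the tail
        self._db.execute(
            'INSERT INTO queue (payload) VALUES (?)', (json.dumps(message),)
        )
        self._db.commit()

    def pop(self) -> Optional[dict]:
        # Take the oldest message from the head, or None if the queue is empty
        row = self._db.execute(
            'SELECT id, payload FROM queue ORDER BY id LIMIT 1'
        ).fetchone()
        if row is None:
            return None
        self._db.execute('DELETE FROM queue WHERE id = ?', (row[0],))
        self._db.commit()
        return json.loads(row[1])
```

As a bonus, a queue persisted this way would also survive restarts, not just memory pressure.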
What feature would you like to see in DipDup?
A standard way to migrate dipdup_...
tables.
Why do you need this feature, what's the use case?
To store more metadata in the database.
Is there a workaround currently?
Ugly scripts.
What feature would you like to see in DipDup?
In config:
logging:
  '': INFO
  dipdup: WARNING
Why do you need this feature, what's the use case?
Python logging config is too complex.
Is there a workaround currently?
The -l logging.yml option in the CLI.
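For reference, the workaround's logging.yml maps onto the standard library's dictConfig format, which is where the complexity comes from. A Python equivalent of the proposed '' : INFO / dipdup: WARNING example (illustrative, not DipDup's loader):

```python
import logging.config

# '' names the root logger in the proposed config, so it maps to the
# 'root' key in the stdlib dictConfig schema.
logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'root': {'level': 'INFO'},
    'loggers': {
        'dipdup': {'level': 'WARNING'},
    },
})
```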
Steps to reproduce:
Use big map indexes
What did you expect to happen:
Not crashing with this error almost immediately during indexing
What actually happened:
The error that used to happen only occasionally now occurs very quickly, even during indexing and not just when something weird happens in realtime
master? Not applicable

When I try to parse a contract which has a big_map(address, set(nat))
field in storage, I get the following error:
pydantic.error_wrappers.ValidationError: 1 validation error for YupanaStorage
storage -> markets -> key
value is not a valid list (type=type_error.list)
This didn't happen in version 3.
Config example to reproduce this can be found below.
spec_version: 1.2
package: yupana_indexer

contracts:
  ytoken:
    address: KT1LTqpmGJ11EebMVWAzJ7DWd9msgExvHM94
    typename: yupana

datasources:
  tzkt_testnet:
    kind: tzkt
    url: https://api.hangzhou2net.tzkt.io/

indexes:
  ytoken:
    kind: operation
    datasource: tzkt_testnet
    contracts:
      - ytoken
    handlers:
      - callback: on_enter_market
        pattern:
          - destination: ytoken
            entrypoint: enterMarket
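For context on the error above: TzKT serializes a Michelson set as a JSON array, so the value side of big_map(address, set(nat)) should be a list of ints. A simplified, hypothetical reconstruction of what the generated storage model ought to accept (not the actual dipdup init output):

```python
from typing import Dict, List

from pydantic import BaseModel


class YupanaStorage(BaseModel):
    # big_map(address, set(nat)): address keys map to JSON arrays of nats.
    # The reported "storage -> markets -> key / value is not a valid list"
    # error suggests the generated type instead expected a list on the
    # *key* side.
    markets: Dict[str, List[int]]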
spec_version: 1.2
package: crypto_horses

database:
  kind: sqlite
  path: crypto_horses.sqlite3

contracts:
  crypto_horses_testnet:
    address: KT1WgoWgvKJiQ43MkeLoG7LtSZMxeED7Rn8T
    typename: crypto_horses

datasources:
  tzkt_testnet:
    kind: tzkt
    url: https://hangzhou2net.tzkt.io

indexes:
  crypto_horses:
    kind: operation
    contracts:
      - crypto_horses_testnet
    datasource: tzkt_testnet
    handlers:
      - callback: on_race_launch
        pattern:
          - destination: crypto_horses_testnet
            entrypoint: launch_race
What feature would you like to see in DipDup?
Something like dipdup drop <index_name>
to reset state in dipdup_index.
If the hook is implemented, call it; if not, show a warning or throw an exception.
Why do you need this feature, what's the use case?
More flexibility both in dev and prod.
Is there a workaround currently?
Drop indexes manually with scripts.
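A rough sketch of what such a script looks like today (not an official command): remove the index record so DipDup recreates it from config on the next start. The DSN is a placeholder; the table layout follows the dipdup_index schema shown later in this section.

```python
import asyncio

import asyncpg


async def drop_index(name: str) -> None:
    conn = await asyncpg.connect(dsn='postgres://user:pass@localhost/dipdup')
    try:
        # Delete the index state row; any project tables holding data
        # produced by this index must be cleaned up separately.
        await conn.execute('DELETE FROM dipdup_index WHERE name = $1', name)
    finally:
        await conn.close()


asyncio.run(drop_index('my_index'))
```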
What feature would you like to see in DipDup?
Ryan:
Another thing that would be very cool/useful would be to publish the schema for dipdup.yml to https://www.schemastore.org/json/ (https://github.com/SchemaStore/schemastore).
IDEs like PyCharm and VSCode automatically grab these schemas and can provide autocompletion for the dipdup.yml file, which would be 💯. See https://www.jetbrains.com/help/pycharm/json.html#ws_json_using_schemas
Why do you need this feature, what's the use case?
Easier to write a correct config.
Is there a workaround currently?
Posix doesn't exist under Windows; any reason why it's used instead of os?
Given a transaction which is a contract call (has an entrypoint key),
the pattern below does not capture the transaction:
handlers:
  - callback: on_callback
    pattern:
      - type: transaction
        source: contract_address
Steps to reproduce:
Use a big map index with skip_history: once and two contracts
What did you expect to happen:
Matches all handlers / contracts
What actually happened:
Randomly matches only one of them
master?

Config needed to replicate:
contracts:
  HEN_swap_v1:
    address: KT1RoynRPMidohYtsGc3HxgEWbqhDAtNGVHb
    typename: hen_minter
  HEN_objkts:
    address: KT1Q4TLSG8TZBgp9rbLxUdZ1gaVNmm2t52C5
    typename: hen_objkts

templates:
  hicetnunc_fast_bm:
    kind: big_map
    datasource: <datasource>
    handlers:
      - callback: hicetnunc.objkt.bm.on_royalties
        contract: <swap_v1>
        path: royalties
      - callback: hicetnunc.objkt.bm.on_metadata
        contract: <objkt>
        path: token_metadata
      - callback: hicetnunc.objkt.bm.on_ledger_update
        contract: <objkt>
        path: ledger

datasources:
  tzkt_testnet:
    kind: tzkt
    url: https://api.hangzhou2net.tzkt.io/
    http:
      retry_count: 100
      retry_sleep: 0.1
      retry_multiplier: 1.2
      ratelimit_rate: 200
      ratelimit_period: 1
      connection_limit: 10000

indexes:
  hicetnunc_fast_bm:
    template: hicetnunc_fast_bm
    values:
      datasource: tzkt_testnet
      objkt: HEN_objkts
      swap_v1: HEN_swap_v1
What feature would you like to see in DipDup?
Indexing tz... accounts (wallets)
Why do you need this feature, what's the use case?
Requested by Veqtor and nicoarbogast#0001 in Discord.
Also needed for the executors feature.
Is there a workaround currently?
No.
Currently, it seems like we have to drop our database when we change our models:
Reindexing required!
Reason: database schema has been modified
Additional context:
You may want to backup database before proceeding. After that perform one of the following actions:
* Eliminate the cause of reindexing and run `dipdup schema approve`
* Run `dipdup schema wipe [--immune]` command to drop database and start indexing from scratch
Is there not a command like Django's makemigrations to generate migration files?
Are there any plans to integrate aerich migrations?
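Judging by this thread there is no built-in equivalent yet, but aerich (the migration tool for Tortoise ORM, which DipDup uses) can be wired up by hand. A sketch under stated assumptions: the config module layout, DSN, and package name are all illustrative.

```python
# config.py -- Tortoise ORM config for aerich; adjust to your project.
# 'aerich.models' must be included so aerich can track applied migrations.
TORTOISE_ORM = {
    'connections': {'default': 'postgres://user:pass@localhost:5432/dipdup'},
    'apps': {
        'models': {
            'models': ['my_package.models', 'aerich.models'],
            'default_connection': 'default',
        },
    },
}

# Then, from the shell:
#   aerich init -t config.TORTOISE_ORM   # one-time setup
#   aerich migrate                       # generate a migration (like makemigrations)
#   aerich upgrade                       # apply it
```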
We have a leading underscore in some entrypoints to signify internal (cross-contract) or admin entrypoints; this crashes dipdup init:
File "/Users/goransandstrom/opt/anaconda3/bin/dipdup", line 8, in <module>
sys.exit(cli())
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/cli.py", line 52, in wrapper
return asyncio.run(fn(*args, **kwargs))
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/cli.py", line 109, in init
await dipdup.init()
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/dipdup.py", line 159, in init
await codegen.generate_types()
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/codegen.py", line 238, in generate_types
name = snake_to_pascal(name)
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/utils.py", line 34, in snake_to_pascal
return ''.join(map(lambda x: x[0].upper() + x[1:], value.replace('.', '_').split('_')))
File "/Users/goransandstrom/opt/anaconda3/lib/python3.8/site-packages/dipdup/utils.py", line 34, in <lambda>
return ''.join(map(lambda x: x[0].upper() + x[1:], value.replace('.', '_').split('_')))
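The failure follows from the helper shown in the last frames: a name with a leading underscore splits into a list whose first element is the empty string, and ''[0] raises IndexError. A minimal sketch of a tolerant variant (not necessarily how upstream fixed it):

```python
def snake_to_pascal(value: str) -> str:
    # Skip empty segments produced by leading or doubled underscores
    return ''.join(
        word[0].upper() + word[1:]
        for word in value.replace('.', '_').split('_')
        if word
    )


assert snake_to_pascal('_internal_transfer') == 'InternalTransfer'
```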
Steps to reproduce:
Create a big_map index for bigmap #511 (HEN) on mainnet and set skip_history: once
What did you expect to happen:
DipDup downloads the bigmap snapshot (current values of active keys) page by page and passes them to handlers batch by batch.
What actually happened:
DipDup accumulates the entire keyset and only then starts processing items.
master: yes

Was originally submitted by @veqtor.
What feature would you like to see in DipDup?
Call arbitrary SQL queries stored within the project.
Why do you need this feature, what's the use case?
Honestly, IDK. Lost feature request from one of our chats.
Is there a workaround currently?
async with in_transaction() as conn:
    conn...
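A slightly fuller version of that workaround, assuming Tortoise ORM's in_transaction helper; the SQL file path is illustrative:

```python
from tortoise.transactions import in_transaction


async def run_project_sql() -> None:
    # Read a query shipped with the project
    with open('sql/my_query.sql') as f:
        sql = f.read()
    async with in_transaction() as conn:
        # Run the raw SQL on the underlying connection
        await conn.execute_query(sql)
```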
What feature would you like to see in DipDup?
The idea: when having multiple indexes that actually depend on each other, it would be nice to be able to set a "sync order", allowing one to say, index tokens before a market contract. In realtime this perhaps shouldn't matter, but during historical sync I can see race conditions happening.
Why do you need this feature, what's the use case?
Is there a workaround currently?
No.
https://discord.com/channels/846362414039695391/846418873233571840/943761927576883240
What feature would you like to see in DipDup?
Ability to generate more complex queries.
Why do you need this feature, what's the use case?
More flexibility client-side.
Is there a workaround currently?
No.
Steps to reproduce:
Combine early_realtime
with a single index that has last_level set
What did you expect to happen:
No realtime connection at all.
What actually happened:
Realtime connection established; dipdup did not exit after sync.
master? any

What feature would you like to see in DipDup?
A graphql datasource for Hasura instances and other GraphQL APIs.
Why do you need this feature, what's the use case?
More integrations - more flexibility.
Is there a workaround currently?
The http datasource and manual requests.
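What the manual-requests workaround looks like in practice; a sketch assuming a local Hasura endpoint, with the query built against the dipdup_head table from the schema shown later in this section:

```python
import asyncio

import aiohttp


async def fetch_head() -> dict:
    # Plain POST to Hasura's GraphQL endpoint (URL is illustrative)
    query = '{ dipdup_head { name level } }'
    async with aiohttp.ClientSession() as session:
        async with session.post(
            'http://localhost:8080/v1/graphql',
            json={'query': query},
        ) as resp:
            return await resp.json()


print(asyncio.run(fetch_head()))
```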
What feature would you like to see in DipDup?
Match activations in operation indexes.
Why do you need this feature, what's the use case?
Requested by @pravindee in Telegram.
Is there a workaround currently?
No.
Steps to reproduce: class TLD(Model): ...
What did you expect to happen: a tld table
What actually happened: a "TLD" table
master? yes

Steps to reproduce: Run dipdup init
with incorrect TzKT URL
What did you expect to happen: Fixing the URL helps.
What actually happened: The invalid response was cached; the init
command fails until the cache clear command is invoked.
master? any

Steps to reproduce:
Use jobs in 5.0.0rc3
What did you expect to happen:
Not crashing when running
What actually happened:
getting:
File "/home/veqtor/.local/lib/python3.10/site-packages/dipdup/config.py", line 1207, in load
config = cls(**json_config)
File "<string>", line 17, in __init__
File "pydantic/dataclasses.py", line 100, in pydantic.dataclasses._generate_pydantic_post_init._pydantic_post_init
# +-------+-------+-------+
pydantic.error_wrappers.ValidationError: 15 validation errors for DipDupConfig
jobs -> backup_offchain_data
'>' not supported between instances of 'map' and 'int' (type=type_error)
jobs -> index_latest_merkle_trees
'>' not supported between instances of 'map' and 'int' (type=type_error)
jobs -> check_if_main
'>' not supported between instances of 'map' and 'int' (type=type_error)
jobs -> fetch_merkle_claims
'>' not supported between instances of 'map' and 'int' (type=type_error)
master?

Steps to reproduce:
What did you expect to happen:
Expected indexing to proceed without issue, as it did in my local setup
What actually happened:
Connection to the database fails, producing the following error:
asyncpg.exceptions.ProtocolViolationError: unsupported startup parameter: search_path
master?

Hello,
Is it possible to query the USD quote for the current block level inside DipDup?
Thanks and have a nice day!
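As far as these issues show, there is no built-in helper for this; a sketch of fetching the quote for a given level straight from TzKT's REST quotes endpoint (treat the exact parameters as an assumption and check the TzKT API docs):

```python
import asyncio

import aiohttp


async def usd_quote(level: int) -> float:
    # TzKT exposes historical quotes; select only the usd field
    async with aiohttp.ClientSession() as session:
        async with session.get(
            'https://api.tzkt.io/v1/quotes',
            params={'level': str(level), 'select': 'usd'},
        ) as resp:
            data = await resp.json()
            return data[0]


print(asyncio.run(usd_quote(1406473)))
```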
When I have multiple databases connected to a Hasura instance, DipDup seems to try to apply the table customization to tables in the other database as well, and then fails with 400, message='Bad Request'
when updating the Hasura metadata on those tables.
Excerpt of the log:
tezland-indexer | INFO dipdup.hasura Waiting for Hasura instance to be ready
tezland-indexer | INFO dipdup.hasura Connected to Hasura v2.4.0
tezland-indexer | INFO dipdup.hasura Fetching existing metadata
tezland-indexer | INFO dipdup.hasura Generating Hasura metadata based on project models
tezland-indexer | INFO dipdup.hasura Found 1 regular and materialized views
tezland-indexer | INFO dipdup.hasura Replacing metadata
tezland-indexer | INFO dipdup.hasura Applying `contract_metadata` table customization
tezland-indexer | WARNING dipdup.http HTTP request attempt 1/3 failed: 400, message='Bad Request', url=URL('http://tezland-hasura:8080/v1/metadata')
tezland-indexer | INFO dipdup.http Waiting 1.0 seconds before retry
tezland-indexer | WARNING dipdup.http HTTP request attempt 2/3 failed: 400, message='Bad Request', url=URL('http://tezland-hasura:8080/v1/metadata')
tezland-indexer | INFO dipdup.http Waiting 1.0 seconds before retry
tezland-hasura | {"type":"http-log","timestamp":"2022-04-11T20:49:19.714+0000","level":"error","detail":{"operation":{"user_vars":{"x-hasura-role":"admin"},"error":{"path":"$.args","error":"table \"contract_metadata\" does not exist in source: default","code":"not-exists"},"request_id":"153ee1e6-68fa-4cfa-afa2-fc5404c55066","response_size":109,"query":{"args":{"source":"default","configuration":{"identifier":"contractMetadataByPk","custom_root_fields":{"insert":"insertContractMetadata","select_aggregate":"contractMetadataAggregate","insert_one":"insertContractMetadataOne","select_by_pk":"contractMetadataByPk","select":"contractMetadata","delete":"deleteContractMetadata","update":"updateContractMetadata","delete_by_pk":"deleteContractMetadataByPk","update_by_pk":"updateContractMetadataByPk"},"custom_column_names":{"retry_count":"retryCount","status":"status","link":"link","update_id":"updateId","network":"network","updated_at":"updatedAt","contract":"contract","created_at":"createdAt","metadata":"metadata","id":"id"}},"table":{"schema":"public","name":"contract_metadata"}},"type":"pg_set_table_customization"},"request_mode":"error"},"request_id":"153ee1e6-68fa-4cfa-afa2-fc5404c55066","http_info":{"status":400,"http_version":"HTTP/1.1","url":"/v1/metadata","ip":"192.168.160.15","method":"POST","content_encoding":null}}}
tezland-indexer | WARNING dipdup.http HTTP request attempt 3/3 failed: 400, message='Bad Request', url=URL('http://tezland-hasura:8080/v1/metadata')
tezland-indexer | INFO dipdup.http Waiting 1.0 seconds before retry
tezland-hasura | {"type":"http-log","timestamp":"2022-04-11T20:49:20.718+0000","level":"error","detail":{"operation":{"user_vars":{"x-hasura-role":"admin"},"error":{"path":"$.args","error":"table \"contract_metadata\" does not exist in source: default","code":"not-exists"},"request_id":"ef5bfc54-e551-4c78-8019-b21c82994078","response_size":109,"query":{"args":{"source":"default","configuration":{"identifier":"contractMetadataByPk","custom_root_fields":{"insert":"insertContractMetadata","select_aggregate":"contractMetadataAggregate","insert_one":"insertContractMetadataOne","select_by_pk":"contractMetadataByPk","select":"contractMetadata","delete":"deleteContractMetadata","update":"updateContractMetadata","delete_by_pk":"deleteContractMetadataByPk","update_by_pk":"updateContractMetadataByPk"},"custom_column_names":{"retry_count":"retryCount","status":"status","link":"link","update_id":"updateId","network":"network","updated_at":"updatedAt","contract":"contract","created_at":"createdAt","metadata":"metadata","id":"id"}},"table":{"schema":"public","name":"contract_metadata"}},"type":"pg_set_table_customization"},"request_mode":"error"},"request_id":"ef5bfc54-e551-4c78-8019-b21c82994078","http_info":{"status":400,"http_version":"HTTP/1.1","url":"/v1/metadata","ip":"192.168.160.15","method":"POST","content_encoding":null}}}
tezland-hasura | {"type":"http-log","timestamp":"2022-04-11T20:49:19.714+0000","level":"info","detail":{"operation":{"request_id":"95407b7d-d381-4e4d-883f-5b8437a0c8c4","response_size":36,"request_mode":"non-graphql"},"request_id":"95407b7d-d381-4e4d-883f-5b8437a0c8c4","http_info":{"status":200,"http_version":"HTTP/1.1","url":"/healthz","ip":"192.168.160.16","method":"GET","content_encoding":null}}}
tezland-indexer | Traceback (most recent call last):
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/hasura.py", line 166, in _hasura_request
tezland-indexer | result = await self.request(
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/http.py", line 59, in request
tezland-indexer | return await self._http.request(method, url, cache, weight, **kwargs)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/http.py", line 212, in request
tezland-indexer | return await self._retry_request(method, url, weight, **kwargs)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/http.py", line 136, in _retry_request
tezland-indexer | raise e
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/http.py", line 128, in _retry_request
tezland-indexer | return await self._request(
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/http.py", line 177, in _request
tezland-indexer | async with self._session.request(
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 1138, in __aenter__
tezland-indexer | self._resp = await self._coro
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 640, in _request
tezland-indexer | resp.raise_for_status()
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1004, in raise_for_status
tezland-indexer | raise ClientResponseError(
tezland-indexer | aiohttp.client_exceptions.ClientResponseError: 400, message='Bad Request', url=URL('http://tezland-hasura:8080/v1/metadata')
tezland-indexer |
tezland-indexer | The above exception was the direct cause of the following exception:
tezland-indexer |
tezland-indexer | Traceback (most recent call last):
tezland-indexer | File "/usr/local/bin/dipdup", line 8, in <module>
tezland-indexer | sys.exit(cli())
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 1150, in __call__
tezland-indexer | return anyio.run(self._main, main, args, kwargs, **({"backend":_anyio_backend} if _anyio_backend is not None else {}))
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/anyio/_core/_eventloop.py", line 56, in run
tezland-indexer | return asynclib.run(func, *args, **backend_options)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 233, in run
tezland-indexer | return native_run(wrapper(), debug=debug)
tezland-indexer | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
tezland-indexer | return loop.run_until_complete(main)
tezland-indexer | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
tezland-indexer | return future.result()
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 228, in wrapper
tezland-indexer | return await func(*args)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 1153, in _main
tezland-indexer | return await main(*args, **kwargs)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 1074, in main
tezland-indexer | rv = await self.invoke(ctx)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 1684, in invoke
tezland-indexer | return await _process_result(await sub_ctx.command.invoke(sub_ctx))
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 1420, in invoke
tezland-indexer | return await ctx.invoke(self.callback, **ctx.params)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/asyncclick/core.py", line 774, in invoke
tezland-indexer | rv = await rv
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/cli.py", line 87, in wrapper
tezland-indexer | await fn(*args, **kwargs)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/cli.py", line 205, in run
tezland-indexer | await dipdup.run()
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/dipdup.py", line 354, in run
tezland-indexer | await self._set_up_hasura(stack)
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/dipdup.py", line 464, in _set_up_hasura
tezland-indexer | await hasura_gateway.configure()
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/hasura.py", line 134, in configure
tezland-indexer | await self._apply_table_customization()
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/hasura.py", line 425, in _apply_table_customization
tezland-indexer | await self._hasura_request(
tezland-indexer | File "/usr/local/lib/python3.10/site-packages/dipdup/hasura.py", line 174, in _hasura_request
tezland-indexer | raise HasuraError(f'{e.status} {e.message}') from e
tezland-indexer | dipdup.exceptions.HasuraError: 400 Bad Request
tezland-indexer | ________________________________________________________________________________
tezland-indexer |
tezland-indexer | Failed to configure Hasura: 400 Bad Request
tezland-indexer |
tezland-indexer | Check out Hasura logs for more information.
tezland-indexer |
tezland-indexer | GraphQL integration docs: https://docs.dipdup.net/graphql/
tezland-indexer |
tezland-hasura | {"type":"http-log","timestamp":"2022-04-11T20:49:21.724+0000","level":"error","detail":{"operation":{"user_vars":{"x-hasura-role":"admin"},"error":{"path":"$.args","error":"table \"contract_metadata\" does not exist in source: default","code":"not-exists"},"request_id":"ef98610b-7e2e-48a4-890b-21c4ed587fff","response_size":109,"query":{"args":{"source":"default","configuration":{"identifier":"contractMetadataByPk","custom_root_fields":{"insert":"insertContractMetadata","select_aggregate":"contractMetadataAggregate","insert_one":"insertContractMetadataOne","select_by_pk":"contractMetadataByPk","select":"contractMetadata","delete":"deleteContractMetadata","update":"updateContractMetadata","delete_by_pk":"deleteContractMetadataByPk","update_by_pk":"updateContractMetadataByPk"},"custom_column_names":{"retry_count":"retryCount","status":"status","link":"link","update_id":"updateId","network":"network","updated_at":"updatedAt","contract":"contract","created_at":"createdAt","metadata":"metadata","id":"id"}},"table":{"schema":"public","name":"contract_metadata"}},"type":"pg_set_table_customization"},"request_mode":"error"},"request_id":"ef98610b-7e2e-48a4-890b-21c4ed587fff","http_info":{"status":400,"http_version":"HTTP/1.1","url":"/v1/metadata","ip":"192.168.160.15","method":"POST","content_encoding":null}}}
The contract_metadata
table is in another database (same Postgres instance) that is also connected to the Hasura instance.
Steps to reproduce:
dipdup -l src/zindexer/logger-local.yml -c src/zindexer/dipdup.yml -c src/zindexer/dipdup-local.yml run
What did you expect to happen:
A successful write to the database
What actually happened:
Got error tortoise.exceptions.TransactionManagementError: current transaction is aborted, commands ignored until end of transaction block
master?

2022-02-09 12:27:35,699 - dipdup.http - DEBUG - HTTP request attempt 1/3
2022-02-09 12:27:35,699 - dipdup.http - DEBUG - Calling `https://crxatorz-test.hasura.app/v1/graphql`
2022-02-09 12:27:35,860 - dipdup.http - DEBUG - Closing gateway session (https://crxatorz-test.hasura.app)
2022-02-09 12:27:35,861 - apscheduler.scheduler - INFO - Scheduler has been shut down
2022-02-09 12:27:35,861 - dipdup.http - DEBUG - Closing gateway session (https://api.hangzhou2net.tzkt.io)
Traceback (most recent call last):
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 36, in translate_exceptions_
return await func(self, *args)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 185, in execute_query
rows = await connection.fetch(*params)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncpg/connection.py", line 601, in fetch
return await self._execute(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncpg/connection.py", line 1639, in _execute
result, _ = await self.__execute(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncpg/connection.py", line 1664, in __execute
return await self._do_execute(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncpg/connection.py", line 1711, in _do_execute
result = await executor(stmt, None)
File "asyncpg/protocol/protocol.pyx", line 201, in bind_execute
asyncpg.exceptions.InFailedSQLTransactionError: current transaction is aborted, commands ignored until end of transaction block
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/context.py", line 385, in _callback_wrapper
yield
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/context.py", line 335, in fire_handler
await handler_config.callback_fn(new_ctx, *args, **kwargs)
File "/Users/andronov04/crxatorz/projects/crxatorz-indexer/src/zindexer/handlers/on_update_profile.py", line 27, in on_update_profile
user, _ = await models.Users.update_or_create(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/models.py", line 1094, in update_or_create
return await cls.get_or_create(defaults, db, **kwargs)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/models.py", line 1058, in get_or_create
return await cls.filter(**kwargs).using_db(connection).get(), False
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/queryset.py", line 1006, in _execute
instance_list = await self._db.executor_class(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/backends/base/executor.py", line 130, in execute_select
_, raw_results = await self.db.execute_query(query.get_sql())
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/tortoise/backends/asyncpg/client.py", line 42, in translate_exceptions_
raise TransactionManagementError(exc)
tortoise.exceptions.TransactionManagementError: current transaction is aborted, commands ignored until end of transaction block
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/andronov04/venvs/crxatorz-indexer/bin/dipdup", line 5, in <module>
cli()
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 1150, in __call__
return anyio.run(self._main, main, args, kwargs, **({"backend":_anyio_backend} if _anyio_backend is not None else {}))
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/anyio/_core/_eventloop.py", line 56, in run
return asynclib.run(func, *args, **backend_options)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 233, in run
return native_run(wrapper(), debug=debug)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 612, in run_until_complete
return future.result()
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 228, in wrapper
return await func(*args)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 1153, in _main
return await main(*args, **kwargs)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 1074, in main
rv = await self.invoke(ctx)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 1684, in invoke
return await _process_result(await sub_ctx.command.invoke(sub_ctx))
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 1420, in invoke
return await ctx.invoke(self.callback, **ctx.params)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/asyncclick/core.py", line 774, in invoke
rv = await rv
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/cli.py", line 94, in wrapper
await fn(*args, **kwargs)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/cli.py", line 215, in run
await dipdup.run()
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/dipdup.py", line 371, in run
await gather(*tasks)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/dipdup.py", line 123, in run
await gather(*tasks)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/index.py", line 206, in process
await self._synchronize(sync_level)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/index.py", line 327, in _synchronize
await self._process_level_operations(operation_subgroups)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/index.py", line 378, in _process_level_operations
await self._call_matched_handler(handler_config, operation_subgroup, args)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/index.py", line 517, in _call_matched_handler
await self._ctx.fire_handler(
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/context.py", line 104, in fire_handler
await self.callbacks.fire_handler(self, name, index, datasource, fmt, *args, **kwargs)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/context.py", line 335, in fire_handler
await handler_config.callback_fn(new_ctx, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/andronov04/venvs/crxatorz-indexer/lib/python3.8/site-packages/dipdup/context.py", line 392, in _callback_wrapper
raise CallbackError(kind, name) from e
dipdup.exceptions.CallbackError: ('handler', 'on_update_profile')
________________________________________________________________________________
`on_update_profile` handler callback execution failed.
config export output:

advanced:
  early_realtime: true
  merge_subscriptions: false
  oneshot: false
  postpone_jobs: false
  reindex: {}
contracts:
  tzc_profile:
    address: ${TZC_PROFILE:-KT1XZ9rgxb2PcNxRgX2CEA9ZiAfiVT6GUB61}
    typename: tzc_profile
database:
  kind: postgres
  host: ....amazonaws.com
  port: 5432
  ...
datasources:
  tzkt_mainnet:
    kind: tzkt
    url: ${TZKT_URL:-https://api.hangzhou2net.tzkt.io}
hooks: {}
indexes:
  z_mainnet:
    contracts:
      - tzc_profile
    datasource: tzkt_mainnet
    first_level: 0
    handlers:
      - callback: on_update_profile
        pattern:
          - destination: tzc_profile
            entrypoint: update
            optional: false
            type: transaction
    kind: operation
    last_level: 0
    types:
      - transaction
jobs: {}
package: zindexer
spec_version: '1.2'
templates: {}
models.py:

from enum import IntEnum

from tortoise import Model
from tortoise import fields


class SwapStatus(IntEnum):
    ACTIVE = 0
    FINISHED = 1
    CANCELED = 2


class Users(Model):
    address = fields.CharField(36, pk=True)
    username = fields.CharField(max_length=36)
    timestamp = fields.DatetimeField()
schema export output:

CREATE TABLE "dipdup_contract" (
"name" VARCHAR(256) NOT NULL PRIMARY KEY,
"address" VARCHAR(256) NOT NULL,
"typename" VARCHAR(256),
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE "dipdup_head" (
"name" VARCHAR(256) NOT NULL PRIMARY KEY,
"level" INT NOT NULL,
"hash" VARCHAR(64) NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE "dipdup_index" (
"name" VARCHAR(256) NOT NULL PRIMARY KEY,
"type" VARCHAR(9) NOT NULL /* operation: operation\nbig_map: big_map\nhead: head */,
"status" VARCHAR(8) NOT NULL DEFAULT 'NEW' /* NEW: NEW\nSYNCING: SYNCING\nREALTIME: REALTIME\nROLLBACK: ROLLBACK\nONESHOT: ONESHOT */,
"config_hash" VARCHAR(256) NOT NULL,
"template" VARCHAR(256),
"template_values" JSON,
"level" INT NOT NULL DEFAULT 0,
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE "dipdup_schema" (
"name" VARCHAR(256) NOT NULL PRIMARY KEY,
"hash" VARCHAR(256) NOT NULL,
"reindex" VARCHAR(40) /* MANUAL: triggered manually from callback\nMIGRATION: applied migration requires reindexing\nROLLBACK: reorg message received and can't be processed\nCONFIG_HASH_MISMATCH: index config has been modified\nSCHEMA_HASH_MISMATCH: database schema has been modified\nBLOCK_HASH_MISMATCH: block hash mismatch, missed rollback when DipDup was stopped\nMISSING_INDEX_TEMPLATE: index template is missing, can't restore index state */,
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"updated_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE "users" (
"address" VARCHAR(36) NOT NULL PRIMARY KEY,
"username" VARCHAR(36) NOT NULL,
"timestamp" TIMESTAMP NOT NULL
);
CREATE TABLE "script" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"creator_id" VARCHAR(36) NOT NULL REFERENCES "users" ("address") ON DELETE CASCADE
);
CREATE TABLE "token" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
"timestamp" TIMESTAMP NOT NULL,
"creator_id" VARCHAR(36) NOT NULL REFERENCES "users" ("address") ON DELETE CASCADE,
"script_id" BIGINT NOT NULL REFERENCES "script" ("id") ON DELETE CASCADE
);
CREATE OR REPLACE VIEW dipdup_head_status AS
SELECT
name,
CASE
WHEN timestamp < NOW() - interval '3 minutes' THEN 'OUTDATED'
ELSE 'OK'
END AS status
FROM
dipdup_head;
What feature would you like to see in DipDup?
Ability to skip source operations during sync.
Why do you need this feature, what's the use case?
Faster sync in some cases.
Is there a workaround currently?
No.
Tx of interest:
https://api.tzkt.io/v1/operations/transactions?target=KT1E8Qzgx3C5AAE4iGuXvqSQjdd21LK2aXAk&entrypoint=sell&level=1406473
Detailed tx dump at:
https://api.tzkt.io/v1/operations/transactions/op86LcibGm9cYzAojPw9Tn284QeKFZfJCZFW8JoKQtgDb94CH36
DipDup version: dipdup>=4.0.0-rc2
Traceback (most recent call last):
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/exceptions.py", line 70, in wrap
yield
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/cli.py", line 84, in wrapper
await fn(*args, **kwargs)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/cli.py", line 203, in run
await dipdup.run()
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/dipdup.py", line 367, in run
await gather(*tasks)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/dipdup.py", line 119, in run
await gather(*tasks)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/index.py", line 202, in process
await self._synchronize(sync_level)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/index.py", line 323, in _synchronize
await self._process_level_operations(operation_subgroups)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/index.py", line 365, in _process_level_operations
matched_handlers += await self._match_operation_subgroup(operation_subgroup)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/index.py", line 446, in _match_operation_subgroup
args = await self._prepare_handler_args(handler_config, matched_operations)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/index.py", line 483, in _prepare_handler_args
storage = deserialize_storage(operation_data, storage_type)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/datasources/tzkt/models.py", line 162, in deserialize_storage
_process_plain_storage(
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/datasources/tzkt/models.py", line 144, in _process_plain_storage
_apply_bigmap_diffs(storage_dict, bigmap_diffs, '', is_array)
File "/home/pravin/.local/lib/python3.8/site-packages/dipdup/datasources/tzkt/models.py", line 77, in _apply_bigmap_diffs
bigmap_dict[bigmap_key][key] = diff['content']['value'] # type: ignore
TypeError: unhashable type: 'dict'
I think the reason for the issue is that the diff key type is assumed from the storage type: if the storage type is plain, the diff key is also assumed to be plain. However, this is not the case.
I think the storage and big map diffs need to be handled independently.
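To make the failure concrete, a simplified illustration (the data shapes are made up, not taken from the actual operation): a big map whose Michelson key is a pair arrives from TzKT as a dict, and a dict cannot be used as a Python dict key.

```python
# A diff whose key is a Michelson pair, serialized as a dict
diff = {'content': {'key': {'address': 'tz1abc', 'nat': '42'}, 'value': '1'}}
bigmap_dict: dict = {}

key = diff['content']['key']
try:
    bigmap_dict[key] = diff['content']['value']
except TypeError as e:
    print(e)  # unhashable type: 'dict'
```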
What feature would you like to see in DipDup?
Full support of Windows, macOS and non-Linux UNIX-like distros.
Why do you need this feature, what's the use case?
Some folks want to run DipDup on exotic OSes.
Is there a workaround currently?
WSL for win32, plug-and-pray for UNIX-like.
Steps to reproduce:
Init existing sqlite schema
What did you expect to happen:
Nothing
What actually happened:
Crash
master? any

Fixed in 47c76f9, backport!
Would be nice to have.
Thanks btw, very useful tool.
I've encountered some reorg strangeness lately:
Reorg/rollback where to > from:
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, DATA, 2271111 -> 2271112
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, REORG, 2271112 -> 2271111
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, DATA, 2271111 -> 2271111
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: operation, REORG, 2271037 -> 2271111
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Emitting rollback from 2271037 to 2271111
tezland-indexer | WARNING dipdup Datasource `https://api.tzkt.io` rolled back: 2271037 -> 2271111
Reorg where from is None:
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, DATA, 2272087 -> 2272088
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, REORG, 2272088 -> 2272087
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, DATA, 2272087 -> 2272087
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: operation, REORG, None -> 2272087
tezland-indexer | INFO dipdup.tzkt tzkt_mainnet: Realtime message received: head, DATA, 2272087 -> 2272088
No idea if these are really problematic; just leaving them here so they don't get lost.
DipDup 4.2.7.
Thanks!
What feature would you like to see in DipDup?
New command to start fresh DipDup project interactively.
Why do you need this feature, what's the use case?
Mostly for new projects. Optionally, to expand existing ones.
Is there a workaround currently?
Sorta: the cookiecutter template.
Hi Team,
I have seen that you have added the MIT license to most other dipdup projects. Is there an intention to add MIT here as well? I would welcome a LICENSE file in the repository if that is the case.
It is important for properly using and attributing your work.
Update: found it here. It would be great to actually add the LICENSE text as well, since that is the actual license and not the SPDX short identifier; otherwise it is not clear whether you are referring to it.
Best regards
Carlo
Steps to reproduce: deploy hasura/graphql-engine:v2.2.2,
run hasura configure --force
What did you expect to happen: all good
What actually happened: Hasura does not accept metadata generated by DipDup
timescale/timescaledb:latest-pg13
>=2.3.0
master? y

Steps to reproduce:
Add some characters that need to be escaped to the database credentials.
What did you expect to happen:
Nothing unexpected.
What actually happened:
They are not escaped; Tortoise passes invalid credentials to the driver.
master? any

Steps to reproduce:
Use a big map index, wait for connection loss (1-12h)
What did you expect to happen:
Not crashing
What actually happened:
When the connection is lost and then reestablished, the big map index re-runs for the current level, causing:
RuntimeError: Level of big map batch must be higher than index state level: 2122233 <= 2122233
master? not always

I think it happens when restarting dipdup without a full reindex.
Not sure this is the correct way to get the current block level; maybe there's a better way to get it from the database?
async def on_synchronized(
    ctx: HookContext,
) -> None:
    await ctx.execute_sql('on_synchronized')
    level = ctx.get_tzkt_datasource("tzkt_mainnet")._level.get(MessageType.head)
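An alternative sketch that avoids reaching into the private _level attribute: read the last head level from the dipdup_head table, whose schema appears earlier in this section. I'm assuming dipdup.models exposes a Head model for that table; verify against your DipDup version.

```python
from typing import Optional

from dipdup.models import Head  # assumed model for the dipdup_head table


async def current_level() -> Optional[int]:
    # Highest level among all datasource heads DipDup has seen
    head = await Head.all().order_by('-level').first()
    return head.level if head else None
```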