Comments (22)
> in your use case, it is the deleted_table_expr

Yes.

from diskquota.
To avoid building a large string, maybe we should use COPY to flush to diskquota.table_size when the change set is large.
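A minimal sketch of what a COPY-based flush could look like. The staging-table name and the merge statements below are assumptions for illustration, not diskquota's actual schema or code:

```sql
-- Hypothetical sketch: bulk-load the change set with COPY into a staging
-- table, then merge it into diskquota.table_size with two constant-size
-- statements, instead of building one huge multi-row INSERT string.
CREATE TEMP TABLE table_size_changes (LIKE diskquota.table_size);

COPY table_size_changes FROM STDIN;  -- stream (tableid, size, segid) rows

-- Replace any stale rows for the affected (tableid, segid) pairs.
DELETE FROM diskquota.table_size ts
USING table_size_changes c
WHERE ts.tableid = c.tableid AND ts.segid = c.segid;

INSERT INTO diskquota.table_size SELECT * FROM table_size_changes;
```

From a C background worker the COPY step would go through the server's copy API rather than a literal STDIN stream; the point is that the statement size stays constant regardless of how large the change set is.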
> maybe we should use COPY

How can we use COPY for deleting?
Sorry, I might have made a false assumption. Just to confirm: in your use case, it is the deleted_table_expr, rather than the insert_statement, that caused this ERROR, correct?
OK. Could you please elaborate on why using views can mitigate this issue?
> why using views can mitigate this issue?

Because with a view one may execute something like

delete from diskquota.table_size where (tableid, segid) in (SELECT * FROM deleted_table_view);

where deleted_table_view is a view implemented via a C function, the way pg_stat_activity is backed by pg_stat_get_activity.
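For illustration, such a view could be declared roughly as follows. The function name, view name, and symbol are hypothetical; the pattern mirrors how pg_stat_activity wraps the C function pg_stat_get_activity:

```sql
-- Hypothetical sketch: a set-returning C function exposes the backend's
-- in-memory list of deleted tables, and a view wraps it, so the DELETE
-- statement stays constant-size no matter how large the change set is.
CREATE FUNCTION diskquota.deleted_tables()
    RETURNS TABLE (tableid oid, segid smallint)
    AS 'MODULE_PATHNAME', 'deleted_tables'
    LANGUAGE C STRICT;

CREATE VIEW deleted_table_view AS
    SELECT tableid, segid FROM diskquota.deleted_tables();

-- The flush no longer builds a long IN-list of literals:
DELETE FROM diskquota.table_size
WHERE (tableid, segid) IN (SELECT * FROM deleted_table_view);
```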
Thanks! I think that's a good idea!
I discovered what leads to this problem. The bug is deeper!

At start, diskquota tries to load the whole table into a hash map in RAM:

diskquota/src/gp_activetable.c, line 939 in a939bc0

but in our case it fails with the error

"ERROR","XX000","invalid memory alloc request size 1073741824 (context 'SPI TupTable') (mcxt.c:1357)",,,,,"SQL statement ""select tableid, size, segid from diskquota.table_size""",,0,,"mcxt.c",571,

This error is caught by the code at lines 796 to 845 in a939bc0, which then fails with:

Cannot enlarge string buffer containing 1073741807 bytes by 20 more bytes.
Instead of loading the whole table into RAM at once, I suggest doing something like the pg_prewarm extension does, or simply fetching it in portions via SPI_cursor_fetch.
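In SQL terms, the portioned fetch proposed here (SPI_cursor_open / SPI_cursor_fetch in the C worker) amounts to reading through a cursor in batches; a rough sketch, with the batch size and cursor name as assumptions:

```sql
-- Hypothetical sketch: read diskquota.table_size through a cursor in
-- fixed-size batches instead of materializing all rows at once.
BEGIN;
DECLARE table_size_cur CURSOR FOR
    SELECT tableid, size, segid FROM diskquota.table_size;
-- Repeat the FETCH until it returns no rows; each batch is inserted into
-- the hash map and its tuple table freed before the next fetch, keeping
-- memory usage bounded regardless of the total row count.
FETCH 10000 FROM table_size_cur;
CLOSE table_size_cur;
COMMIT;
```

In the C worker the same loop would call SPI_cursor_fetch repeatedly and free SPI_tuptable after each batch, so the "invalid memory alloc request" from one giant SPI TupTable can no longer occur.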
Thanks! Great analysis!
"ERROR","XX000","invalid memory alloc request size 1073741824 (context 'SPI TupTable') (mcxt.c:1357)",,,,,"SQL statement ""select tableid, size, segid from diskquota.table_size""",,0,,"mcxt.c",571,
Can you please tell me the scene of this error? This error will not be raised unless the number of entries exceeds 10^8, and we have not encountered that situation currently.
diskquota.table_size
is used to avoid scanning all tables during cluster startup. And we're considering just scanning all tables and removing diskquota.table_size
in diskquota-3.0, so that we do not need to fetch or store table size. To avoid occupying too many resources, we will scan tables in portions. I hope to get your opinion.
from diskquota.
> Can you please describe the scenario that triggers this error?

This error occurs on diskquota 2.0 when

select count(*) from diskquota.table_size;

returns 118458142.
> I hope to get your opinion.

My opinion: segments should manage quotas instead of the coordinator.
> This error occurs on diskquota 2.0 when select count(*) from diskquota.table_size; returns 118458142.

How do you insert these entries, by creating 10^8 tables?
> How do you insert these entries, by creating 10^8 tables?

It became this big after updating from 1.0 to 2.0.
> My opinion: segments should manage quotas instead of the coordinator.

A segment only has its own table-size information; it does not know the entire table's size or whether the quota limit is exceeded. So we manage the quota config on the coordinator.
> So we manage the quota config on the coordinator.

I suggest distributing the whole quota limit between all segments equally, or by some GUC, or even setting up a limit per segment.
> I suggest distributing the whole quota limit between all segments equally, or by some GUC, or even setting up a limit per segment.

Thanks for your advice. We've added a per-segment quota limit in diskquota-2.0, which requires that each segment not exceed quota_size * ratio while the whole size does not exceed quota_size. If we just dispatched quota_size * ratio to each segment, then quota_size * ratio * segment_number could exceed quota_size, and the overall quota limit would no longer be enforced. For example, with quota_size = 100 GB, ratio = 2, and 500 segments, enforcing only the per-segment limits would allow up to 100,000 GB in total.
> It became this big after updating from 1.0 to 2.0.

We added a per-segment quota in diskquota-2.0. It splits each table's size by segid, so the number of entries is multiplied by the segment count.
> It splits each table's size by segid, so the number of entries is multiplied by the segment count.

We have over 500 segments.
> We have over 500 segments.

Having too many TableSizeEntry records is a known problem in diskquota-2.x, and we plan to solve it in diskquota-3.0.