
diskquota's Issues

Add support for querying blackmap.

Currently, diskquota doesn't support querying blackmap entries. It would be useful to have a function that helps engineers debug the extension and lets users view blocked queries.

e.g.,

SELECT diskquota.get_black_map();
  target oid  |  databaseoid  |  tablespaceoid  |  targettype  
---------------------------------------------------------------
  some_oid    |  some_oid     |  some_oid       |   ROLE_QUOTA
...

cc1: some warnings being treated as errors

diskquota.c: In function ‘dq_object_access_hook’:
diskquota.c:1039:20: error: ‘MyProc’ undeclared (first use in this function)
rc = WaitLatch(&MyProc->procLatch,
^~~~~~
diskquota.c:1041:16: error: ‘PG_WAIT_EXTENSION’ undeclared (first use in this function)
100L, PG_WAIT_EXTENSION);
^~~~~~~~~~~~~~~~~
diskquota.c:1039:9: error: too many arguments to function ‘WaitLatch’
rc = WaitLatch(&MyProc->procLatch,
^~~~~~~~~
postgresql/server/storage/latch.h:136:12: note: declared here
extern int WaitLatch(volatile Latch *latch, int wakeEvents, long timeout);
^~~~~~~~~
cc1: some warnings being treated as errors
make: *** [diskquota.o] Error 1

How should I deal with this problem?
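The errors suggest diskquota is being built against PostgreSQL 9.x headers: there WaitLatch() takes three arguments (the declaration shown in latch.h), and PG_WAIT_EXTENSION only exists from PostgreSQL 10 on; MyProc is declared in storage/proc.h. A hedged sketch of a version guard (assuming PostgreSQL server headers; not compilable standalone):

```c
#include "storage/latch.h"
#include "storage/proc.h"   /* declares MyProc */
#if PG_VERSION_NUM >= 100000
#include "pgstat.h"         /* PG_WAIT_EXTENSION exists from PG 10 on */
#endif

#if PG_VERSION_NUM >= 100000
	rc = WaitLatch(&MyProc->procLatch,
	               WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
	               100L, PG_WAIT_EXTENSION);
#else
	/* PG 9.x: WaitLatch() has no wait_event_info argument */
	rc = WaitLatch(&MyProc->procLatch,
	               WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
	               100L);
#endif
```

Alternatively, build against the PostgreSQL/Greenplum version the branch actually targets.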

dump and restore the user configurations

Hi, when I tested dump and restore of diskquota with pg_dump/pg_restore, I found that what is saved in diskquota.quota_config and diskquota.target (the "user configurations", for short) is OIDs rather than object names.

I understand that storing OIDs in these tables improves processing efficiency, but when I drop database objects and then restore them from the dump file, the objects are recreated with new OIDs, which makes the user configurations invalid.

test1=# create schema s1;
CREATE SCHEMA
test1=# select diskquota.set_schema_quota('s1', '1 MB');
 set_schema_quota
------------------

(1 row)

test1=# select * from diskquota.config;
 targetoid | quotatype | quotalimitmb | segratio
-----------+-----------+--------------+----------
     16739 |         0 |            1 |        0
(1 row)

test1=# select oid from pg_namespace where nspname like 's1';
  oid
-------
 16793
(1 row)

...

pg_dump -Fc test1 > /tmp/test1.dump
...
dropdb test1
pg_restore -C -d postgres /tmp/test1.dump

...
test1=# select * from diskquota.config;
 targetoid | quotatype | quotalimitmb | segratio
-----------+-----------+--------------+----------
     16739 |         0 |            1 |        0     <----this quota setting is useless
(1 row)

test1=# select oid from pg_namespace where nspname like 's1';
  oid
-------
 16831   <--- new oid
(1 row)

So, is this a problem? How can it be solved? Or should I write a small piece of custom dump code that converts the data into quota-setting SQL statements?

string may be too big

How about, instead of constructing a large string in

diskquota/src/quotamodel.c

Lines 1158 to 1190 in a939bc0

while ((tsentry = hash_seq_search(&iter)) != NULL)
{
    int seg_st = TableSizeEntrySegidStart(tsentry);
    int seg_ed = TableSizeEntrySegidEnd(tsentry);
    for (int i = seg_st; i < seg_ed; i++)
    {
        /* delete dropped table from both table_size_map and table table_size */
        if (!get_table_size_entry_flag(tsentry, TABLE_EXIST))
        {
            appendStringInfo(&deleted_table_expr, "(%u,%d), ", tsentry->key.reloid, i);
            delete_statement_flag = true;
        }
        /* update the table size by delete+insert in table table_size */
        else if (TableSizeEntryGetFlushFlag(tsentry, i))
        {
            appendStringInfo(&deleted_table_expr, "(%u,%d), ", tsentry->key.reloid, i);
            appendStringInfo(&insert_statement, "(%u,%ld,%d), ", tsentry->key.reloid,
                             TableSizeEntryGetSize(tsentry, i), i);
            delete_statement_flag = true;
            insert_statement_flag = true;
            TableSizeEntryResetFlushFlag(tsentry, i);
        }
    }
    if (!get_table_size_entry_flag(tsentry, TABLE_EXIST))
    {
        hash_search(table_size_map, &tsentry->key, HASH_REMOVE, NULL);
        pg_atomic_fetch_sub_u32(diskquota_table_size_entry_num, 1);
    }
}
truncateStringInfo(&deleted_table_expr, deleted_table_expr.len - strlen(", "));
truncateStringInfo(&insert_statement, insert_statement.len - strlen(", "));
appendStringInfo(&deleted_table_expr, ")");
appendStringInfo(&insert_statement, ";");
using a view implemented like pg_stat_activity and pg_stat_get_activity, to avoid errors such as

Cannot enlarge string buffer containing 1073741807 bytes by 20 more bytes.

?

too frequent refresh

How about, instead of simply waiting on the latch and continuing on any signal that occurs in

diskquota/src/diskquota.c

Lines 489 to 523 in a939bc0

if (!diskquota_is_paused())
{
    /* Refresh quota model with init mode */
    refresh_disk_quota_model(!MyWorkerInfo->dbEntry->inited);
    MyWorkerInfo->dbEntry->inited = true;
    is_gang_destroyed = false;
}
else if (!is_gang_destroyed)
{
    DisconnectAndDestroyAllGangs(false);
    is_gang_destroyed = true;
}
worker_increase_epoch(MyWorkerInfo->dbEntry->dbid);
// GPDB6 opened a MemoryAccount for us without asking us,
// and GPDB6 did not release the MemoryAccount after SPI finished.
// Reset the MemoryAccount although we never created it.
#if GP_VERSION_NUM < 70000
MemoryAccounting_Reset();
#endif /* GP_VERSION_NUM */
if (DiskquotaLauncherShmem->isDynamicWorker)
{
    break;
}
CHECK_FOR_INTERRUPTS();
/*
 * Background workers mustn't call usleep() or any direct equivalent:
 * instead, they may wait on their process latch, which sleeps as
 * necessary, but is awakened if postmaster dies. That way the
 * background process goes away immediately in an emergency.
 */
rc = DiskquotaWaitLatch(&MyProc->procLatch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
                        diskquota_naptime * 1000L);
ResetLatch(&MyProc->procLatch);

using something like https://github.com/greenplum-db/gpdb/blob/6342af5ead95f274e6affae66f2cebd8e4cd0fc6/src/backend/storage/lmgr/condition_variable.c#L170-L232
to avoid situations where, under frequent signal delivery, diskquota calls refresh_disk_quota_model more often than diskquota_naptime allows?

diskquota not working after reaching max_table_segments limit

  1. Set the minimum cache size:
gpconfig -c diskquota.max_table_segments -v 6400
  2. Run SQL:
CREATE EXTENSION diskquota;
CREATE EXTENSION
SELECT diskquota.set_schema_quota(current_schema, '1MB');
 set_schema_quota 
------------------
 
(1 row)

SET client_min_messages = WARNING;
SET
CREATE TABLE t(a int, b int) DISTRIBUTED BY (a) PARTITION BY RANGE (b) (START (1) END (100) EVERY (1));
CREATE TABLE
CREATE TABLE tt(a int, b int) DISTRIBUTED BY (a) PARTITION BY RANGE (b) (START (1) END (100) EVERY (1));
CREATE TABLE
RESET client_min_messages;
RESET
SELECT diskquota.wait_for_worker_new_epoch();
 wait_for_worker_new_epoch 
---------------------------
 t
(1 row)

SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 postgres    |      21746 |           1 |                0
(1 row)

INSERT INTO t SELECT i, i FROM generate_series(1, 9) i;
INSERT 0 9
INSERT INTO tt SELECT i, i FROM generate_series(1, 9) i;
INSERT 0 9
SELECT diskquota.wait_for_worker_new_epoch();
 wait_for_worker_new_epoch 
---------------------------
 t
(1 row)

SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 postgres    |      21746 |           1 |           294912
(1 row)

\dt+ tt_1_prt_1
                            List of relations
  Schema  |    Name    | Type  |  Owner   | Storage | Size  | Description 
----------+------------+-------+----------+---------+-------+-------------
 postgres | tt_1_prt_1 | table | postgres | heap    | 32 kB | 
(1 row)

INSERT INTO tt SELECT i, 1 FROM generate_series(1, 99999) i;
INSERT 0 99999
SELECT diskquota.wait_for_worker_new_epoch();
 wait_for_worker_new_epoch 
---------------------------
 t
(1 row)

SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 postgres    |      21746 |           1 |           294912
(1 row)

\dt+ tt_1_prt_1
                             List of relations
  Schema  |    Name    | Type  |  Owner   | Storage |  Size   | Description 
----------+------------+-------+----------+---------+---------+-------------
 postgres | tt_1_prt_1 | table | postgres | heap    | 3840 kB | 
(1 row)

INSERT INTO tt SELECT i, 1 FROM generate_series(1, 99999) i;
INSERT 0 99999
SELECT diskquota.wait_for_worker_new_epoch();
 wait_for_worker_new_epoch 
---------------------------
 t
(1 row)

SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 postgres    |      21746 |           1 |           294912
(1 row)

\dt+ tt_1_prt_1
                             List of relations
  Schema  |    Name    | Type  |  Owner   | Storage |  Size   | Description 
----------+------------+-------+----------+---------+---------+-------------
 postgres | tt_1_prt_1 | table | postgres | heap    | 7392 kB | 
(1 row)

Warning: out of shared memory

Issue

Some GPDB clusters generate warnings under normal conditions (with no significant external influence). The warnings are of the form:

WARNING:  out of shared memory
WARNING:  Share memory is not enough for active tables.

This is a known issue mentioned in the README. The document also offers a solution: increase diskquota.max_active_tables GUC cluster-wide (and restart the database).

However, this solution does not work: no matter how huge the mentioned value is, the warnings do not disappear.

Cause

I have researched the code and identified the probable cause of the problem.

The warning is generated at

entry = hash_search(active_tables_map, &item, HASH_ENTER_NULL, &found);

and its cause is a lack of shared memory to allocate a new hash map entry, which is exactly what the warning message says.

The cause of this lack of shared memory is the wrong order of operations in _PG_init(). There,

init_disk_quota_shmem();

is called, which in turn calls the DiskQuotaShmemSize() function, which uses the value of the diskquota.max_active_tables GUC:

size = add_size(size, hash_estimate_size(diskquota_max_active_tables, sizeof(DiskQuotaActiveTableEntry)));

However, this GUC value is initialized after the mentioned call from _PG_init():

DefineCustomIntVariable("diskquota.max_active_tables",

The default value of this GUC is

int diskquota_max_active_tables = 0;

which means no shared memory space is reserved for the active tables hash map.

As a result, the active tables hash map uses memory reserved elsewhere (e.g. by some other extension). On some clusters, this amount is not enough, and the warnings mentioned above are generated.

Possible solutions

Redefine GUC value / eliminate GUC

The error only affects the gpdb branch. In master, the default value of the GUC was changed to 1024 * 1024 and the error does not manifest. In fact, the diskquota.max_active_tables GUC could be eliminated and replaced with a constant, as it is not used for any other purpose (at least in the current code).

Reposition GUC initialization

The correct (intended) behaviour can be achieved by repositioning the GUC initialization so that it runs before the RequestAddinShmemSpace() call is made.

PostgreSQL install

Hello,
I apologise if this is something simple, but I am trying to install diskquota for PostgreSQL 10.23 on Alma 8 and it is failing.

In the newest repo there is no patch for the PostgreSQL database, and cmake gives the error: cannot read GP_MAJORVERSION from '/usr/include/pg_config.h'

In the old-master repo I found the patch folder and file, but when trying to apply it to the PostgreSQL source I get:
error: patch failed: src/backend/storage/smgr/smgr.c:411
error: src/backend/storage/smgr/smgr.c: patch does not apply

Please help 👯

diskquota counts sizes even if the transaction is rolled back

postgres=> CREATE EXTENSION diskquota;
CREATE EXTENSION
postgres=> SELECT diskquota.set_schema_quota('public', '1MB');
 set_schema_quota 
------------------
 
(1 row)

postgres=> CREATE TABLE a(i int);
CREATE TABLE
postgres=> SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 public      |       2200 |           1 |                0
(1 row)

postgres=> BEGIN;
BEGIN
postgres=> INSERT INTO a SELECT generate_series(1, 100000);
INSERT 0 100000
postgres=> ROLLBACK;
ROLLBACK
postgres=> SELECT * FROM diskquota.show_fast_schema_quota_view;
 schema_name | schema_oid | quota_in_mb | nspsize_in_bytes 
-------------+------------+-------------+------------------
 public      |       2200 |           1 |          3932160
(1 row)

simplify error message

diskquota/src/diskquota.c

Lines 858 to 864 in a939bc0

ret_code = SPI_execute(sql, false, 0);
if (ret_code != SPI_OK_UTILITY)
{
	int saved_errno = errno;
	ereport(ERROR, (errmsg("[diskquota launcher] SPI_execute error, sql: \"%s\", reason: %s, ret_code: %d.",
	                       sql, strerror(saved_errno), ret_code)));
}

Why not, here and in other places, simply use SPI_result_code_string(ret_code) instead? For example:

ret_code = SPI_execute(sql, false, 0);
if (ret_code != SPI_OK_UTILITY)
{
	ereport(ERROR, (errmsg("[diskquota launcher] SPI_execute error, sql: \"%s\", reason: %s.",
			                     sql, SPI_result_code_string(ret_code))));
}

Build OS and cmake rpm

I'm curious: which operating system are you building on? I usually build on CentOS 7, but couldn't find any prebuilt cmake 3.20+ RPMs. Did you find any for CentOS 7, do you build cmake yourself, or do you just use an OS that doesn't have this problem?

Misspelling of words

branch: gpdb
quotamodel.c: line 565
misspelling: "reset owenr"
correct spelling: "reset owner"

about diskquota

I have already applied the patch to the PostgreSQL source and installed the database, but when I build the diskquota source, I get this error:

quotamodel.c: In function 'calculate_table_disk_usage':
quotamodel.c:541:21: error: 'FormData_pg_class' has no member named 'oid'
relOid = classForm->oid;
make: *** [quotamodel.o] Error 1

I hope someone can help me, thanks.
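This error usually means the build is against PostgreSQL headers older than 12: only from PostgreSQL 12 on is the row OID an ordinary column of pg_class (so classForm->oid compiles); before that it was a hidden system column fetched from the tuple. A hedged sketch of a version guard (assuming PostgreSQL server headers; not compilable standalone):

```c
	Form_pg_class classForm = (Form_pg_class) GETSTRUCT(tuple);
	Oid           relOid;

#if PG_VERSION_NUM >= 120000
	relOid = classForm->oid;         /* oid is a regular column in PG 12+ */
#else
	relOid = HeapTupleGetOid(tuple); /* hidden oid system column in PG <= 11 */
#endif
```

Alternatively, build the branch of diskquota that matches the PostgreSQL/Greenplum version you patched.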

Which branch/tag is stable?

Hi,

I want to build and use the diskquota extension. Could you please tell me which branch/tag is stable and suited to Greenplum >= 6.17 OSS?
Thanks!

Compile alerts

branch: gpdb
diskquota.c: line 391
An inline function is not declared, resulting in a compile-time warning: "diskquota.c:391:13: warning: no previous prototype for 'isAbnormalLoopTime' [-Wmissing-prototypes]"

Should be declared near line 89.

You should not pass database name via background worker structure

You should not pass database name via background worker structure

char *dbname = MyBgworkerEntry->bgw_name;

because it is ASCII-filtered in https://github.com/greenplum-db/gpdb/blob/f182228992b62e2023e2fac5b4971406abd35c9d/src/backend/postmaster/bgworker.c#L385-L386 or in https://github.com/greenplum-db/gpdb/blob/29d2bbd37779ad54cdf922ba8dfaf42cb1678725/src/backend/postmaster/bgworker.c#L300-L301
Thus, if the database name contains non-ASCII characters, diskquota does not work with that database.

diskquota stop working after removing any extension

CREATE EXTENSION diskquota;
CREATE EXTENSION
CREATE EXTENSION gp_inject_fault;
CREATE EXTENSION
SELECT diskquota.init_table_size_table();
 init_table_size_table 
-----------------------
 
(1 row)

SELECT diskquota.set_schema_quota(current_schema, '1MB');
 set_schema_quota 
------------------
 
(1 row)

SELECT diskquota.status();
              status              
----------------------------------
 ("soft limits",on)
 ("hard limits",off)
 ("current binary version",2.2.1)
 ("current schema version",2.2)
(4 rows)

DROP EXTENSION gp_inject_fault;
DROP EXTENSION
SELECT diskquota.status();
              status              
----------------------------------
 ("soft limits",paused)
 ("hard limits",off)
 ("current binary version",2.2.1)
 ("current schema version",2.2)
(4 rows)

SELECT diskquota.resume();
 resume 
--------
 
(1 row)

SELECT diskquota.status();
              status              
----------------------------------
 ("soft limits",paused)
 ("hard limits",off)
 ("current binary version",2.2.1)
 ("current schema version",2.2)
(4 rows)

In the log:

2023-09-12 21:23:14.502529 +05,"postgres","postgres",p548094,th-680920960,"127.0.0.1","34610",2023-09-12 21:23:13 +05,0,con16,cmd27,seg-1,,,,sx1,"LOG","00000","statement: DROP EXTENSION gp_inject_fault;",,,,,,"DROP EXTENSION gp_inject_fault;",0,,"postgres.c",1668,
2023-09-12 21:23:14.504175 +05,,,p547808,th-680920960,,,,0,con4,cmd16,seg-1,,,,,"LOG","00000","[diskquota launcher]: received create/drop extension diskquota message, extension launcher",,,,,,,0,,"diskquota.c",1050,
2023-09-12 21:23:14.524626 +05,,,p548116,th-680920960,,,,0,con18,cmd5,seg-1,,,,,"LOG","00000","[diskquota] stop disk quota worker process to monitor database:postgres",,,,,,,0,,"diskquota.c",586,

Unit, regress tests completion

Environment: Docker container with CentOS 7, diskquota 2.2.2, gpdb 6.21.3.
It seems you need the gpdb source files to run "make installcheck", but I haven't seen a word about it in the README. Am I doing something wrong here?
Anyway, to run the diskquota tests I downloaded the gpdb sources, built and installed them, but still not all tests pass.

test config                   ... ok (7.47 sec)  (diff:0.07 sec)
test test_create_extension    ... ok (0.93 sec)  (diff:0.06 sec)
test test_fast_quota_view     ... FAILED (3.43 sec)  (diff:0.12 sec)
test test_relation_size       ... FAILED (1.49 sec)  (diff:0.13 sec)
test test_rejectmap           ... ok (17.88 sec)  (diff:0.10 sec)
test test_vacuum              ... FAILED (1.46 sec)  (diff:0.11 sec)
test test_truncate            ... FAILED (1.46 sec)  (diff:0.14 sec)
test test_postmaster_restart  ... ok (9.69 sec)  (diff:0.05 sec)
test test_worker_timeout      ... ok (4.20 sec)  (diff:0.05 sec)
test test_per_segment_config  ... ok (5.50 sec)  (diff:0.06 sec)
test test_relation_cache      ... ok (7.39 sec)  (diff:0.06 sec)
test test_ereport_from_seg    ... FAILED (0.45 sec)  (diff:0.10 sec)
test test_drop_extension      ... ok (0.28 sec)  (diff:0.05 sec)
test reset_config             ... ok (0.76 sec)  (diff:0.05 sec)

=======================
 5 of 14 tests failed.
=======================

What can I do to make them all pass?

2.0.0/2.0.1 use WaitLatch incorrectly, which can cause coredump

PG_FUNCTION_INFO_V1(wait_for_worker_new_epoch);
Datum
wait_for_worker_new_epoch(PG_FUNCTION_ARGS)
{
	TimestampTz start_time    = GetCurrentTimestamp();
	uint32      current_epoch = worker_get_epoch(MyDatabaseId);
	for (;;)
	{
		CHECK_FOR_INTERRUPTS();
		if (check_for_timeout(start_time)) start_time = GetCurrentTimestamp();
		uint32 new_epoch = worker_get_epoch(MyDatabaseId);
		/* Unsigned integer underflow is OK */
		if (new_epoch - current_epoch >= 2u)
		{
			PG_RETURN_BOOL(true);
		}
		/* Sleep for naptime to reduce CPU usage */
		(void)WaitLatch(&MyProc->procLatch, WL_LATCH_SET | WL_TIMEOUT, diskquota_naptime ? diskquota_naptime : 1);
		ResetLatch(&MyProc->procLatch);
	}
	PG_RETURN_BOOL(false);
}

This is the correct call:

(void)WaitLatch(&MyProc->procLatch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH, diskquota_naptime ? diskquota_naptime : 1);

Disk quota worker waits on the network indefinitely

Hi hackers!

Recently we faced a problem on one of our Greenplum production clusters: the diskquota worker got stuck waiting for messages from the network while holding locks on relations.

postgres=# select * from pg_locks where relation=57129168;
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |   pid   |           mode           | granted | fastpath | mppsessionid | mpp
iswriter | gp_segment_id
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+---------+--------------------------+---------+----------+--------------+----
---------+---------------
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 7/1474366          | 3820814 | ShareUpdateExclusiveLock | f       | f        |      6982934 | t
         |            -1
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 3/73604            |    6007 | RowExclusiveLock         | t       | f        |      3374601 | t
         |            -1
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 3/73604            |    6007 | ExclusiveLock            | t       | f        |      3374601 | t
         |            -1
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 13/39282           | 1732059 | RowExclusiveLock         | t       | f        |      3374601 | t
         |             0
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 47/195657          | 1732144 | RowExclusiveLock         | t       | f        |      3374601 | t
         |             1
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 16/255405          | 1732062 | RowExclusiveLock         | t       | f        |      3374601 | t
         |             2
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 39/181183          | 1732143 | RowExclusiveLock         | t       | f        |      3374601 | t
         |             3
 relation |  2066355 | 57129168 |      |       |            |               |         |       |          | 19/445630          | 1732061 | RowExclusiveLock         | t       | f        |      3374601 | t
         |             4

This causes the ANALYZE diskquota.table_size command to hang.
Here is the backtrace:

root@<cut>~ # bt -p 6007
[New LWP 6008]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x0000560abab2e8c7 in pq_set_nonblocking (nonblocking=nonblocking@entry=0 '\000') at pqcomm.c:892
#0  0x0000560abab2e8c7 in pq_set_nonblocking (nonblocking=nonblocking@entry=0 '\000') at pqcomm.c:892
#1  0x0000560abab2ecd3 in internal_putbytes (s=s@entry=0x7fff4d68dc0c "N\177", len=len@entry=1) at pqcomm.c:1484
#2  0x0000560abab30566 in pq_putmessage (msgtype=<optimized out>, s=0x560af7d7f448 "WARNING:  checkNetworkTimeout: interconnect may encountered a network error, please check your network  (seg105 slice1 10.192.1.67:6003 pid=1227791)\n", len=150) at pqcomm.c:1715
#3  0x0000560abab3122c in pq_endmessage (buf=buf@entry=0x7fff4d68dca0) at pqformat.c:347
#4  0x0000560abadf511d in forwardQENotices () at cdbconn.c:850
#5  0x0000560abadf66d9 in processResults (dispatchResult=dispatchResult@entry=0x560abd204338) at cdbdisp_async.c:1138
#6  0x0000560abadf75b8 in handlePollSuccess (fds=<optimized out>, pParms=<optimized out>) at cdbdisp_async.c:825
#7  checkDispatchResult (ds=0x560abcbd0b88, timeout_sec=-1) at cdbdisp_async.c:638
#8  0x0000560abadf54ae in cdbdisp_checkDispatchResult (ds=ds@entry=0x560abcbd0b88, waitMode=<optimized out>) at cdbdisp.c:133
#9  0x0000560abaae57b6 in mppExecutorFinishup (queryDesc=queryDesc@entry=0x560af6bae108) at execUtils.c:2237
#10 0x0000560abaad18c3 in standard_ExecutorEnd (queryDesc=0x560af6bae108) at execMain.c:1329
#11 0x0000560abab0af09 in _SPI_pquery (queryDesc=queryDesc@entry=0x560af6bae108, tcount=tcount@entry=0, fire_triggers=1 '\001') at spi.c:2650
#12 0x0000560abab0cfc3 in _SPI_execute_plan (plan=plan@entry=0x7fff4d68e540, paramLI=paramLI@entry=0x0, snapshot=snapshot@entry=0x0, crosscheck_snapshot=crosscheck_snapshot@entry=0x0, read_only=read_only@entry=0 '\000', fire_triggers=fire_triggers@entry=1 '\001', tcount=0) at spi.c:2321
#13 0x0000560abab0d2f3 in SPI_execute (src=0x560abcf5a2a8 "insert into diskquota.table_size values (141000603,268173312,61), (139931822,553368,134), (140985200,813920,104), (139915756,793960,99), (140967746,2125696,48), (140970111,1121832,96), (139930318,6289"..., read_only=<optimized out>, tcount=0) at spi.c:436
#14 0x00007f7d10b8effa in ?? () from /opt/greenplum-db-6/lib/postgresql/diskquota-2.0.so
#15 0x00007f7d10b87817 in disk_quota_worker_main () from /opt/greenplum-db-6/lib/postgresql/diskquota-2.0.so
#16 0x0000560ababe45f6 in StartBackgroundWorker () at bgworker.c:753
#17 0x0000560ababf0fc3 in do_start_bgworker (rw=0x560abca067e0) at postmaster.c:6142
#18 maybe_start_bgworker () at postmaster.c:6387
#19 0x0000560ababf2305 in sigusr1_handler (postgres_signal_arg=<optimized out>) at postmaster.c:5564
#20 <signal handler called>
#21 0x00007f7d17cd1cd7 in __GI___select (nfds=nfds@entry=8, readfds=readfds@entry=0x7fff4d68fdd0, writefds=writefds@entry=0x0, exceptfds=exceptfds@entry=0x0, timeout=timeout@entry=0x7fff4d68fd30) at ../sysdeps/unix/sysv/linux/select.c:41
#22 0x0000560aba8edbc2 in ServerLoop () at postmaster.c:1900
#23 0x0000560ababf3ce7 in PostmasterMain (argc=6, argv=0x560abc99a990) at postmaster.c:1526
#24 0x0000560aba8f0147 in main (argc=6, argv=0x560abc99a990) at main.c:245

Also, select pg_terminate_backend(6007); does not kill the process; restarting Greenplum does. :)
How can this be fixed? Is this some sort of bug?

user may load diskquota library without shared memory initialization

There are several commits (05da9d4 and 86ff586) in diskquota 2.2.2 that eased the extension initialization check to make it possible to update the extension. However, these changes also make it possible to run diskquota queries against an uninitialized extension, which leads to a segmentation fault on the segments because the shared structures are not initialized properly.

For example, let's set up a cluster with the diskquota 2.1 and 2.2 shared libraries.

Steps to reproduce 1:

\! gpconfig -c shared_preload_libraries -v diskquota-2.1
\! gpstop -arfq
\c
create extension diskquota version '2.1';
\! gpconfig -c shared_preload_libraries -v ''
\! gpstop -arfq
\c
alter extension diskquota update;
drop extension diskquota;
create extension diskquota version '2.2';

Steps to reproduce 2:

\! gpconfig -c shared_preload_libraries -v diskquota-2.1
\! gpstop -arfq
\c
create extension diskquota version '2.1';
\! gpconfig -c shared_preload_libraries -v ''
\! gpstop -arfq
\c
alter extension diskquota update;
select diskquota.resume();

Expected behavior:

ERROR:  [diskquota] booting 2.2.2, but diskquota-2.2 not in shared_preload_libraries. abort. (diskquota.c:224)  (seg0 slice1 127.0.1.1:6002 pid=17043) (diskquota.c:224)
CONTEXT:  SQL statement "SELECT diskquota.diskquota_fetch_table_stat(5, '{794247}'::oid[]) FROM gp_dist_random('gp_id')"
