Comments (6)
Use uniqCombined instead when you store the data in a table.
Lower index_granularity to a smaller value, like 1024 or 512.
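For concreteness, a minimal sketch of what that could look like (the table and column names here, daily_uv / events / uv_state, are made up for illustration):

-- Hypothetical table storing uniqCombined states per site and day.
CREATE TABLE daily_uv
(
    dt Date,
    site_id UInt32,
    uv_state AggregateFunction(uniqCombined, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY (site_id, dt)
SETTINGS index_granularity = 1024;

-- Store partial states with the -State combinator...
INSERT INTO daily_uv
SELECT dt, site_id, uniqCombinedState(user_id)
FROM events
GROUP BY dt, site_id;

-- ...and read them back with the matching -Merge combinator.
SELECT site_id, uniqCombinedMerge(uv_state) AS uv
FROM daily_uv
WHERE site_id = 42 AND dt = '2024-01-01'
GROUP BY site_id;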
> lower index_granularity
Thank you very much. Why lower index_granularity? Is it to improve parallel processing?
Are there any other performance effects?
> Why lower index_granularity? Is it to improve parallel processing?
Those states are huge, on the order of 100x to 10,000x the size of a normal value.
If you want to read only one row with a state, that means 8192x read amplification with the default index_granularity.
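A quick way to see that amplification in practice (again assuming the hypothetical daily_uv table from above) is EXPLAIN with indexes = 1, which reports how many granules the primary key actually selects:

-- Shows Parts/Granules selected for this lookup.
EXPLAIN indexes = 1
SELECT uniqCombinedMerge(uv_state)
FROM daily_uv
WHERE site_id = 42;

With the default index_granularity = 8192, even a single matching row still pulls in a whole granule of up to 8192 of those large states, which is where the read amplification comes from.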
> Why lower index_granularity? Is it to improve parallel processing?
> Those states are huge, on the order of 100x to 10,000x the size of a normal value.
> If you want to read only one row with a state, that means 8192x read amplification with the default index_granularity.
I tested the uniqCombinedMerge function and found it to be twice as fast as uniqMerge, but this performance still doesn't meet my requirements. Should I just lower the index_granularity?
> Should I just lower the index_granularity?
Lowering index_granularity may help if your WHERE condition matches the ORDER BY key.
index_granularity = 512 is probably the lower bound you want to test.
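One way to run that test, sketched with the hypothetical names from earlier (index_granularity applies only to newly written parts, so a fresh copy of the table keeps the comparison clean):

-- Copy the structure with a smaller granularity and reload the data.
CREATE TABLE daily_uv_g512 AS daily_uv
ENGINE = AggregatingMergeTree
ORDER BY (site_id, dt)
SETTINGS index_granularity = 512;

INSERT INTO daily_uv_g512 SELECT * FROM daily_uv;

-- The WHERE clause should hit the ORDER BY key for the lower
-- granularity to pay off.
SELECT uniqCombinedMerge(uv_state)
FROM daily_uv_g512
WHERE site_id = 42;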
Thank you very much for your response. Unfortunately, my issue is still not resolved...
I found that the problem occurs in the Aggregator step "Converting aggregation data to two-level". The work is not evenly distributed across the 32 threads; why is it being executed serially? The log is below:
┌─────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ time │ message │
├─────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 15:09:07.36864 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.020867452 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368693 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021178829 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368666 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021077567 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.36867 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.020974254 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368676 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.020972442 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368678 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021043224 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368685 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.020954714 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368665 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021135261 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368717 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021003028 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368742 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.020921154 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368884 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021311424 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.368919 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021268207 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.36897 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021285537 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.369067 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021509976 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.369201 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021444408 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.36923 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.021625077 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.371471 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.023922573 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.378136 │ <Trace> AggregatingTransform: Aggregated. 0 to 0 rows (from 0.00 B) in 0.030356882 sec. (0.000 rows/sec., 0.00 B/sec.) │
│ 15:09:07.4025 │ <Trace> AggregatingTransform: Aggregated. 8474 to 265 rows (from 797.23 KiB) in 0.054687059 sec. (154954.392 rows/sec., 14.24 MiB/sec.) │
│ 15:09:07.549225 │ <Trace> AggregatingTransform: Aggregated. 10473 to 200 rows (from 834.32 KiB) in 0.201762457 sec. (51907.576 rows/sec., 4.04 MiB/sec.) │
│ 15:09:07.633582 │ <Trace> AggregatingTransform: Aggregated. 3434 to 232 rows (from 63.72 KiB) in 0.286071834 sec. (12003.978 rows/sec., 222.73 KiB/sec.) │
│ 15:09:07.702826 │ <Trace> AggregatingTransform: Aggregated. 12771 to 387 rows (from 236.96 KiB) in 0.355322528 sec. (35941.994 rows/sec., 666.89 KiB/sec.) │
│ 15:09:07.859172 │ <Trace> AggregatingTransform: Aggregated. 3471 to 205 rows (from 64.40 KiB) in 0.511773644 sec. (6782.295 rows/sec., 125.84 KiB/sec.) │
│ 15:09:08.099212 │ <Trace> AggregatingTransform: Aggregated. 42837 to 602 rows (from 3.90 MiB) in 0.751317392 sec. (57015.850 rows/sec., 5.19 MiB/sec.) │
│ 15:09:08.695575 │ <Trace> AggregatingTransform: Aggregated. 44103 to 932 rows (from 3.30 MiB) in 1.348089325 sec. (32715.191 rows/sec., 2.45 MiB/sec.) │
│ 15:09:08.956363 │ <Trace> AggregatingTransform: Aggregated. 96532 to 1166 rows (from 8.06 MiB) in 1.608933201 sec. (59997.519 rows/sec., 5.01 MiB/sec.) │
│ 15:09:10.766329 │ <Trace> AggregatingTransform: Aggregated. 85545 to 601 rows (from 7.18 MiB) in 3.418452112 sec. (25024.484 rows/sec., 2.10 MiB/sec.) │
│ 15:09:10.9176 │ <Trace> AggregatingTransform: Aggregated. 52264 to 328 rows (from 4.07 MiB) in 3.569728396 sec. (14640.890 rows/sec., 1.14 MiB/sec.) │
│ 15:09:11.337369 │ <Trace> AggregatingTransform: Aggregated. 43720 to 380 rows (from 3.92 MiB) in 3.989532943 sec. (10958.676 rows/sec., 1005.43 KiB/sec.) │
│ 15:09:11.557088 │ <Trace> AggregatingTransform: Aggregated. 88537 to 1075 rows (from 7.23 MiB) in 4.209653263 sec. (21031.898 rows/sec., 1.72 MiB/sec.) │
│ 15:09:16.434633 │ <Trace> AggregatingTransform: Aggregated. 68540 to 628 rows (from 2.03 MiB) in 9.087120514 sec. (7542.543 rows/sec., 228.68 KiB/sec.) │
│ 15:09:19.105799 │ <Trace> AggregatingTransform: Aggregated. 167042 to 1055 rows (from 13.03 MiB) in 11.757854867 sec. (14206.843 rows/sec., 1.11 MiB/sec.) │
├─────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ 32 rows, 2 columns │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
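For anyone debugging the same pattern: the point at which the Aggregator switches to a two-level hash table, and how many threads the query may use, can be experimented with per query. A sketch using the hypothetical names from earlier; the setting values are examples to test with, not recommendations:

SELECT site_id, uniqCombinedMerge(uv_state)
FROM daily_uv
GROUP BY site_id
SETTINGS
    max_threads = 32,
    group_by_two_level_threshold = 10000,          -- switch to two-level after this many keys
    group_by_two_level_threshold_bytes = 50000000; -- ...or after this many bytes

Comparing the AggregatingTransform trace lines between runs should show whether the two-level conversion and the final merge parallelize any better.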