Comments (9)
I looked into this but was unable to reproduce the issue on my machine. The reader seems to be working fine, as seen in the screenshot below.
![image](https://private-user-images.githubusercontent.com/14217455/332673029-155b67b6-ead1-46a4-a506-0e134a918eb5.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTg5NTA0OTAsIm5iZiI6MTcxODk1MDE5MCwicGF0aCI6Ii8xNDIxNzQ1NS8zMzI2NzMwMjktMTU1YjY3YjYtZWFkMS00NmE0LWE1MDYtMGUxMzRhOTE4ZWI1LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA2MjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNjIxVDA2MDk1MFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQ3ZjFmYTkxY2M0N2EyNjU3OTRjMjZkMTBlMTZlNGQ5ZGQ5OWNkMzliMzM5ZDUyYTNhMGM0MDQ5OTNlNzNiMDUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.wg0FWKcekF7lVmKaJJ2KCQtewxj20xUChjEzsDXysTA)
from cudf.
Here is my test code. I did notice in `nvidia-smi` that we see a transient GPU memory spike of 21.4 GiB, which quickly goes down and saturates at around 10 GiB. I am assuming that this 21.4 GiB transient might be the culprit behind the OOM. It doesn't fail on my machine because my GPUs are otherwise free and can handle the transient.
```python
def test_parquet_chunked_reader_oom():
    reader = ParquetReader(
        ["/home/coder/datasets/lineitem.parquet"], chunk_read_limit=24000000
    )
    while reader._has_next():
        chunk = reader._read_chunk()
```
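For what it's worth, the spike can be captured by polling `nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits -l 1` while the test runs. Here is a small helper to pull the peak out of that output (just a sketch; `peak_used_mib` is a name I made up, not part of cudf):

```python
def peak_used_mib(lines):
    """Return the peak memory.used value (in MiB) from lines produced by
    nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits."""
    values = [int(line.strip()) for line in lines if line.strip()]
    return max(values) if values else 0

# e.g. one sample per second while the chunked read runs:
samples = ["1024", "21914", "10444", "10240"]
print(peak_used_mib(samples))  # -> 21914
```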
> I did notice in `nvidia-smi` that we see a transient GPU memory spike of 21.4 GiB and it quickly goes down and saturates at around 10 GiB. I am assuming that this 21.4 GiB transient might be the culprit behind the OOM.
Exactly, I noticed this too on a T4, and since T4s can't handle that much memory we end up with an OOM there as well.
Thank you @mhaseeb123 and @galipremsagar for testing this. I don't believe the C++ API encounters this memory spike in the `_has_next()` function. @mhaseeb123, would you please check? @nvdbaranec, would you please share your thoughts?
First question: the code I'm seeing is just `read_parquet()`, not the chunked reader:

```python
In [4]: table = pa.Table.from_pandas(pd.read_parquet("lineitem.parquet"))
```
The regular reader is implemented in terms of the chunked reader, but with no limits set, i.e. infinite sizes. So if you're just using that, OOMs are absolutely possible.
If this code is somehow using the chunked reader, note that there are two parameters:
- the output chunk limit, which limits the total size in bytes of each output chunk, but does nothing to control the memory usage of the decode process.
- the input chunk limit, which limits how much (temporary) memory will be used during the decoding process.
They can be set independently, but only the input chunk limit will work to keep OOMs down.
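To make the distinction concrete, here is a toy model (not cudf's actual implementation) of how the two limits interact; it shows that the output chunk limit only slices the result, while the input/pass limit is what bounds temporary decode memory:

```python
# Toy model of the two chunked-reader limits. A limit of 0 means
# "unlimited", matching the reader's convention.
def read_chunked(total_bytes, chunk_read_limit=0, pass_read_limit=0):
    """Simulate reading total_bytes of decoded data.

    Returns (output_chunk_sizes, peak_temp_bytes)."""
    pass_size = total_bytes if pass_read_limit == 0 else min(pass_read_limit, total_bytes)
    chunk_size = total_bytes if chunk_read_limit == 0 else min(chunk_read_limit, total_bytes)

    peak_temp = 0
    chunks = []
    remaining = total_bytes
    while remaining > 0:
        # one "pass": decode up to pass_size bytes into temporary memory
        this_pass = min(pass_size, remaining)
        peak_temp = max(peak_temp, this_pass)
        # the decoded pass is then sliced into output chunks
        produced = 0
        while produced < this_pass:
            c = min(chunk_size, this_pass - produced)
            chunks.append(c)
            produced += c
        remaining -= this_pass
    return chunks, peak_temp

# Output limit alone: small chunks, but temp memory = the whole file.
_, peak = read_chunked(100, chunk_read_limit=10)
print(peak)  # -> 100

# Adding an input/pass limit bounds the temporary memory.
_, peak = read_chunked(100, chunk_read_limit=10, pass_read_limit=25)
print(peak)  # -> 25
```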
Thank you @nvdbaranec. Yes, we are trying to use the chunked reader here in Python. It looks like we might not be setting the "input chunk limit".
Sorry, I misread. I thought the first block of code was where the bug was. It is odd that the one that uses the chunked reader directly would fail. There should be no difference between the two in overall memory usage, but maybe that small chunk value specified (24 MB) is throwing something for a loop.
In this case, I would not expect the input limit to make a difference since it clearly loads in the non-chunked case.
The following test code and patch for #15728 make things smooth again:
```python
def test_parquet_chunked_reader_oom():
    # setting pass_read_limit to 16 GB, but smaller values also work
    reader = ParquetReader(
        ["/home/coder/datasets/lineitem.parquet"],
        chunk_read_limit=24000000,
        pass_read_limit=16384000000,
    )
    table = []
    while reader._has_next():
        chunk = reader._read_chunk()
        # table = table + chunk  # concatenation not needed for testing
```
```diff
diff --git a/python/cudf/cudf/_lib/parquet.pyx b/python/cudf/cudf/_lib/parquet.pyx
index aa18002fe1..14c1d00c06 100644
--- a/python/cudf/cudf/_lib/parquet.pyx
+++ b/python/cudf/cudf/_lib/parquet.pyx
@@ -763,6 +763,7 @@ cdef class ParquetWriter:
 cdef class ParquetReader:
     cdef bool initialized
     cdef unique_ptr[cpp_chunked_parquet_reader] reader
+    cdef size_t pass_read_limit
     cdef size_t chunk_read_limit
     cdef size_t row_group_size_bytes
     cdef table_metadata result_meta
@@ -781,7 +782,7 @@ cdef class ParquetReader:
     def __cinit__(self, filepaths_or_buffers, columns=None, row_groups=None,
                   use_pandas_metadata=True,
-                  Expression filters=None, int chunk_read_limit=1024000000):
+                  Expression filters=None, size_t chunk_read_limit=1024000000, size_t pass_read_limit=0):

         # Convert NativeFile buffers to NativeFileDatasource,
         # but save original buffers in case we need to use
@@ -831,9 +832,10 @@ cdef class ParquetReader:
         self.allow_range_index &= filters is None
         self.chunk_read_limit = chunk_read_limit
+        self.pass_read_limit = pass_read_limit

         with nogil:
-            self.reader.reset(new cpp_chunked_parquet_reader(chunk_read_limit, args))
+            self.reader.reset(new cpp_chunked_parquet_reader(chunk_read_limit, pass_read_limit, args))
         self.initialized = False
         self.row_groups = row_groups
         self.filepaths_or_buffers = filepaths_or_buffers
```
The issue has been resolved by using `pass_read_limit`. Thanks @mhaseeb123 & @nvdbaranec!
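As a closing note, one way to pick a `pass_read_limit` for a fixed-memory card like a 16 GiB T4 is to cap it at a fraction of total device memory, leaving headroom for the output chunks and other allocations. The 50% fraction and the helper name below are my own assumptions, not a cudf recommendation:

```python
def suggest_pass_read_limit(total_device_bytes, headroom_fraction=0.5):
    """Hypothetical helper: cap the decode pass at a fraction of device
    memory, leaving the remainder for output chunks and other allocations."""
    return int(total_device_bytes * headroom_fraction)

T4_TOTAL = 16 * 1024**3  # 16 GiB
print(suggest_pass_read_limit(T4_TOTAL))  # -> 8589934592 (8 GiB)
```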