Comments (5)
Is there any use case where numpy/pandas and the like are not a better alternative?
from xgboost.
For time-series data such as stock-exchange data, where we predict the next several days' return: say there are 100 features and we roll the data over a 20-day window. To fit a DMatrix, we have to shift the features 20 times, so memory usage becomes 20x even though most of the data is duplicated. If I could define a custom __getitem__, it would greatly reduce memory usage.
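To make the duplication concrete: with numpy (which the question above mentions), the 20 shifted copies can often be replaced by a zero-copy strided view. A sketch, assuming the features fit a plain 2-D array; note that DMatrix would still copy such a view when it ingests the data, so this only helps on the preprocessing side:

```python
import numpy as np

rows, n_features, window = 1000, 100, 20
X = np.random.default_rng(0).standard_normal((rows, n_features)).astype("float32")

# Naive approach: materialize the 20 shifted copies -> roughly 20x memory.
shifted = np.stack([X[i : rows - window + 1 + i] for i in range(window)], axis=1)

# Zero-copy alternative: a strided view over the same buffer.
view = np.lib.stride_tricks.sliding_window_view(X, window_shape=window, axis=0)
view = np.moveaxis(view, -1, 1)  # the window axis comes last; move it next to rows

assert view.shape == (rows - window + 1, window, n_features)
assert np.array_equal(shifted, view)   # same values...
assert np.shares_memory(view, X)       # ...but no extra copy
```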
BTW, please do me a favor, check #9625.
from xgboost.
Currently, one can consume data in batches by using the callback function. I took a quick look into LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism.
See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py
from xgboost.
BTW, please do me a favor, check #9625.
sure, will look into it.
from xgboost.
Currently, one can consume data in batches by using the callback function. I took a quick look into LGB, which implements from_seq with a function named _push_rows. I assume that's similar to the callback function we use in terms of the underlying mechanism. See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/quantile_data_iterator.py
I have looked into this demo code. It looks like QuantileDMatrix consumes the data iterator more than once (4 times in my case). As a quantile structure, this saves a lot of memory. But for ranking problems, how do I set group weights if necessary? My original requirement is that the data be consumed at the training stage, not at the QuantileDMatrix.__init__ stage.
One more thing, there is a little trap one may fall into if not careful.
# run in version 1.7.6
import numpy as np
import pandas as pd
import xgboost as xgb
np.random.seed(42)
n_groups = 100
group_size = 2000
n_features = 10
n_levels = 20
rows = n_groups * group_size
features = pd.DataFrame(np.random.randn(rows, n_features).astype('float32'), columns=[f'f{i:03d}' for i in range(n_features)])
qids = pd.Series(np.arange(rows, dtype='int') // group_size)
labels = pd.Series(np.random.randn(rows).astype('float32')).groupby(qids).rank(method='first').sub(1) // (group_size // n_levels)
weights = np.arange(1, 101)
# dmatrix = xgb.DMatrix(features, label=labels, qid=qids)
qmatrix = xgb.QuantileDMatrix(features, label=labels, qid=qids)
sub_rows = 10000
sub_qmatrix = xgb.QuantileDMatrix(features.tail(sub_rows))
sub_dmatrix = xgb.DMatrix(features.tail(sub_rows))
params = {
'objective': 'rank:pairwise',
# 'objective': 'multi:softprob',
# 'num_class': n_levels,
'base_score': 0.5,
# 'lambdarank_pair_method': 'mean',
# 'lambdarank_num_pair_per_sample': 1,
'booster': 'gbtree',
'tree_method': 'hist',
'verbosity': 1,
# 'seed': 42,
'learning_rate': 0.1,
'max_depth': 6,
'gamma': 1,
'min_child_weight': 4,
'subsample': 0.9,
'colsample_bytree': 0.7,
'nthread': 20,
'reg_lambda': 1,
'reg_alpha': 1,
'eval_metric': ['ndcg@100', 'ndcg@500', 'ndcg@1000'],
}
booster = xgb.train(params, qmatrix, 100, verbose_eval=10, evals=[(qmatrix, 'train')])
preds_d = booster.predict(sub_dmatrix)
preds_q = booster.predict(sub_qmatrix)
preds_o = booster.predict(qmatrix)[-sub_rows:]
assert np.allclose(preds_d, preds_q)  # AssertionError: allclose is False
assert np.allclose(preds_o, preds_q)  # would also fail: allclose is False
assert np.allclose(preds_o, preds_d)  # would pass: allclose is True
The script above will raise an AssertionError. So if one trains a booster with a QuantileDMatrix and then predicts with a QuantileDMatrix that does not originate from the training one, wrong predictions may occur, presumably because the histogram split points have changed.
from xgboost.