Comments (6)
I'm not sure if it's possible to treat an expression as "star-args".
Some "workarounds" in case they are of any use:
It is possible to use a struct, but it changes what the function receives:
df = pl.DataFrame({
    "B_1": [1, 2, 3, 4],
    "B_2": [5, 6, 7, 8],
    "B_3": [9, 10, 11, 12],
}).with_row_index()

df.with_columns(
    pl.map_batches(
        pl.struct("^B_.*$"),
        lambda x: x[0].struct[0] + x[0].struct[1] + x[0].struct[2],
    ).name.prefix("C_")
)
# shape: (4, 5)
# ┌───────┬─────┬─────┬─────┬───────┐
# │ index ┆ B_1 ┆ B_2 ┆ B_3 ┆ C_B_1 │
# │ --- ┆ --- ┆ --- ┆ --- ┆ --- │
# │ u32 ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
# ╞═══════╪═════╪═════╪═════╪═══════╡
# │ 0 ┆ 1 ┆ 5 ┆ 9 ┆ 15 │
# │ 1 ┆ 2 ┆ 6 ┆ 10 ┆ 18 │
# │ 2 ┆ 3 ┆ 7 ┆ 11 ┆ 21 │
# │ 3 ┆ 4 ┆ 8 ┆ 12 ┆ 24 │
# └───────┴─────┴─────┴─────┴───────┘
Filter the matching column names out of df.columns and pass them as a list:
df.with_columns(
    pl.map_batches(
        list(pl.Series(df.columns).str.extract('^(B_.*)$').drop_nulls()),
        lambda x: x[0] + x[1] + x[2],
    ).name.prefix("C_")
)
Which I think would also be equivalent to expanding a selector:
import polars.selectors as cs
df.with_columns(
    pl.map_batches(
        cs.expand_selector(df, cs.matches("^B_.*")),
        lambda x: x[0] + x[1] + x[2],
    ).name.prefix("C_")
)
In fact, my scenario is that the column names after to_dummies
are dynamic, but I want the ols
function to stay simple.
The last two methods you provided additionally require the df to be passed in.
import polars as pl
import polars.selectors as cs
df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()
df = df.with_columns(df.to_dummies('B'))
def func(yx):
    return yx[0] + yx[1] + yx[2]

def lstsq_1(y, *x):
    return pl.map_batches([y, *x], func)

def lstsq_2(y, *x):
    # relies on the module-level `df`, which is not ideal
    z = cs.expand_selector(df, x[0])
    return pl.map_batches([y, *z], func)

out = df.with_columns([
    # polars.exceptions.ComputeError: the name: 'resid' passed to `LazyFrame.with_columns` is duplicate
    # lstsq_1(pl.col('A'), cs.matches('^B_.*$')).alias('resid'),
    # works, but still depends on `df`
    lstsq_2(pl.col('A'), cs.matches("^B_.*$")).alias('resid'),
])
print(out)
"""
shape: (4, 8)
┌───────┬─────┬─────┬─────┬─────┬─────┬─────┬───────┐
│ index ┆ A ┆ B ┆ B_1 ┆ B_2 ┆ B_3 ┆ B_4 ┆ resid │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u8 ┆ u8 ┆ u8 ┆ u8 ┆ i64 │
╞═══════╪═════╪═════╪═════╪═════╪═════╪═════╪═══════╡
│ 0 ┆ 9 ┆ 1 ┆ 1 ┆ 0 ┆ 0 ┆ 0 ┆ 10 │
│ 1 ┆ 10 ┆ 2 ┆ 0 ┆ 1 ┆ 0 ┆ 0 ┆ 11 │
│ 2 ┆ 11 ┆ 3 ┆ 0 ┆ 0 ┆ 1 ┆ 0 ┆ 11 │
│ 3 ┆ 12 ┆ 4 ┆ 0 ┆ 0 ┆ 0 ┆ 1 ┆ 12 │
└───────┴─────┴─────┴─────┴─────┴─────┴─────┴───────┘
"""
from functools import reduce
import polars as pl
import polars.selectors as cs
df = pl.DataFrame({
    "A": [9, 10, 11, 12],
    "B": [1, 2, 3, 4],
}).with_row_index()
df = df.with_columns(df.to_dummies('B'))
def func_2(yx):
    y = yx[0]
    x = yx[1:]
    return y - reduce(lambda a, b: a + b, x)

def func_1(yx):
    y = yx[0].struct[0]
    x = list(yx[0].struct)[1:]
    return y - reduce(lambda a, b: a + b, x)

def lstsq_1(y, *x):
    return pl.map_batches(pl.struct([y, *x]), func_1)

def lstsq_2(y, *x):
    # relies on the module-level `df`, which is not ideal
    z = cs.expand_selector(df, x[0])
    return pl.map_batches([y, *z], func_2)
out = df.with_columns([
    lstsq_1(pl.col('A'), pl.col('^B_.*$')).name.prefix('C_'),
    lstsq_2(pl.col('A'), cs.matches("^B_.*$")).name.prefix('D_'),
])
print(out)
"""
shape: (4, 9)
┌───────┬─────┬─────┬─────┬───┬─────┬─────┬─────┬─────┐
│ index ┆ A ┆ B ┆ B_1 ┆ … ┆ B_3 ┆ B_4 ┆ C_A ┆ D_A │
│ --- ┆ --- ┆ --- ┆ --- ┆ ┆ --- ┆ --- ┆ --- ┆ --- │
│ u32 ┆ i64 ┆ i64 ┆ u8 ┆ ┆ u8 ┆ u8 ┆ i64 ┆ i64 │
╞═══════╪═════╪═════╪═════╪═══╪═════╪═════╪═════╪═════╡
│ 0 ┆ 9 ┆ 1 ┆ 1 ┆ … ┆ 0 ┆ 0 ┆ 8 ┆ 8 │
│ 1 ┆ 10 ┆ 2 ┆ 0 ┆ … ┆ 0 ┆ 0 ┆ 9 ┆ 9 │
│ 2 ┆ 11 ┆ 3 ┆ 0 ┆ … ┆ 1 ┆ 0 ┆ 10 ┆ 10 │
│ 3 ┆ 12 ┆ 4 ┆ 0 ┆ … ┆ 0 ┆ 1 ┆ 11 ┆ 11 │
└───────┴─────┴─────┴─────┴───┴─────┴─────┴─────┴─────┘
"""
Yeah, as far as I am aware it is only possible "at the expression level" using a struct, because Polars converts multi-column selectors into individual column selectors.
i.e.
df.with_columns(
    pl.col("^B_.*$").map_batches(...)
)
is turned into:
df.with_columns(
    pl.col("B_1").map_batches(...),
    pl.col("B_2").map_batches(...),
    pl.col("B_3").map_batches(...),
)
Otherwise you need to query the frame's schema in order to get the list of column names.
If you're chaining a bunch of methods and want the df.columns
of an intermediate step, you can pipe through a lambda like this:
(
    df
    .with_columns(a=pl.col('b') * 2)
    .pipe(lambda df:
        df.with_columns(pl.col(x) for x in df.columns)
    )
)
The issue seems to be that you can pass a sequence of expressions to pl.map_batches,
and it passes them to the function all together (similar to if you had used a struct):
pl.map_batches([pl.col("B_1"), pl.col("B_2"), pl.col("B_3")], ...)
It seems to use this map_mul,
which I hadn't seen before:
polars/py-polars/polars/functions/lazy.py, lines 910 to 912 at 6a181f2
But they want to be able to do this by specifying a single pl.col(regex)
instead.