Comments (6)
Copying my comment from the PR.

Is it faster to compile, or does it produce smaller Core? IIRC record updates are still compiled into a Core `case`?

AFAIK, big records just lead to quadratic code size unless you get very creative, as in https://well-typed.com/blog/2021/08/large-records/
from lens.
The resulting Core is slightly different. I changed the example to just two fields for brevity:

```haskell
data Big = Big { _a0 :: Int, _a1 :: Int }
```

Then both Cores for `a0` start with:

```haskell
a0 [InlPrag=INLINE (sat-args=2)] :: Lens' Big Int
[GblId, Arity=3, Caf=NoCafRefs, Unf=OtherCon []]
a0
  = \ (@ (x0 :: * -> *))
      ($dFunctor_a0 :: Functor x0)
      (eta_B2 :: Int -> x0 Int)
      (eta1_B1 :: Big) ->
```
But then the two differ. `makeLenses` resumes with:

```haskell
      case eta1_B1 of { Big x1 x2 ->
      fmap
        @ x0
        $dFunctor_a0
        @ Int
        @ Big
        (\ (y0 :: Int) -> BigRecord.Big y0 x2)
        (eta_B2 x1)
      }
```
while the new variant produces the following Core:

```haskell
      fmap
        @ x0
        $dFunctor_a0
        @ Int
        @ Big
        (\ (y :: Int) ->
           case eta1_B1 of { Big x1 x2 ->
           BigRecord.Big y x2
           })
        (eta_B2 (case eta1_B1 of { Big x1 x2 -> x1 }))
```
The difference is that the regular version matches on `eta1_B1` once, outside, while the new one matches on it twice, inside, so it's actually longer Core! So I don't know what exactly makes it faster to compile, then.
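For reference, the two generation styles under discussion can be sketched in plain Haskell (a minimal hand-written reconstruction, not the actual Template Haskell output; the names `a0Case` and `a0Rec` are mine):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Big = Big { _a0 :: Int, _a1 :: Int } deriving (Eq, Show)

-- makeLenses style: pattern-match on the whole record, so the
-- rebuilt constructor mentions every field explicitly.
a0Case :: Lens' Big Int
a0Case f (Big x1 x2) = fmap (\y0 -> Big y0 x2) (f x1)

-- record-syntax style: a record update, whose source is
-- constant-size no matter how many fields Big has.
a0Rec :: Lens' Big Int
a0Rec f s = fmap (\y -> s { _a0 = y }) (f (_a0 s))
```

Both definitions behave identically; only the generated source, and hence the Core shown above, differs.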
from lens.
At a wild guess, using record syntax might save time in the type-checker, because you're type-checking an expression that is constant-size rather than linear in the number of record fields? You could test this hypothesis by compiling with `-ddump-timings` to see where the time is being spent.
Given that the Core is larger with the new variant, I wonder if optimizing use sites might end up taking longer. And is there a semantic or runtime performance difference arising from the change?
from lens.
> Given that the Core is larger with the new variant, I wonder if optimizing use sites might end up taking longer. And is there a semantic or runtime performance difference arising from the change?
For `over modifier l big`, `over = coerce`, and the `Functor` in question is `Identity`, so:

```haskell
eta1_B1 = big :: Big
eta_B2  = modifier :: Int -> Int
```
```haskell
-- makeLenses
case eta1_B1 of { Big x1 x2 ->
fmap
  @ x0
  $dFunctor_a0
  @ Int
  @ Big
  (\ (y0 :: Int) -> Big y0 x2)
  (eta_B2 x1)
}

over modifier l big
  = case big of { Big x1 x2 ->
    (\ (y0 :: Int) -> Big y0 x2) (modifier x1)
    }
  =<beta-redux>
    case big of { Big x1 x2 ->
    let y0 = modifier x1 in Big y0 x2
    }
  =<inline>
    case big of { Big x1 x2 ->
    Big (modifier x1) x2
    }
```
```haskell
-- using records
fmap
  @ x0
  $dFunctor_a0
  @ Int
  @ Big
  (\ (y :: Int) ->
     case eta1_B1 of { Big x1 x2 ->
     Big y x2
     })
  (eta_B2 (case eta1_B1 of { Big x1 x2 -> x1 }))

over modifier l big
  = (\ (y :: Int) ->
       case big of { Big x1 x2 ->
       Big y x2
       })
      (modifier (case big of { Big x1 x2 -> x1 }))
  =<beta-redux>
    let y = modifier (case big of { Big x1 x2 -> x1 })
    in case big of { Big x1 x2 ->
       Big y x2
       }
  =<inline>
    case big of { Big x1 x2 ->
    Big (modifier (case big of { Big x1 x2 -> x1 })) x2
    }
  =<notice that big is cased on twice>
    case big of { Big x1 x2 ->
    Big (modifier x1) x2
    }
```
I think GHC is smart enough to figure out the double-`case` optimization, but I will be surprised if the use site is faster to optimize.
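The derivation above can be checked concretely. With `over` hand-rolled via `Identity` (a sketch; in lens proper the `Identity` wrapping and unwrapping is a `coerce`), both lens styles reduce to the same result:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Identity (Identity (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Big = Big { _a0 :: Int, _a1 :: Int } deriving (Eq, Show)

-- case-based (makeLenses style)
a0Case :: Lens' Big Int
a0Case f (Big x1 x2) = fmap (\y0 -> Big y0 x2) (f x1)

-- record-update style
a0Rec :: Lens' Big Int
a0Rec f s = fmap (\y -> s { _a0 = y }) (f (_a0 s))

-- over specialises the lens's Functor to Identity
over :: Lens' s a -> (a -> a) -> s -> s
over l f = runIdentity . l (Identity . f)

main :: IO ()
main = do
  print (over a0Case (+ 1) (Big 1 2))  -- Big {_a0 = 2, _a1 = 2}
  print (over a0Rec  (+ 1) (Big 1 2))  -- Big {_a0 = 2, _a1 = 2}
```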
(The `view` case is simpler, as there `eta_B2` is replaced with `coerce`, and `fmap @(Const r)` is also just `coerce`.)
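To make that parenthetical concrete, here is a minimal hand-rolled `view` (a sketch; lens's real `view` works over `MonadReader` and uses `coerce` directly):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))

type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

data Big = Big { _a0 :: Int, _a1 :: Int }

a0 :: Lens' Big Int
a0 f (Big x1 x2) = fmap (\y0 -> Big y0 x2) (f x1)

-- view specialises f to Const a: the Const wrapper plays the role
-- of eta_B2, and fmap for Const ignores its function argument, so
-- both amount to coerce in Core.
view :: Lens' s a -> s -> a
view l = getConst . l Const

main :: IO ()
main = print (view a0 (Big 1 2))  -- 1
```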
from lens.
> At a wild guess, using record syntax might save time in the type-checker because you're type-checking an expression that is constant-size rather than linear in the number of record fields? You could test this hypothesis by compiling with `-ddump-timings` to see where the time is being spent.
The results of `-ddump-timings` show an improvement in Renamer/typechecker time here from ~2019 to ~456 (btw, what unit is this? milliseconds?).
Last steps:

| Stage | Normal | Record Syntax |
|---|---|---|
| Renamer/typechecker | alloc=1674493960 time=2018.927 | alloc=344285416 time=455.906 |
| Desugar | alloc=382819504 time=550.085 | alloc=535813632 time=805.058 |
| Simplifier | alloc=3741509512 time=2641.438 | alloc=4725170280 time=3449.629 |
| CoreTidy | alloc=445557936 time=403.412 | alloc=841371136 time=398.969 |
| CorePrep | alloc=17176 time=0.052 | alloc=17176 time=0.037 |
| CodeGen | alloc=10154994752 time=4546.312 | alloc=7060459536 time=3597.344 |
from lens.
#987 adds an option to generate lenses using record syntax, but this option is disabled by default. I'll leave this issue open to discuss whether we should change the default.
from lens.