Comments (8)
I think I have found the issue. The problem is that GLPK is non-reentrant, i.e., not thread safe. This is stated loud and clear in the manual for both v4.45 and v5.0. Other people have asked about thread safety at GLPK.jl, but with no resolution. I have raised the issue with GLPK.jl, but we might consider another default solver (although I have yet to find one that is fast enough for small problems).
from lazysets.jl.
For me this already crashes with the simpler set (see the stacktrace below); I am not sure why it works for you. In any case, as written in the warning box here, some solvers do not support parallelism, and the error indicates that GLPK is one of them.
It seems that the LP solver is called from remove_redundant_constraints. Currently the code does not allow choosing this solver explicitly. You can first confirm this by passing prune=false to minkowski_sum. If that works, you can remove redundant constraints by calling that function explicitly in your loop, where you would have to pass another solver (here is a list of options). I have not experimented with other LP solvers, so I cannot recommend one that supports parallelism. You could ask in the jump-dev group which LP solvers support parallelism (they have a forum and a Slack channel, links here).
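A hedged sketch of that workaround, assuming H-representation polytopes and that remove_redundant_constraints accepts a backend keyword (both the keyword name and the choice of Tulip as the alternative solver are assumptions, not tested recommendations):

```julia
using LazySets, Tulip
import JuMP

P1 = rand(HPolytope; dim=2)
P2 = rand(HPolytope; dim=2)

# skip the internal pruning step, which uses GLPK by default ...
Q = minkowski_sum(P1, P2; prune=false)

# ... and prune explicitly afterwards with a different LP solver;
# the `backend` keyword name is an assumption -- check the LazySets docs
Qpruned = remove_redundant_constraints(Q; backend=JuMP.Model(Tulip.Optimizer))
```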
glp_free: memory allocation error
Error detected in file env/alloc.c at line 70
signal (6): Aborted
in expression starting at REPL[9]:1
pthread_kill at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
raise at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
abort at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
errfunc at /workspace/srcdir/glpk-5.0/src/env/error.c:53
dma at /workspace/srcdir/glpk-5.0/src/env/alloc.c:70
_glp_dmp_delete_pool at /workspace/srcdir/glpk-5.0/src/misc/dmp.c:235
delete_prob at /workspace/srcdir/glpk-5.0/src/api/prob1.c:1553
glp_delete_prob at /workspace/srcdir/glpk-5.0/src/api/prob1.c:1581
glp_delete_prob at ~/.julia/packages/GLPK/k5XCe/src/gen/libglpk_api.jl:98 [inlined]
#2 at ~/.julia/packages/GLPK/k5XCe/src/MOI_wrapper/MOI_wrapper.jl:189
unknown function (ip: 0x7f39c93e3fe2)
_jl_invoke at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gf.c:2377 [inlined]
ijl_apply_generic at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gf.c:2559
jl_apply at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/julia.h:1843 [inlined]
run_finalizer at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gc.c:280
jl_gc_run_finalizers_in_list at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gc.c:367
run_finalizers at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gc.c:410
jl_gc_run_pending_finalizers at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gc.c:423
jl_mutex_unlock at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/julia_locks.h:131 [inlined]
ijl_task_get_next at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/partr.c:569
poptask at ./task.jl:921
wait at ./task.jl:930
task_done_hook at ./task.jl:634
jfptr_task_done_hook_49067.clone_1 at ~/programs/julia-1.8.5/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gf.c:2377 [inlined]
ijl_apply_generic at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/gf.c:2559
jl_apply at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/julia.h:1843 [inlined]
jl_finish_task at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/task.c:254
start_task at /cache/build/default-amdci4-2/julialang/julia-release-1-dot-8/src/task.c:942
Allocations: 78936650 (Pool: 78907549; Big: 29101); GC: 33
Aborted (core dumped)
Many thanks for your reply! Actually, when I checked on another Windows laptop, it also crashed with the simpler set.
Unfortunately, passing prune=false to minkowski_sum does not fix it, but it produces fewer errors (similar to yours), as shown below. It seems there is something else that does not support parallelism.
glp_free: memory allocation error
Error detected in file env/alloc.c at line 70
signal (6): Aborted
in expression starting at REPL[9]:1
gsignal at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
abort at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
errfunc at /workspace/srcdir/glpk-5.0/src/env/error.c:53
dma at /workspace/srcdir/glpk-5.0/src/env/alloc.c:70
_glp_dmp_delete_pool at /workspace/srcdir/glpk-5.0/src/misc/dmp.c:235
delete_prob at /workspace/srcdir/glpk-5.0/src/api/prob1.c:1553
glp_delete_prob at /workspace/srcdir/glpk-5.0/src/api/prob1.c:1581
glp_delete_prob at /home/dso/.julia/packages/GLPK/lOla6/src/gen/libglpk_api.jl:98 [inlined]
#2 at /home/dso/.julia/packages/GLPK/lOla6/src/MOI_wrapper/MOI_wrapper.jl:189
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
run_finalizer at /buildworker/worker/package_linux64/build/src/gc.c:278
jl_gc_run_finalizers_in_list at /buildworker/worker/package_linux64/build/src/gc.c:365
run_finalizers at /buildworker/worker/package_linux64/build/src/gc.c:394
jl_gc_run_pending_finalizers at /buildworker/worker/package_linux64/build/src/gc.c:405
jl_mutex_unlock at /buildworker/worker/package_linux64/build/src/julia_locks.h:131 [inlined]
jl_task_get_next at /buildworker/worker/package_linux64/build/src/partr.c:484
poptask at ./task.jl:827
wait at ./task.jl:836
task_done_hook at ./task.jl:544
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
jl_finish_task at /buildworker/worker/package_linux64/build/src/task.c:218
start_task at /buildworker/worker/package_linux64/build/src/task.c:888
Allocations: 85358068 (Pool: 85334760; Big: 23308); GC: 55
Aborted (core dumped)
By the way, I would like to share that minkowski_sum works with multi-processing. At least something works for my project! Many thanks for your help again!!! I can help test the multi-threading case if it is improved later.
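For reference, a minimal sketch of the multi-processing route using the Distributed standard library (worker count and workload are illustrative; each process gets its own copy of GLPK's global state, so the non-reentrancy problem does not arise):

```julia
using Distributed
addprocs(2)  # separate worker processes instead of threads

@everywhere using LazySets
@everywhere import Polyhedra

# each pmap task runs in its own process, so GLPK's state is not shared
@everywhere P = rand(VPolytope)
results = pmap(_ -> remove_redundant_vertices(P), 1:10)
```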
The call to minkowski_sum does quite a lot in the background. In particular, it converts the zonotope to a VPolytope, and in this process it also removes redundant vertices. This uses GLPK automatically. MWE:
using LazySets
import Polyhedra

P = rand(VPolytope)
for j in 1:10000
    Threads.@threads for i in 1:2
        remove_redundant_vertices(P)
    end
end
You cannot explicitly provide a solver via minkowski_sum to remove_redundant_vertices. What you can do is change the solver globally by redefining this method:
LazySets.jl/src/Initialization/init_Polyhedra.jl
Lines 16 to 24 in b955043
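A hedged sketch of such a redefinition (the method name and signature below are guesses based on the snippet's location in init_Polyhedra.jl; match them to the actual lines 16 to 24 before relying on this):

```julia
using LazySets, Tulip
import JuMP

# overwrite LazySets' default LP solver for Polyhedra-based operations;
# the name and signature are assumptions -- adapt them to the snippet above
function LazySets.default_lp_solver_polyhedra(N::Type{<:AbstractFloat}; kwargs...)
    return JuMP.optimizer_with_attributes(Tulip.Optimizer)
end
```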
I see! Many thanks for your explanation!!
Thanks for the investigation @Zinoex. Have you checked https://github.com/ds4dm/Tulip.jl? Do you think it might be a good default solver?
At this point, it feels like a "no free lunch" situation: there are pros and cons to each solver. To show this, I attach benchmark results below for GLPK, HiGHS, and Tulip.jl. I included HiGHS in the test because it was recommended by JuMP developer Oscar Dowson. The results clearly show that Tulip.jl tends to use considerably more memory than GLPK and HiGHS, which use basically the same amount. Both Tulip.jl and HiGHS are slower than GLPK (although my experience is that GLPK is slower for larger problems than the ones tested below). Comparing only Tulip.jl and HiGHS, Tulip.jl is faster for small problems, but the opposite holds for larger ones. Another consideration is that neither HiGHS nor Tulip.jl supports rational numbers (although I do not know how important this is). I am also unsure whether the reported memory for GLPK and HiGHS includes the memory allocated inside their binaries, whereas Tulip.jl is pure Julia. To understand how much more performance we may squeeze out of HiGHS, I also profiled support_function with the 2D HPolytope from the benchmark: 40% of the time was the optimization itself, and another 30% was creating the C struct of the problem (Highs_create), which we could cut in half by using JuMP.direct_model (this is easier with HiGHS since it supports more constraint types natively, and it would also save ~25% memory).
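The direct_model route mentioned above can be sketched as follows (a generic JuMP example, not the actual support_function internals):

```julia
using JuMP, HiGHS

# direct_model builds the problem directly in the solver's own structures,
# avoiding the extra MOI caching layer (roughly the Highs_create overhead)
model = direct_model(HiGHS.Optimizer())
set_silent(model)
@variable(model, x[1:2] >= 0)
@constraint(model, x[1] + 2x[2] >= 1)
@objective(model, Min, sum(x))
optimize!(model)
```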
To summarize:
- GLPK is (unfortunately) the fastest by a fair margin.
- Tulip.jl uses much more memory.
- Which of HiGHS and Tulip.jl is faster depends on the problem size.
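For context, the `res` entries below were produced with BenchmarkTools; a simplified sketch of the harness (the set generation, the direction choice, and the key layout of `res` are assumptions):

```julia
using BenchmarkTools, LazySets

res = Dict("ρ" => Dict())
for n in (2, 4, 6, 10)
    P = rand(HPolytope; dim=n)  # random polytope in dimension n
    d = ones(n)                 # direction for the support function ρ
    res["ρ"][("GLPK", "HPolytope", "dense", n)] = @benchmark ρ($d, $P)
end
```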
GLPK
julia> res["ρ"][("GLPK", "HPolytope", "dense", 2)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 34.289 μs … 8.657 ms ┊ GC (min … max): 0.00% … 97.75%
Time (median): 37.172 μs ┊ GC (median): 0.00%
Time (mean ± σ): 45.726 μs ± 115.079 μs ┊ GC (mean ± σ): 3.48% ± 1.39%
▃▇█▆▃▅▅▄▄▃▃▂▂▁▁ ▂▄▅▂▁▂▂▁▂▁▁▁▁▁▁▁▁ ▂
████████████████▇▆▆▅▆▆▆▆▆▅▅▅▄▅▅▅██████████████████████▇▇▇▆▇█ █
34.3 μs Histogram: log(frequency) by time 79.5 μs <
Memory estimate: 21.69 KiB, allocs estimate: 374.
julia> res["ρ"][("GLPK", "HPolytope", "dense", 4)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 67.808 μs … 5.206 ms ┊ GC (min … max): 0.00% … 94.12%
Time (median): 70.833 μs ┊ GC (median): 0.00%
Time (mean ± σ): 82.539 μs ± 159.924 μs ┊ GC (mean ± σ): 6.35% ± 3.25%
▆█▇▄▃▂▂▃▃▃▂▁▁ ▂▁ ▁▁ ▂
███████████████▇▇▇▆▇▇▇▇▆▆▅▆▆▅▆▅▅▆▆██▇▆███▇██▇▆▆▆▆▆▆▆▄▅▄▃▅▁▄▅ █
67.8 μs Histogram: log(frequency) by time 159 μs <
Memory estimate: 70.08 KiB, allocs estimate: 956.
julia> res["ρ"][("GLPK", "HPolytope", "dense", 6)]
BenchmarkTools.Trial: 6603 samples with 1 evaluation.
Range (min … max): 566.450 μs … 8.422 ms ┊ GC (min … max): 0.00% … 83.74%
Time (median): 597.367 μs ┊ GC (median): 0.00%
Time (mean ± σ): 753.360 μs ± 573.565 μs ┊ GC (mean ± σ): 5.32% ± 6.73%
█▇▆▅▄▄▃▂▂▃▂▁▁▁ ▁ ▂▂▂▂▂▁▂▁▁▂▂▂▂▁▁▁ ▂
████████████████████▆███▇▇█▇███▇▇▇▇███▇▇█████████████████▇█▇▆ █
566 μs Histogram: log(frequency) by time 1.2 ms <
Memory estimate: 727.23 KiB, allocs estimate: 7926.
julia> res["ρ"][("GLPK", "HPolytope", "dense", 10)]
BenchmarkTools.Trial: 73 samples with 1 evaluation.
Range (min … max): 58.513 ms … 89.176 ms ┊ GC (min … max): 4.37% … 9.56%
Time (median): 67.358 ms ┊ GC (median): 10.22%
Time (mean ± σ): 68.492 ms ± 6.158 ms ┊ GC (mean ± σ): 8.88% ± 2.60%
▂▅ ▅█▂ ▅▅▂ ▅ ▂▂ ▂
▅▁▅▁▁▅▅▅███▅████▅████▅▅████▁██▅▅▅▁▁▅▁▁▁▁▁▁▁▅▁▁▁▁▁▁▅▅▅▁▁▁▁▅▅ ▁
58.5 ms Histogram: frequency by time 85.9 ms <
Memory estimate: 60.51 MiB, allocs estimate: 418705.
HiGHS
julia> res["ρ"][("HiGHS", "HPolytope", "dense", 2)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 235.693 μs … 5.347 ms ┊ GC (min … max): 0.00% … 89.11%
Time (median): 244.202 μs ┊ GC (median): 0.00%
Time (mean ± σ): 269.157 μs ± 124.465 μs ┊ GC (mean ± σ): 0.70% ± 1.85%
▂█▇▅▄▃▂▂▁▁▁ ▁▂▁▁▁▁ ▁
█████████████▇▇▇▆▆▆▅▅▆▆▅▆▆▅▅▆▆▆▅▆▅▅▅▅▅▅▅▄▅▅███████▇▇▆▆▆▅▅▅▅▄▅ █
236 μs Histogram: log(frequency) by time 474 μs <
Memory estimate: 21.78 KiB, allocs estimate: 376.
julia> res["ρ"][("HiGHS", "HPolytope", "dense", 4)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 350.803 μs … 5.878 ms ┊ GC (min … max): 0.00% … 88.28%
Time (median): 362.058 μs ┊ GC (median): 0.00%
Time (mean ± σ): 390.667 μs ± 180.081 μs ┊ GC (mean ± σ): 1.32% ± 2.92%
▆█▆▅▄▃▂▁▁▁ ▁ ▁
████████████▇▇▇▆▆▇▆▆▇▆▆▆▆▆▆▅▇▇▇▇▆▅▅▅▄▅▄▄▆▇▆▅▇█▇▆▆▆▅▄▅▄▄▄▄▃▄▄▄ █
351 μs Histogram: log(frequency) by time 744 μs <
Memory estimate: 70.23 KiB, allocs estimate: 932.
julia> res["ρ"][("HiGHS", "HPolytope", "dense", 6)]
BenchmarkTools.Trial: 2387 samples with 1 evaluation.
Range (min … max): 1.543 ms … 7.057 ms ┊ GC (min … max): 0.00% … 66.15%
Time (median): 1.764 ms ┊ GC (median): 0.00%
Time (mean ± σ): 2.087 ms ± 736.502 μs ┊ GC (mean ± σ): 2.60% ± 7.45%
█▅▂
███▇▅▄▄▄▃▃▃▃▃▃▃▃▃▃▃▄▄▃▃▃▃▂▂▂▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▂▂▂▂▂ ▃
1.54 ms Histogram: frequency by time 5.93 ms <
Memory estimate: 727.45 KiB, allocs estimate: 7572.
julia> res["ρ"][("HiGHS", "HPolytope", "dense", 10)]
BenchmarkTools.Trial: 43 samples with 1 evaluation.
Range (min … max): 108.616 ms … 145.610 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 116.970 ms ┊ GC (median): 5.62%
Time (mean ± σ): 118.505 ms ± 7.881 ms ┊ GC (mean ± σ): 4.40% ± 2.70%
█▁███▁█▁███ █▁▁ █▁▁▁▁██▁█ ▁ ▁▁ █ ▁ ▁
███████████▁███▁█████████▁▁▁▁█▁▁██▁█▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
109 ms Histogram: frequency by time 146 ms <
Memory estimate: 60.51 MiB, allocs estimate: 400526.
Tulip
julia> res["ρ"][("Tulip", "HPolytope", "dense", 2)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 174.700 μs … 12.592 ms ┊ GC (min … max): 0.00% … 35.07%
Time (median): 185.761 μs ┊ GC (median): 0.00%
Time (mean ± σ): 243.179 μs ± 487.561 μs ┊ GC (mean ± σ): 4.52% ± 2.27%
▅█▇▆▄▄▃▃▂▂▂▂▂▁▁▁ ▃▄▄▃▃▂▁▁▁ ▁▁▁▁▁▁▁▁▁▁ ▂
██████████████████████▇██▇█▇█▆▇██████████████████████████▇█▇▆ █
175 μs Histogram: log(frequency) by time 370 μs <
Memory estimate: 129.59 KiB, allocs estimate: 1651.
julia> res["ρ"][("Tulip", "HPolytope", "dense", 4)]
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 339.021 μs … 12.563 ms ┊ GC (min … max): 0.00% … 38.87%
Time (median): 364.164 μs ┊ GC (median): 0.00%
Time (mean ± σ): 460.900 μs ± 625.129 μs ┊ GC (mean ± σ): 4.36% ± 3.18%
▄██▆▅▄▃▃▂▂▂▂▂▁▁▂▂▃▄▃▁▁▂▃▂▂▂▁▂▄▃▂▁▁▁▁▁▁▁ ▁▁ ▁▁▁ ▂
████████████████████████████████████████████████▇▇█▆▆▅▆▆▅▅▄▄▄ █
339 μs Histogram: log(frequency) by time 764 μs <
Memory estimate: 508.23 KiB, allocs estimate: 2598.
julia> res["ρ"][("Tulip", "HPolytope", "dense", 6)]
BenchmarkTools.Trial: 1289 samples with 1 evaluation.
Range (min … max): 2.779 ms … 11.194 ms ┊ GC (min … max): 0.00% … 42.28%
Time (median): 3.417 ms ┊ GC (median): 0.00%
Time (mean ± σ): 3.870 ms ± 1.228 ms ┊ GC (mean ± σ): 8.77% ± 13.51%
█▄▂
▇███▆▄▃▅▄▄▄▄▃▄▄▄▃▃▂▂▂▂▂▂▂▂▁▂▁▁▁▁▁▂▃▂▂▂▂▂▂▂▂▁▂▂▂▂▁▂▁▁▁▁▁▁▁▁ ▂
2.78 ms Histogram: frequency by time 7.41 ms <
Memory estimate: 6.81 MiB, allocs estimate: 13145.
julia> res["ρ"][("Tulip", "HPolytope", "dense", 10)]
BenchmarkTools.Trial: 19 samples with 1 evaluation.
Range (min … max): 255.919 ms … 284.787 ms ┊ GC (min … max): 12.55% … 15.29%
Time (median): 265.923 ms ┊ GC (median): 13.42%
Time (mean ± σ): 267.170 ms ± 6.472 ms ┊ GC (mean ± σ): 13.28% ± 1.59%
▃█ ▃
▇▁▁▁▁▇▁▁▁▇▁▁▁▁▁▇▁▁▇▇██▇▁▁▁▇▇▁▁▇▁▁█▁▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇ ▁
256 ms Histogram: frequency by time 285 ms <
Memory estimate: 420.75 MiB, allocs estimate: 585598.