lanl-ansi / rosetta-opf
AC-OPF Implementations in Various NLP Modeling Frameworks
License: Other
@odow I am thinking we can remove jump-casadi.jl from the variants directory in main, since casadi.jl provides a more faithful implementation of AC-OPF in CasADi, and maintaining CasADi_jll may be tedious. What do you think?
TODO
Load Julia with the variants/Project.toml environment:
(base) oscar@Oscars-MBP rosetta-opf % julia --project=variants
Julia Version 1.10.0 (2023-12-25), Official https://julialang.org/ release
(variants) pkg> st
Status `~/Documents/lanl-ansi/rosetta-opf/variants/Project.toml`
[7c4d4715] AmplNLWriter v1.2.0
[2569d6c7] ConcreteStructs v0.2.3
[992eb4ea] CondaPkg v0.2.22
[b6b21f68] Ipopt v1.6.0
[4076af6c] JuMP v1.18.1
[309f4015] MathOptSymbolicAD v0.1.3
[7f7a1694] Optimization v3.21.1
[fd9f6733] OptimizationMOI v0.3.4
[c36e90e8] PowerModels v0.20.1
[6099a3de] PythonCall v0.9.15
[37e2e3b7] ReverseDiff v1.15.1
⌃ [47a9eef4] SparseDiffTools v2.15.0
⌅ [9cc047cb] Ipopt_jll v300.1400.400+0
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated`
julia> versioninfo()
Julia Version 1.10.0
Commit 3120989f39b (2023-12-25 18:01 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: macOS (x86_64-apple-darwin22.4.0)
CPU: 8 × Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-15.0.7 (ORCJIT, skylake)
Threads: 1 on 8 virtual cores
julia> include("variants/optimization-cs-asrd.jl")
The first call is made to remove problem-independent compilation overhead:
julia> solve_opf("data/pglib_opf_case5_pjm.m")
[info | PowerModels]: extending matpower format with data: areas 1x3
[info | PowerModels]: removing 1 cost terms from generator 4: [4000.0, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 1: [1400.0, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 5: [1000.0, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 2: [1500.0, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 3: [3000.0, 0.0]
[info | PowerModels]: updated generator 4 cost function with order 2 to a function of order 3: [0.0, 4000.0, 0.0]
[info | PowerModels]: updated generator 1 cost function with order 2 to a function of order 3: [0.0, 1400.0, 0.0]
[info | PowerModels]: updated generator 5 cost function with order 2 to a function of order 3: [0.0, 1000.0, 0.0]
[info | PowerModels]: updated generator 2 cost function with order 2 to a function of order 3: [0.0, 1500.0, 0.0]
[info | PowerModels]: updated generator 3 cost function with order 2 to a function of order 3: [0.0, 3000.0, 0.0]
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...: 155
Number of nonzeros in inequality constraint Jacobian.: 36
Number of nonzeros in Lagrangian Hessian.............: 240
Total number of variables............................: 44
variables with only lower bounds: 0
variables with lower and upper bounds: 39
variables with only upper bounds: 0
Total number of equality constraints.................: 35
Total number of inequality constraints...............: 18
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 6
inequality constraints with only upper bounds: 12
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0059989e+02 3.99e+00 2.88e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 8.3066346e+03 2.47e+00 1.01e+02 -1.0 2.78e+00 - 4.11e-03 3.82e-01h 1
2 6.7182484e+03 2.36e+00 9.62e+01 -1.0 1.60e+01 - 7.37e-02 4.44e-02f 1
3 6.6691211e+03 2.30e+00 9.34e+01 -1.0 1.30e+01 - 4.95e-01 2.40e-02f 1
4 6.5744238e+03 2.04e+00 8.25e+01 -1.0 1.29e+01 - 3.67e-01 1.12e-01f 2
5 6.8265929e+03 1.80e+00 7.10e+01 -1.0 1.23e+01 - 8.72e-01 1.20e-01h 2
6 8.8541540e+03 1.08e+00 4.20e+01 -1.0 9.14e+00 - 5.92e-01 4.00e-01h 1
7 1.0572759e+04 8.62e-01 3.58e+01 -1.0 2.94e+00 - 4.94e-01 2.00e-01h 1
8 1.7308372e+04 3.63e-02 1.47e+01 -1.0 2.41e+00 - 7.66e-01 9.58e-01h 1
9 1.7572883e+04 1.33e-02 1.10e+00 -1.0 2.11e+00 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 1.7590632e+04 1.69e-03 1.61e-01 -1.0 5.03e-01 - 1.00e+00 1.00e+00h 1
11 1.7558725e+04 5.24e-03 5.03e-01 -2.5 6.03e-01 - 8.35e-01 9.36e-01f 1
12 1.7553111e+04 3.34e-03 4.11e+00 -2.5 2.84e-01 - 1.00e+00 8.20e-01h 1
13 1.7552956e+04 3.24e-05 1.26e-02 -2.5 6.35e-02 - 1.00e+00 1.00e+00h 1
14 1.7551990e+04 1.35e-05 1.09e+00 -3.8 2.53e-02 - 1.00e+00 9.25e-01h 1
15 1.7551938e+04 4.46e-08 1.23e-02 -3.8 7.00e-03 - 1.00e+00 1.00e+00f 1
16 1.7551940e+04 2.36e-10 2.06e-04 -3.8 3.84e-04 - 1.00e+00 1.00e+00h 1
17 1.7551892e+04 1.75e-07 2.11e-01 -5.7 2.49e-03 - 1.00e+00 9.68e-01f 1
18 1.7551891e+04 6.82e-11 3.10e-05 -5.7 2.38e-04 - 1.00e+00 1.00e+00f 1
19 1.7551891e+04 1.60e-14 6.53e-10 -5.7 5.20e-07 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
20 1.7551891e+04 6.34e-12 3.03e-07 -8.6 3.52e-05 - 1.00e+00 1.00e+00f 1
21 1.7551891e+04 1.82e-14 2.21e-12 -8.6 3.34e-08 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 21
(scaled) (unscaled)
Objective...............: 4.3879727096486897e+02 1.7551890838594758e+04
Dual infeasibility......: 2.2126691395576679e-12 8.8506765582306717e-11
Constraint violation....: 1.3544720900426910e-14 1.8207657603852567e-14
Variable bound violation: 2.9463905093507492e-08 2.9463905093507492e-08
Complementarity.........: 2.5059076302144483e-09 1.0023630520857793e-07
Overall NLP error.......: 2.5059076302144483e-09 1.0023630520857793e-07
Number of objective function evaluations = 28
Number of objective gradient evaluations = 22
Number of equality constraint evaluations = 28
Number of inequality constraint evaluations = 28
Number of equality constraint Jacobian evaluations = 22
Number of inequality constraint Jacobian evaluations = 22
Number of Lagrangian Hessian evaluations = 21
Total seconds in IPOPT = 2.576
EXIT: Optimal Solution Found.
Summary
case........: data/pglib_opf_case5_pjm.m
variables...: 44
constraints.: 53
feasible....: true
cost........: 17552
total time..: 32.540064096450806
data time.: 8.220144987106323
build time: 17.367176055908203
solve time: 6.952743053436279
Dict{String, Any} with 9 entries:
"cost" => 17551.9
"variables" => 44
"constraints" => 53
"case" => "data/pglib_opf_case5_pjm.m"
"time_total" => 32.5401
"time_build" => 17.3672
"time_solve" => 6.95274
"time_data" => 8.22014
"feasible" => true
The second call is the one that @ccoffrin reports as the timing result:
julia> solve_opf("data/pglib_opf_case57_ieee.m")
[info | PowerModels]: removing 3 cost terms from generator 4: Float64[]
[info | PowerModels]: removing 1 cost terms from generator 1: [1696.0624, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 5: [3044.1037, 0.0]
[info | PowerModels]: removing 3 cost terms from generator 2: Float64[]
[info | PowerModels]: removing 3 cost terms from generator 6: Float64[]
[info | PowerModels]: removing 1 cost terms from generator 7: [3718.8979000000004, 0.0]
[info | PowerModels]: removing 1 cost terms from generator 3: [3407.5557000000003, 0.0]
[info | PowerModels]: updated generator 4 cost function with order 0 to a function of order 3: [0.0, 0.0, 0.0]
[info | PowerModels]: updated generator 1 cost function with order 2 to a function of order 3: [0.0, 1696.0624, 0.0]
[info | PowerModels]: updated generator 5 cost function with order 2 to a function of order 3: [0.0, 3044.1037, 0.0]
[info | PowerModels]: updated generator 2 cost function with order 0 to a function of order 3: [0.0, 0.0, 0.0]
[info | PowerModels]: updated generator 6 cost function with order 0 to a function of order 3: [0.0, 0.0, 0.0]
[info | PowerModels]: updated generator 7 cost function with order 2 to a function of order 3: [0.0, 3718.8979000000004, 0.0]
[info | PowerModels]: updated generator 3 cost function with order 2 to a function of order 3: [0.0, 3407.5557000000003, 0.0]
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...: 1935
Number of nonzeros in inequality constraint Jacobian.: 480
Number of nonzeros in Lagrangian Hessian.............: 3167
Total number of variables............................: 445
variables with only lower bounds: 0
variables with lower and upper bounds: 388
variables with only upper bounds: 0
Total number of equality constraints.................: 435
Total number of inequality constraints...............: 240
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 80
inequality constraints with only upper bounds: 160
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.0503586e+02 3.76e+00 1.91e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.0716542e+04 2.65e+00 1.19e+02 -1.0 3.70e+00 - 2.74e-03 2.95e-01h 1
2 1.3985380e+04 2.15e+00 9.71e+01 -1.0 7.02e+00 - 5.91e-02 1.90e-01h 1
3 1.4674768e+04 2.08e+00 7.81e+01 -1.0 3.90e+00 - 3.67e-01 2.99e-02h 1
4 2.0339343e+04 1.57e+00 5.90e+01 -1.0 4.22e+00 - 7.79e-01 2.46e-01h 1
5 3.5192035e+04 2.16e-01 4.78e+01 -1.0 4.88e+00 - 7.60e-01 8.63e-01h 1
6 3.6353626e+04 1.02e-01 2.36e+01 -1.0 1.06e+01 - 7.33e-01 5.28e-01h 1
7 3.7555282e+04 1.23e-02 1.55e+00 -1.0 6.75e+00 - 1.00e+00 1.00e+00h 1
8 3.7614438e+04 3.66e-04 1.89e-02 -1.0 1.26e+00 - 1.00e+00 1.00e+00h 1
9 3.7591305e+04 4.39e-04 3.47e+00 -2.5 1.26e+00 - 8.64e-01 1.00e+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 3.7590573e+04 8.44e-05 2.30e+00 -2.5 1.37e-01 - 9.44e-01 8.36e-01h 1
11 3.7590149e+04 1.43e-05 1.35e-03 -2.5 2.81e-02 - 1.00e+00 1.00e+00f 1
12 3.7589396e+04 2.20e-06 1.91e-02 -3.8 4.24e-02 - 9.89e-01 1.00e+00h 1
13 3.7589383e+04 8.08e-09 2.05e-06 -3.8 5.69e-04 - 1.00e+00 1.00e+00h 1
14 3.7589339e+04 7.59e-09 1.23e-06 -5.7 2.32e-03 - 1.00e+00 1.00e+00h 1
15 3.7589338e+04 1.85e-12 2.72e-10 -8.6 2.92e-05 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 15
(scaled) (unscaled)
Objective...............: 1.0107655336327791e+03 3.7589338204193220e+04
Dual infeasibility......: 2.7159429322152497e-10 1.0100314467135135e-08
Constraint violation....: 1.8536838730653926e-12 1.8536838730653926e-12
Variable bound violation: 2.4448082225347889e-08 2.4448082225347889e-08
Complementarity.........: 2.6288234097387554e-09 9.7763258579482989e-08
Overall NLP error.......: 2.6288234097387554e-09 9.7763258579482989e-08
Number of objective function evaluations = 16
Number of objective gradient evaluations = 16
Number of equality constraint evaluations = 16
Number of inequality constraint evaluations = 16
Number of equality constraint Jacobian evaluations = 16
Number of inequality constraint Jacobian evaluations = 16
Number of Lagrangian Hessian evaluations = 15
Total seconds in IPOPT = 7.702
EXIT: Optimal Solution Found.
Summary
case........: data/pglib_opf_case57_ieee.m
variables...: 448
constraints.: 675
feasible....: true
cost........: 37589
total time..: 30.914313077926636
data time.: 0.10789203643798828
build time: 20.57877492904663
solve time: 10.227646112442017
Dict{String, Any} with 9 entries:
"cost" => 37589.3
"variables" => 448
"constraints" => 675
"case" => "data/pglib_opf_case57_ieee.m"
"time_total" => 30.9143
"time_build" => 20.5788
"time_solve" => 10.2276
"time_data" => 0.107892
"feasible" => true
The total time is 3.09e+01 seconds, which is close to @ccoffrin's latest run of 3.25e+01 seconds, so a 2018 MacBook is a reasonable comparison for his hardware.
julia> using ProfileView
julia> @profview solve_opf("data/pglib_opf_case57_ieee.m")
Here's the profile: 2/3 of the time is spent building the problem.
For some research papers with François Pacaud and Sungho Shin, we rely on an Artifacts.toml to easily test our optimization solvers on AC-OPF problems. It could also be useful here. The content of the Artifacts.toml is:
[PGLib_opf]
git-tree-sha1 = "0e8968a89b6ad43910a8eda4ec30656add35cf91"
lazy = false
[[PGLib_opf.download]]
sha256 = "f1421ce22f0a7b9de8a8b2111776b496348220192ad24aace392c3bf608706c2"
url = "https://github.com/power-grid-lib/pglib-opf/archive/refs/tags/v23.07.tar.gz"
In our tests, we can afterward do:
using Artifacts
pglib_path = joinpath(artifact"PGLib_opf", "pglib-opf-23.07")
using PowerModels
pglib_opf_case78484_epigrids = ac_power_model(joinpath(pglib_path, "pglib_opf_case78484_epigrids.m"))
Copied from Optimization example. I'll make a PR once CI is merged
#!/usr/bin/env julia
###### AC-OPF using ADNLPModels ######
#
# implementation reference: https://juliasmoothoptimizers.github.io/ADNLPModels.jl/stable/tutorial/
# other AD libraries can be considered: https://juliasmoothoptimizers.github.io/ADNLPModels.jl/stable/
#
import PowerModels
import Symbolics
import ADNLPModels
import ConcreteStructs
import NLPModelsIpopt
ConcreteStructs.@concrete struct DataRepresentation
data
ref
var_lookup
var_init
var_lb
var_ub
ref_gen_idxs
lookup_pg
lookup_qg
lookup_va
lookup_vm
lookup_lij
lookup_p_lij
lookup_q_lij
cost_arrs
f_bus
t_bus
ref_bus_idxs
ref_buses_idxs
ref_bus_gens
ref_bus_arcs
ref_branch_idxs
ref_arcs_from
ref_arcs_to
p_idxmap
q_idxmap
bus_pd
bus_qd
bus_gs
bus_bs
br_g
br_b
br_tr
br_ti
br_ttm
br_g_fr
br_b_fr
br_g_to
br_b_to
end
function load_and_setup_data(file_name)
data = PowerModels.parse_file(file_name)
PowerModels.standardize_cost_terms!(data, order=2)
PowerModels.calc_thermal_limits!(data)
ref = PowerModels.build_ref(data)[:it][:pm][:nw][0]
# Some data munging to type-stable forms
var_lookup = Dict{String,Int}()
var_init = Float64[]
var_lb = Float64[]
var_ub = Float64[]
var_idx = 1
for (i,bus) in ref[:bus]
push!(var_init, 0.0) #va
push!(var_lb, -Inf)
push!(var_ub, Inf)
var_lookup["va_$(i)"] = var_idx
var_idx += 1
push!(var_init, 1.0) #vm
push!(var_lb, bus["vmin"])
push!(var_ub, bus["vmax"])
var_lookup["vm_$(i)"] = var_idx
var_idx += 1
end
for (i,gen) in ref[:gen]
push!(var_init, 0.0) #pg
push!(var_lb, gen["pmin"])
push!(var_ub, gen["pmax"])
var_lookup["pg_$(i)"] = var_idx
var_idx += 1
push!(var_init, 0.0) #qg
push!(var_lb, gen["qmin"])
push!(var_ub, gen["qmax"])
var_lookup["qg_$(i)"] = var_idx
var_idx += 1
end
for (l,i,j) in ref[:arcs]
branch = ref[:branch][l]
push!(var_init, 0.0) #p
push!(var_lb, -branch["rate_a"])
push!(var_ub, branch["rate_a"])
var_lookup["p_$(l)_$(i)_$(j)"] = var_idx
var_idx += 1
push!(var_init, 0.0) #q
push!(var_lb, -branch["rate_a"])
push!(var_ub, branch["rate_a"])
var_lookup["q_$(l)_$(i)_$(j)"] = var_idx
var_idx += 1
end
@assert var_idx == length(var_init)+1
ref_gen_idxs = [i for i in keys(ref[:gen])]
lookup_pg = Dict{Int,Int}()
lookup_qg = Dict{Int,Int}()
lookup_va = Dict{Int,Int}()
lookup_vm = Dict{Int,Int}()
lookup_lij = Tuple{Int,Int,Int}[]
lookup_p_lij = Int[]
lookup_q_lij = Int[]
cost_arrs = Dict{Int,Vector{Float64}}()
for (i,gen) in ref[:gen]
lookup_pg[i] = var_lookup["pg_$(i)"]
lookup_qg[i] = var_lookup["qg_$(i)"]
cost_arrs[i] = gen["cost"]
end
for (i,bus) in ref[:bus]
lookup_va[i] = var_lookup["va_$(i)"]
lookup_vm[i] = var_lookup["vm_$(i)"]
end
for (l,i,j) in ref[:arcs]
push!(lookup_lij, (l,i,j))
push!(lookup_p_lij,var_lookup["p_$(l)_$(i)_$(j)"])
push!(lookup_q_lij,var_lookup["q_$(l)_$(i)_$(j)"])
end
f_bus = Dict{Int,Int}()
t_bus = Dict{Int,Int}()
for (l,branch) in ref[:branch]
f_bus[l] = branch["f_bus"]
t_bus[l] = branch["t_bus"]
end
ref_bus_idxs = [i for i in keys(ref[:bus])]
ref_buses_idxs = [i for i in keys(ref[:ref_buses])]
ref_bus_gens = ref[:bus_gens]
ref_bus_arcs = ref[:bus_arcs]
ref_branch_idxs = [i for i in keys(ref[:branch])]
ref_arcs_from = ref[:arcs_from]
ref_arcs_to = ref[:arcs_to]
p_idxmap = Dict(lookup_lij[i] => lookup_p_lij[i] for i in 1:length(lookup_lij))
q_idxmap = Dict(lookup_lij[i] => lookup_q_lij[i] for i in 1:length(lookup_lij))
bus_pd = Dict(i => 0.0 for (i,bus) in ref[:bus])
bus_qd = Dict(i => 0.0 for (i,bus) in ref[:bus])
bus_gs = Dict(i => 0.0 for (i,bus) in ref[:bus])
bus_bs = Dict(i => 0.0 for (i,bus) in ref[:bus])
for (i,bus) in ref[:bus]
if length(ref[:bus_loads][i]) > 0
bus_pd[i] = sum(ref[:load][l]["pd"] for l in ref[:bus_loads][i])
bus_qd[i] = sum(ref[:load][l]["qd"] for l in ref[:bus_loads][i])
end
if length(ref[:bus_shunts][i]) > 0
bus_gs[i] = sum(ref[:shunt][s]["gs"] for s in ref[:bus_shunts][i])
bus_bs[i] = sum(ref[:shunt][s]["bs"] for s in ref[:bus_shunts][i])
end
end
br_g = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_b = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_tr = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_ti = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_ttm = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_g_fr = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_b_fr = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_g_to = Dict(i => 0.0 for (i,branch) in ref[:branch])
br_b_to = Dict(i => 0.0 for (i,branch) in ref[:branch])
for (i,branch) in ref[:branch]
g, b = PowerModels.calc_branch_y(branch)
tr, ti = PowerModels.calc_branch_t(branch)
br_g[i] = g
br_b[i] = b
br_tr[i] = tr
br_ti[i] = ti
br_ttm[i] = tr^2 + ti^2
br_g_fr[i] = branch["g_fr"]
br_b_fr[i] = branch["b_fr"]
br_g_to[i] = branch["g_to"]
br_b_to[i] = branch["b_to"]
end
return DataRepresentation(
data,
ref,
var_lookup,
var_init,
var_lb,
var_ub,
ref_gen_idxs,
lookup_pg,
lookup_qg,
lookup_va,
lookup_vm,
lookup_lij,
lookup_p_lij,
lookup_q_lij,
cost_arrs,
f_bus,
t_bus,
ref_bus_idxs,
ref_buses_idxs,
ref_bus_gens,
ref_bus_arcs,
ref_branch_idxs,
ref_arcs_from,
ref_arcs_to,
p_idxmap,
q_idxmap,
bus_pd,
bus_qd,
bus_gs,
bus_bs,
br_g,
br_b,
br_tr,
br_ti,
br_ttm,
br_g_fr,
br_b_fr,
br_g_to,
br_b_to)
end
function build_opf_optimization_prob(dataset)
(;data,
ref,
var_lookup,
var_init,
var_lb,
var_ub,
ref_gen_idxs,
lookup_pg,
lookup_qg,
lookup_va,
lookup_vm,
lookup_lij,
lookup_p_lij,
lookup_q_lij,
cost_arrs,
f_bus,
t_bus,
ref_bus_idxs,
ref_buses_idxs,
ref_bus_gens,
ref_bus_arcs,
ref_branch_idxs,
ref_arcs_from,
ref_arcs_to,
p_idxmap,
q_idxmap,
bus_pd,
bus_qd,
bus_gs,
bus_bs,
br_g,
br_b,
br_tr,
br_ti,
br_ttm,
br_g_fr,
br_b_fr,
br_g_to,
br_b_to) = dataset
function opf_objective(x)
cost = 0.0
for i in ref_gen_idxs
pg = x[lookup_pg[i]]
_cost_arr = cost_arrs[i]
cost += _cost_arr[1]*pg^2 + _cost_arr[2]*pg + _cost_arr[3]
end
return cost
end
function opf_constraints!(ret, x)
offsetidx = 0
# va_con
for (reti,i) in enumerate(ref_buses_idxs)
ret[reti + offsetidx] = x[lookup_va[i]]
end
offsetidx += length(ref_buses_idxs)
# @constraint(model,
# sum(p[a] for a in ref[:bus_arcs][i]) ==
# sum(pg[g] for g in ref_bus_gens[i]) -
# sum(load["pd"] for load in bus_loads) -
# sum(shunt["gs"] for shunt in bus_shunts)*x[lookup_vm[i]]^2
# )
for (reti,i) in enumerate(ref_bus_idxs)
ret[reti + offsetidx] = sum(x[lookup_pg[j]] for j in ref_bus_gens[i]; init=0.0) -
bus_pd[i] -
bus_gs[i]*x[lookup_vm[i]]^2 -
sum(x[p_idxmap[a]] for a in ref_bus_arcs[i])
end
offsetidx += length(ref_bus_idxs)
# @constraint(model,
# sum(q[a] for a in ref[:bus_arcs][i]) ==
# sum(x[lookup_qg[g]] for g in ref_bus_gens[i]) -
# sum(load["qd"] for load in bus_loads) +
# sum(shunt["bs"] for shunt in bus_shunts)*x[lookup_vm[i]]^2
# )
for (reti,i) in enumerate(ref_bus_idxs)
ret[reti + offsetidx] = sum(x[lookup_qg[j]] for j in ref_bus_gens[i]; init=0.0) -
bus_qd[i] +
bus_bs[i]*x[lookup_vm[i]]^2 -
sum(x[q_idxmap[a]] for a in ref_bus_arcs[i])
end
offsetidx += length(ref_bus_idxs)
# @NLconstraint(model, p_fr == (g+g_fr)/ttm*vm_fr^2 + (-g*tr+b*ti)/ttm*(vm_fr*vm_to*cos(va_fr-va_to)) + (-b*tr-g*ti)/ttm*(vm_fr*vm_to*sin(va_fr-va_to)) )
# power_flow_p_from_con =
for (reti,(l,i,j)) in enumerate(ref_arcs_from)
ret[reti + offsetidx] = (br_g[l]+br_g_fr[l])/br_ttm[l]*x[lookup_vm[f_bus[l]]]^2 +
(-br_g[l]*br_tr[l]+br_b[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[f_bus[l]]]*x[lookup_vm[t_bus[l]]]*cos(x[lookup_va[f_bus[l]]]-x[lookup_va[t_bus[l]]])) +
(-br_b[l]*br_tr[l]-br_g[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[f_bus[l]]]*x[lookup_vm[t_bus[l]]]*sin(x[lookup_va[f_bus[l]]]-x[lookup_va[t_bus[l]]])) -
x[p_idxmap[(l,i,j)]]
end
offsetidx += length(ref_arcs_from)
# @NLconstraint(model, p_to == (g+g_to)*vm_to^2 + (-g*tr-b*ti)/ttm*(vm_to*vm_fr*cos(va_to-va_fr)) + (-b*tr+g*ti)/ttm*(vm_to*vm_fr*sin(va_to-va_fr)) )
# power_flow_p_to_con
for (reti,(l,i,j)) in enumerate(ref_arcs_to)
ret[reti + offsetidx] = (br_g[l]+br_g_to[l])*x[lookup_vm[t_bus[l]]]^2 +
(-br_g[l]*br_tr[l]-br_b[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[t_bus[l]]]*x[lookup_vm[f_bus[l]]]*cos(x[lookup_va[t_bus[l]]]-x[lookup_va[f_bus[l]]])) +
(-br_b[l]*br_tr[l]+br_g[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[t_bus[l]]]*x[lookup_vm[f_bus[l]]]*sin(x[lookup_va[t_bus[l]]]-x[lookup_va[f_bus[l]]])) -
x[p_idxmap[(l,i,j)]]
end
offsetidx += length(ref_arcs_to)
# @NLconstraint(model, q_fr == -(b+b_fr)/ttm*vm_fr^2 - (-b*tr-g*ti)/ttm*(vm_fr*vm_to*cos(va_fr-va_to)) + (-g*tr+b*ti)/ttm*(vm_fr*vm_to*sin(va_fr-va_to)) )
# power_flow_q_from_con
for (reti,(l,i,j)) in enumerate(ref_arcs_from)
ret[reti + offsetidx] = -(br_b[l]+br_b_fr[l])/br_ttm[l]*x[lookup_vm[f_bus[l]]]^2 -
(-br_b[l]*br_tr[l]-br_g[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[f_bus[l]]]*x[lookup_vm[t_bus[l]]]*cos(x[lookup_va[f_bus[l]]]-x[lookup_va[t_bus[l]]])) +
(-br_g[l]*br_tr[l]+br_b[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[f_bus[l]]]*x[lookup_vm[t_bus[l]]]*sin(x[lookup_va[f_bus[l]]]-x[lookup_va[t_bus[l]]])) -
x[q_idxmap[(l,i,j)]]
end
offsetidx += length(ref_arcs_from)
# @NLconstraint(model, q_to == -(b+b_to)*vm_to^2 - (-b*tr+g*ti)/ttm*(vm_to*vm_fr*cos(va_to-va_fr)) + (-g*tr-b*ti)/ttm*(vm_to*vm_fr*sin(va_to-va_fr)) )
# power_flow_q_to_con
for (reti,(l,i,j)) in enumerate(ref_arcs_to)
ret[reti + offsetidx] = -(br_b[l]+br_b_to[l])*x[lookup_vm[t_bus[l]]]^2 -
(-br_b[l]*br_tr[l]+br_g[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[t_bus[l]]]*x[lookup_vm[f_bus[l]]]*cos(x[lookup_va[t_bus[l]]]-x[lookup_va[f_bus[l]]])) +
(-br_g[l]*br_tr[l]-br_b[l]*br_ti[l])/br_ttm[l]*(x[lookup_vm[t_bus[l]]]*x[lookup_vm[f_bus[l]]]*sin(x[lookup_va[t_bus[l]]]-x[lookup_va[f_bus[l]]])) -
x[q_idxmap[(l,i,j)]]
end
offsetidx += length(ref_arcs_to)
# @constraint(model, va_fr - va_to <= branch["angmax"])
# @constraint(model, va_fr - va_to >= branch["angmin"])
# power_flow_vad_con
for (reti,(l,i,j)) in enumerate(ref_arcs_from)
ret[reti + offsetidx] = x[lookup_va[f_bus[l]]] - x[lookup_va[t_bus[l]]]
end
offsetidx += length(ref_arcs_from)
# @constraint(model, p_fr^2 + q_fr^2 <= branch["rate_a"]^2)
# power_flow_mva_from_con
for (reti,(l,i,j)) in enumerate(ref_arcs_from)
ret[reti + offsetidx] = x[p_idxmap[(l,i,j)]]^2 + x[q_idxmap[(l,i,j)]]^2
end
offsetidx += length(ref_arcs_from)
# @constraint(model, p_to^2 + q_to^2 <= branch["rate_a"]^2)
# power_flow_mva_to_con
for (reti,(l,i,j)) in enumerate(ref_arcs_to)
ret[reti + offsetidx] = x[p_idxmap[(l,i,j)]]^2 + x[q_idxmap[(l,i,j)]]^2
end
offsetidx += length(ref_arcs_to)
@assert offsetidx == length(ret)
return ret
end
con_lbs = Float64[]
con_ubs = Float64[]
#@constraint(model, va[i] == 0)
for (i,bus) in ref[:ref_buses]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_balance_p_con
for (i,bus) in ref[:bus]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_balance_q_con
for (i,bus) in ref[:bus]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_flow_p_from_con
for (l,i,j) in ref[:arcs_from]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_flow_p_to_con
for (l,i,j) in ref[:arcs_to]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_flow_q_from_con
for (l,i,j) in ref[:arcs_from]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_flow_q_to_con
for (l,i,j) in ref[:arcs_to]
push!(con_lbs, 0.0)
push!(con_ubs, 0.0)
end
#power_flow_vad_con
for (l,i,j) in ref[:arcs_from]
branch = ref[:branch][l]
push!(con_lbs, branch["angmin"])
push!(con_ubs, branch["angmax"])
end
#power_flow_mva_from_con
for (l,i,j) in ref[:arcs_from]
branch = ref[:branch][l]
push!(con_lbs, -Inf)
push!(con_ubs, branch["rate_a"]^2)
end
#power_flow_mva_to_con
for (l,i,j) in ref[:arcs_to]
branch = ref[:branch][l]
push!(con_lbs, -Inf)
push!(con_ubs, branch["rate_a"]^2)
end
nlp = ADNLPModels.ADNLPModel!(opf_objective, var_init, var_lb, var_ub, opf_constraints!, con_lbs, con_ubs, backend = :optimized)
return (
nlp = nlp,
con_lbs = con_lbs,
con_ubs = con_ubs,
)
end
function solve_opf(file_name)
start = time()
dataset = load_and_setup_data(file_name);
data_load_time = time() - start
model = build_opf_optimization_prob(dataset)
nlp = model.nlp
model_build_time = time() - start - data_load_time
output = NLPModelsIpopt.ipopt(nlp)
cost = output.objective
feasible = (output.primal_feas <= 1e-6)
total_time = time() - start
solve_time = total_time - model_build_time - data_load_time
model_variables = length(dataset.var_init)
model_constraints = length(model.con_lbs)
println("variables: $(model_variables), $(length(dataset.var_lb)), $(length(dataset.var_ub))")
println("constraints: $(model_constraints), $(length(model.con_lbs)), $(length(model.con_ubs))")
println("")
println("\033[1mSummary\033[0m")
println(" case........: $(file_name)")
println(" variables...: $(model_variables)")
println(" constraints.: $(model_constraints)")
println(" feasible....: $(feasible)")
println(" cost........: $(round(Int, cost))")
println(" total time..: $(total_time)")
println(" data time.: $(data_load_time)")
println(" build time: $(model_build_time)")
println(" solve time: $(solve_time)")
println("")
return Dict(
"case" => file_name,
"variables" => model_variables,
"constraints" => model_constraints,
"feasible" => feasible,
"cost" => cost,
"time_total" => total_time,
"time_data" => data_load_time,
"time_build" => model_build_time,
"time_solve" => solve_time,
"solution" => Dict(k => output.solution[v] for (k, v) in dataset.var_lookup),
)
end
if !isinteractive()
solve_opf("$(@__DIR__)/../data/pglib_opf_case5_pjm.m")
end
This could be a topic for standardization across the packages. Key points, per an email with @ccoffrin:
You are correct that 20% overall improvement represents a bigger improvement (often around 2x more) on just the derivative computations. To get a better handle on this, instead of switching to HSL, is there a way we can add derivative callback time tracking into the information that is reported in the rosetta-opf scripts?
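One way to add that reporting is to wrap each derivative callback in a thin timing layer that accumulates wall time across calls. A minimal, language-agnostic sketch of the idea in Python; the callback names `eval_grad` and `eval_jac` are hypothetical stand-ins for whatever oracles the AD backend actually exposes:

```python
import time

class TimedCallback:
    """Wraps a callback and accumulates total wall time and call count."""

    def __init__(self, fn):
        self.fn = fn
        self.total_time = 0.0
        self.calls = 0

    def __call__(self, *args):
        start = time.perf_counter()
        result = self.fn(*args)
        self.total_time += time.perf_counter() - start
        self.calls += 1
        return result

# Hypothetical derivative callbacks; a real script would wrap the
# gradient/Jacobian/Hessian oracles before handing them to the solver.
eval_grad = TimedCallback(lambda x: [2.0 * xi for xi in x])
eval_jac = TimedCallback(lambda x: [[1.0] * len(x)])

for _ in range(10):
    eval_grad([1.0, 2.0])
    eval_jac([1.0, 2.0])

# Derivative time would be reported alongside data/build/solve times.
deriv_time = eval_grad.total_time + eval_jac.total_time
```

This mirrors what CasADi already reports via `t_wall_nlp_grad_f`, `t_wall_nlp_jac_g`, etc.; each framework would need its own hook point for the wrapper.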
Update this example to use the more recent "solve_opf" function call style.
To show that the macros don't do anything special.
It's now Optimization.jl and we need OptimizationMOI.jl.
Working on this now.
Discussed with @ccoffrin that we should add a casadi benchmark.
What I will do is set up Julia to export a problem as JSON, which we can then read into Python.
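The Python side of that pipeline could be as simple as loading the exported arrays back into plain lists. A minimal sketch, assuming a hypothetical schema: the field names below mirror the `var_init`/`var_lb`/`var_ub`/`con_lbs`/`con_ubs` arrays used in the scripts here, but the actual export format is not committed to:

```python
import json

# Hypothetical export from the Julia side; the real schema is TBD.
exported = json.dumps({
    "var_init": [0.0, 1.0],
    "var_lb": [-1.0, 0.9],
    "var_ub": [1.0, 1.1],
    "con_lbs": [0.0],
    "con_ubs": [0.0],
})

# Round-trip back into plain Python lists, ready to pass to nlpsol's
# x0/lbx/ubx/lbg/ubg arguments.
problem = json.loads(exported)
assert len(problem["var_init"]) == len(problem["var_lb"]) == len(problem["var_ub"])
```

Keeping the export to flat arrays avoids any dependency on PowerModels data structures on the Python side.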
casadi has two ways of solving NLPs:
The Opti stack has a friendly interface, but it has a critical drawback: it does not support variable bounds (as far as I can tell; see casadi/casadi#2357).
The nlpsol method is less friendly, but it does support variable bounds.
Here's a basic example for when I come back to this.
import casadi

def opti_model():
    model = casadi.Opti()
    x = model.variable(4)
    model.set_initial(x, [1, 5, 5, 1])
    model.minimize(x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2])
    # Bounds are passed as constraints :(
    # https://github.com/casadi/casadi/issues/2357
    model.subject_to(x >= 1)
    model.subject_to(x <= 5)
    model.subject_to(x[0] * x[1] * x[2] * x[3] >= 25)
    model.subject_to(sum(x[i]**2 for i in range(4)) == 40)
    model.solver('ipopt')
    solution = model.solve()
    stats = solution.stats()
    return {
        'x': solution.value(x),
        'total_time': stats['t_wall_total'],
        'deriv_time':
            stats['t_wall_nlp_f'] +
            stats['t_wall_nlp_grad_f'] +
            stats['t_wall_nlp_g'] +
            stats['t_wall_nlp_jac_g'] +
            stats['t_wall_nlp_hess_l'],
    }

def nlp_solmodel():
    x = casadi.SX.sym('x', 4)
    model = {
        'x': x,
        'f': x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2],
        'g': casadi.vertcat(
            x[0] * x[1] * x[2] * x[3],
            sum(x[i]**2 for i in range(4)),
        ),
    }
    S = casadi.nlpsol('S', 'ipopt', model)
    solution = S(
        lbx=[1, 1, 1, 1],
        ubx=[5, 5, 5, 5],
        lbg=[25, 40],
        ubg=[casadi.inf, 40],
        x0=[1, 5, 5, 1],
    )
    return {'x': solution['x'].elements()}

opti_model()
nlp_solmodel()