dwavesystems / dwave-greedy

Greedy binary quadratic model solvers.
Home Page: https://docs.ocean.dwavesys.com/projects/greedy/
License: Apache License 2.0
I'm seeing a performance regression in steepest descent in Ocean 4.1 vs Ocean 3.5. With Ocean 3.5, the sample code runs in about 2.4s, whereas with Ocean 4.1 it runs in about 3.9s. Is this expected? From what I can tell, they're both using the same version of `greedy`, so it seems surprising.
```python
from time import perf_counter

import dimod
import greedy

# Deterministic test problem: a 150-variable anti-crossing clique BQM
bqm = dimod.generators.anti_crossing_clique(150)
sampler = greedy.SteepestDescentSolver()

def solve(bqm, sampler):
    results = sampler.sample(bqm, num_reads=50000)
    return results.first

tic = perf_counter()
first = solve(bqm, sampler)
toc = perf_counter()
print(toc - tic)
```
(Use of `anti_crossing_clique` as the test BQM is not significant; I just wanted something deterministic. I also saw similar differences with randomly generated BQMs.)
This is the same as dwavesystems/dimod#1169
Posting this more as an FYI than a request for help as it's not super high priority.
I'm using `greedy` to post-process a sampleset. It would be great if, when a sampleset is passed as `initial_states`, a copy of that same sampleset with greedy post-processing applied were returned. The current behaviour creates a new sampleset, which doesn't keep the rest of the metadata.
dwave-greedy/greedy/sampler.py, line 221 (at commit 75d5dcd)
Desired use:

```python
result = DWaveSampler().sample(bqm)
greedy_pp = greedy.SteepestDescentSolver().post_process(bqm, sampleset=result)
```
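In the meantime, one possible workaround is to re-sample with `initial_states` and merge the original sampleset's `.info` dict back onto the post-processed result (dimod's `SampleSet` exposes metadata via `.info`). This is only a sketch: the `with_metadata` helper is hypothetical, and `SimpleNamespace` objects stand in for real samplesets so the example runs without a sampler.

```python
from types import SimpleNamespace

def with_metadata(processed, original):
    # Hypothetical helper: merge the original sampleset's metadata (.info)
    # into the post-processed result, so downstream code still sees it.
    processed.info.update(original.info)
    return processed

# SimpleNamespace stand-ins for dimod.SampleSet objects (illustration only)
original = SimpleNamespace(info={"timing": {"qpu_access_time": 123}})
processed = SimpleNamespace(info={})
merged = with_metadata(processed, original)
```

A shallow merge like this would clobber keys the post-processing step also writes; a real implementation would need to decide which side wins.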
Keeping `N` samples in memory is expensive if we only want the `k` best ones!
Open question is the interface -- `num_reads` is synonymous with `num_samples` returned in the sample set for all samplers in Ocean. Also, `initial_states` (even for random states) are expanded to `num_reads` input samples -- something we would also want to avoid in this case.
To retain compatible behavior with existing samplers, perhaps we could introduce a parameter like `num_resample`. Also, `resample_reduce_method` (`min`/`max`). And to support the `k` best samples use case, we'll need a parameter like `num_samples`.
So, something like:

```python
ss = greedy.sample(bqm, num_reads=1, num_resample=1000, num_samples=3, resample_reduce_method='k-best')
```
Another take on this would be an async sampler interface, in which case a caller could pull as many samples as needed, all with minimal memory overhead!