dwave-greedy's People

Contributors

arcondello, joelpasvolsky, randomir

dwave-greedy's Issues

Performance regression?

I'm seeing a performance regression in steepest descent in Ocean 4.1 vs Ocean 3.5. With Ocean 3.5, the sample code runs in about 2.4s, whereas with Ocean 4.1 it runs in about 3.9s. Is this expected? From what I can tell, they're both using the same version of greedy, so it seems surprising.

from time import perf_counter

import dimod
import greedy

# Deterministic test problem: a 150-variable anti-crossing clique BQM.
bqm = dimod.generators.anti_crossing_clique(150)
sampler = greedy.SteepestDescentSolver()

def solve(bqm, sampler):
    results = sampler.sample(bqm, num_reads=50000)
    return results.first

tic = perf_counter()
first = solve(bqm, sampler)
toc = perf_counter()

print(toc - tic)

(The use of anti_crossing_clique as the test BQM is not significant; I just wanted something deterministic. I also saw similar differences with randomly generated BQMs.)
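To make the 2.4 s vs. 3.9 s comparison more robust against one-off noise, the single timing above could be replaced with a small repeated-timing harness (stdlib only). This is a sketch; in the issue above, `fn` would be `lambda: solve(bqm, sampler)`.

```python
from statistics import median
from time import perf_counter

def benchmark(fn, repeats=5):
    """Time `fn` several times and return the median wall-clock duration.

    The median of several runs damps one-off effects (imports, allocator
    warm-up, background load) that can skew a single measurement.
    """
    durations = []
    for _ in range(repeats):
        tic = perf_counter()
        fn()
        toc = perf_counter()
        durations.append(toc - tic)
    return median(durations)

# Trivial stand-in workload; substitute the real solve() call here.
elapsed = benchmark(lambda: sum(range(100_000)), repeats=3)
print(f"{elapsed:.6f}s")
```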

greedy as a sampleset post-processing tool

I'm using greedy to post-process a sampleset. It would be great if, when a sampleset is passed as initial_states, a copy of that same sampleset with greedy post-processing applied were returned.
The current behaviour constructs a new sampleset, which discards the rest of the metadata.

result = dimod.SampleSet.from_samples(

Desired use:

result = DWaveSampler().sample(bqm)
greedy_pp = greedy.SteepestDescentSolver().post_process(bqm, sampleset=result)
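The `post_process` method above is the desired API, not an existing one. As a stopgap under current Ocean, a thin wrapper could run the sampler seeded from the original sampleset and copy the metadata forward manually. This is a sketch: it assumes the sampler accepts `initial_states` (as greedy.SteepestDescentSolver does) and that both samplesets expose a dict-like `.info`, as dimod samplesets do.

```python
def post_process(sampler, bqm, sampleset):
    """Run `sampler` seeded from `sampleset`, then carry the original
    sampleset's metadata over to the new result.

    Assumes `sampler.sample` accepts an `initial_states` keyword and
    that `.info` on both samplesets is a plain dict.
    """
    processed = sampler.sample(bqm, initial_states=sampleset)
    # The new sampleset starts with its own info dict; merge the
    # original metadata in without clobbering any newly set keys.
    for key, value in sampleset.info.items():
        processed.info.setdefault(key, value)
    return processed
```

This only preserves `.info`; other attributes of the original sampleset (e.g. vartype, variable order) come from the new sampling run.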

Add option for continuous sampling

Keeping N samples in memory is expensive if we only want the k best ones!

The open question is the interface -- num_reads is synonymous with the number of samples returned in the sample set for all samplers in Ocean. Also, initial_states (even random ones) are expanded to num_reads input samples -- something we would also want to avoid in this case.

To retain behavior compatible with existing samplers, perhaps we could introduce a parameter like num_resample, together with resample_reduce_method (min/max). To support the k-best-samples use case, we would also need a parameter like num_samples.

So, something like:

ss = greedy.sample(bqm, num_reads=1, num_resample=1000, num_samples=3, resample_reduce_method='k-best')

Another take on this would be an async sampler interface, in which case a caller could pull as many samples as needed, all with minimal memory overhead!
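One way to realize the k-best use case without keeping all N samples in memory: stream samples one at a time and retain only the k lowest-energy ones in a bounded heap. Pure-Python sketch; `sample_stream` is a hypothetical iterable of (energy, sample) pairs standing in for a streaming/async sampler interface.

```python
import heapq

def k_best(sample_stream, k):
    """Consume an iterable of (energy, sample) pairs, keeping only the
    k lowest-energy entries in memory at any point.

    A max-heap of size k (energies negated) lets us discard any sample
    worse than the current k-th best in O(log k) per sample.
    """
    heap = []  # entries are (-energy, index, sample); index breaks ties
    for i, (energy, sample) in enumerate(sample_stream):
        if len(heap) < k:
            heapq.heappush(heap, (-energy, i, sample))
        elif -energy > heap[0][0]:  # i.e. energy < worst retained energy
            heapq.heapreplace(heap, (-energy, i, sample))
    # Return (energy, sample) pairs in ascending energy order.
    return sorted(((-neg, s) for neg, _, s in heap), key=lambda t: t[0])

stream = [(3.0, 'a'), (1.0, 'b'), (2.0, 'c'), (0.5, 'd')]
print(k_best(stream, 2))  # → [(0.5, 'd'), (1.0, 'b')]
```

Memory stays O(k) regardless of how many samples are streamed, which is exactly the property the continuous-sampling proposal is after.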
