
Comments (3)

peastman commented on June 10, 2024

I don't quite understand what you're doing. What is dummy_context for? And what does "GPU" mean? There's no platform with that name.

The CPU platform is internally parallelized, and by default uses all available cores. If you want to run many simulations in parallel with it, you should set the "Threads" property to limit how many threads each one uses.
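As a minimal sketch of limiting the CPU platform's thread count, assuming a `system` and `integrator` have already been created elsewhere:

    from openmm import Platform, Context

    cpu = Platform.getPlatformByName('CPU')
    # 'Threads' is a platform property; its value is passed as a string.
    # Here each Context is limited to 4 threads (a placeholder value).
    context = Context(system, integrator, cpu, {'Threads': '4'})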

I assume from the name that this is a tiny system, just three alanines in vacuum? Running on a GPU, the time will be entirely dominated by kernel launch overhead. There may be little or no benefit to running multiple simulations in parallel in that case.

What hardware are you running on? GPUs in general aren't very good at running multiple jobs in parallel. Some are better than others.

openmmtools.integrators.LangevinIntegrator is not a good choice. Use openmm.LangevinMiddleIntegrator instead. It is much faster.
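A minimal sketch of constructing it (the temperature, friction, and step size shown are just placeholder values):

    from openmm import LangevinMiddleIntegrator
    from openmm.unit import kelvin, picosecond, picoseconds

    integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)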

The first few steps of any simulation may take much longer than later ones as it compiles kernels and does other initialization work. You shouldn't start timing until you've completed about 10 steps or so on each context. In addition, just because step() returns that doesn't mean the GPU has finished. The CPU and GPU can run asynchronously at the same time. To make sure it really has finished, you need to transfer data back from the GPU to the CPU, for example by calling getState().

Here is how the benchmarking script times a simulation:

from datetime import datetime

def timeIntegration(context, steps, initialSteps):
    """Integrate a Context for a specified number of steps, then return how many seconds it took."""
    context.getIntegrator().step(initialSteps)  # Make sure everything is fully initialized
    context.getState(getEnergy=True)
    start = datetime.now()
    context.getIntegrator().step(steps)
    context.getState(getEnergy=True)
    end = datetime.now()
    elapsed = end-start
    return elapsed.seconds + elapsed.microseconds*1e-6

It runs some initial steps to make sure everything is initialized, then calls getState() to make sure those initial steps have completed, then takes the intended number of steps, and then calls getState() again to block until they have finished.
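A hypothetical usage sketch, assuming `context` is a Context that has already been created and had its positions set:

    elapsed = timeIntegration(context, 1000, 10)
    print('1000 steps took %.3f seconds' % elapsed)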


CheukHinHoJerry commented on June 10, 2024

By 'GPU' I mean 'CUDA', sorry for the confusion. The dummy context was created simply because I am pretty new to OpenMM and had problems when I tried to define a platform properly :(, sorry again for the confusion. Thank you for all your suggestions. I was not aware of these problems at all, and all of your points are very helpful!

What hardware are you running on?

I am running on a single Nvidia GeForce GTX TITAN Z and 96 CPU cores with 1 thread each.

The CPU platform is internally parallelized, and by default uses all available cores. If you want to run many simulations in parallel with it, you should set the "Threads" property to limit how many threads each one uses.

If this is the case, in what circumstances can I gain a benefit from running multithreaded on the CPU? Or would it be better for me to just use all the available cores for a single simulation and start another when it is done?


peastman commented on June 10, 2024

If this is the case, in what circumstances can I gain a benefit from running multithreaded on the CPU?

When simulating very small systems, it might be faster to run multiple simulations each using a fraction of the cores. However you do it, make sure the total number of threads between all simulations doesn't exceed the number of cores.
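As a minimal sketch (not taken from OpenMM itself), one way to do this is to run each simulation in its own process and cap its thread count with the 'Threads' property; `build_system_and_integrator()` below is a hypothetical helper standing in for your own setup code:

    import multiprocessing as mp
    from openmm import Platform, Context

    def run_one(index):
        # build_system_and_integrator() is a placeholder for your own setup code
        system, integrator, positions = build_system_and_integrator(index)
        cpu = Platform.getPlatformByName('CPU')
        # 12 worker processes x 8 threads each = 96 cores in total
        context = Context(system, integrator, cpu, {'Threads': '8'})
        context.setPositions(positions)
        integrator.step(10000)
        return context.getState(getEnergy=True).getPotentialEnergy()

    if __name__ == '__main__':
        with mp.Pool(12) as pool:
            energies = pool.map(run_one, range(12))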

