Comments (3)
I don't quite understand what you're doing. What is dummy_context
for? And what does "GPU" mean? There's no platform with that name.
The CPU platform is internally parallelized, and by default uses all available cores. If you want to run many simulations in parallel with it, you should set the "Threads"
property to limit how many threads each one uses.
I assume from the name that this is a tiny system, just three alanines in vacuum? Running on a GPU, the time will be entirely dominated by kernel launch overhead. There may be little or no benefit to running multiple simulations in parallel in that case.
What hardware are you running on? GPUs in general aren't very good at running multiple jobs in parallel. Some are better than others.
openmmtools.integrators.LangevinIntegrator is not a good choice. Use openmm.LangevinMiddleIntegrator instead. It is much faster.
The first few steps of any simulation may take much longer than later ones as it compiles kernels and does other initialization work. You shouldn't start timing until you've completed about 10 steps or so on each context. In addition, just because step() returns, that doesn't mean the GPU has finished. The CPU and GPU can run asynchronously at the same time. To make sure it really has finished, you need to transfer data back from the GPU to the CPU, for example by calling getState().
Here is how the benchmarking script times a simulation (lines 99 to 108 in 71f4b3f): it runs some initial steps to make sure everything is initialized, then calls getState() to make sure those initial steps have completed, then takes the intended number of steps, and then calls getState() again to block until they have finished.
By 'GPU' I mean 'CUDA', sorry for the confusion. The dummy context was created simply because I am pretty new to OpenMM and have problems when I try to define a platform properly :(, sorry again for the confusion. Thank you for all your suggestions. I was not aware of these problems at all, and all of your points are very helpful!
What hardware are you running on?
I am running on a single Nvidia GeForce GTX TITAN Z and 96 CPU cores with 1 thread each.
The CPU platform is internally parallelized, and by default uses all available cores. If you want to run many simulations in parallel with it, you should set the "Threads" property to limit how many threads each one uses.
If this is the case, in what circumstances can I gain benefits by running multithreaded simulations on the CPU? Or would it be better just to use all the available cores for a single simulation and start another when it is done?
If this is the case, in what circumstances can I gain benefits by running multithreaded simulations on the CPU?
When simulating very small systems, it might be faster to run multiple simulations, each using a fraction of the cores. However you do it, make sure the total number of threads across all simulations doesn't exceed the number of cores.