Replies: 4 comments 1 reply
A GPU's total throughput decreases when it is unsaturated (underutilized). This
is different from a CPU, which speeds up or slows down almost linearly with
respect to the number of atoms in the simulation cell. For this reason, a GPU
can give almost the same speed as, or even be slower than, a single CPU core.
When I have to run MD simulations or relaxations of many small, different
systems, I use multiple CPU cores to distribute the jobs, or write batched
code in Python to fully saturate the GPU.
You can see the relevant figure in the supporting information of the SevenNet
JCTC paper.
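The multi-core CPU approach described above can be sketched with Python's standard `multiprocessing` module. Here `relax_one_system` is a hypothetical placeholder, not a SevenNet API: in practice it would build one small structure and run your calculator on it.

```python
from multiprocessing import Pool

def relax_one_system(system_id):
    """Hypothetical stand-in for one small relaxation/MD job.

    A real version would run the actual calculation for one small
    system; here it just returns a dummy number so the sketch is
    self-contained and runnable.
    """
    return system_id * system_id

if __name__ == "__main__":
    system_ids = list(range(8))          # eight small, independent systems
    with Pool(processes=4) as pool:      # spread them over 4 CPU cores
        results = pool.map(relax_one_system, system_ids)
    print(results)                       # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each small job fits comfortably on a single core, running several of them in parallel this way often beats feeding them one at a time to an underutilized GPU.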
-----------------------------------------------------------------
*Yutack Park*
Ph.D. Candidate
Materials Data and Informatics Lab.
Department of Materials Science and Engineering
Seoul National University
*E-mail:* ***@***.***
*Homepage:* http://mtcg.snu.ac.kr
-----------------------------------------------------------------
On Thu, Mar 20, 2025, at 10:21 PM, C. Thang Nguyen ***@***.***> wrote:
hi @YutackPark <https://github.com/YutackPark>,
May I ask a question about the performance of running MD in LAMMPS?
I ran a test with only 6 atoms in the system, using 1 GPU to run LAMMPS with a
SevenNet model, with the command
lmp -i input
It just runs 30,000 steps of NVT without any computes.
The total time is almost 15 minutes.
Is this amount of time reasonable? It feels very slow.
Do you have any tricks for deploying the model for use in LAMMPS?
You can use the same build. There is no CPU-specialized compile option at the
moment, and I don't think Intel will make one as the GPU era arrives...
On Thu, Mar 20, 2025, at 10:38 PM, C. Thang Nguyen ***@***.***> wrote:
To run with CPU only, do we need to compile a separate LAMMPS version,
or can we use the version compiled with GPU support?
Thanks.
hi @YutackPark Thank you so much for your guidance. I use Slurm for scheduling jobs, and even when I do not request a GPU resource, a GPU exists on the node and SevenNet automatically picks it up. Is there any way to "turn off" the GPU (or tell SevenNet to just use the CPU)? For example, by exporting some runtime variables?
For LAMMPS or Python, `export CUDA_VISIBLE_DEVICES=` (i.e., set it to an
empty value).
Alternatively, in Python you always have an appropriate `device` argument
to force 7net to use the CPU.
You can ask your admin to apply the above behavior automatically.
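In a Python script, the same effect as the shell `export` can be achieved by clearing the variable before any CUDA-aware library is imported (a minimal sketch; the comments about PyTorch describe its typical behavior):

```python
import os

# Hide all CUDA devices from this process. This must be set *before*
# importing torch (or any other CUDA-aware library), because device
# visibility is typically read once at initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# Anything imported after this point sees no GPUs and falls back to CPU;
# e.g. torch.cuda.is_available() would then return False.
```

In a Slurm batch script, putting `export CUDA_VISIBLE_DEVICES=` before the `lmp` or `python` invocation accomplishes the same thing for the whole job.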
hi @YutackPark,
May I ask a question about the performance of running MD in LAMMPS?
I ran a test with only 6 atoms in the system, using 1 GPU to run LAMMPS with a SevenNet model, with the command `lmp -i input`.
It just runs 30,000 steps of NVT without any computes.
The total time is almost 15 minutes.
Is this amount of time reasonable? It feels very slow.
Do you have any tricks for deploying the model for use in LAMMPS?
This is the output from LAMMPS: