Computers Basel 2016
Biozentrum
Use the credentials from your credential handout to log into the Mac workstations.
Opening Matlab
We will be using the Matlab release R2015b. It should be visible on the desktop; otherwise, use Spotlight (the magnifying glass in the top right corner) to locate it.
Installing Dynamo
A package can be found in /Users/gast/workshop2016/dynamo
Just unpack it with this command inside a system terminal:
tar -xf /Users/gast/workshop2016/dynamo/dynamo_temp_1.1.182.tar -C /Users/gast/workshop2016/dynamo
Note that you need to use a system terminal, not the Matlab desktop. Inside a Matlab terminal, the syntax is:
!tar -xf /Users/gast/workshop2016/dynamo/dynamo_temp_1.1.182.tar -C /Users/gast/workshop2016/dynamo
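As a quick sanity check (a suggestion, not part of the original instructions; the expected layout is taken from the activation step below), you can verify in the system terminal that the unpacked package contains the activation script:

# the activation script used in the next section should now exist
ls /Users/gast/workshop2016/dynamo/dynamo_activate.m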
Opening Dynamo
After opening a Matlab session, you'll need to activate Dynamo in that session. Dynamo should be installed in /Users/gast/workshop2016/dynamo. In order to activate your local Dynamo version, type:
run /Users/gast/workshop2016/dynamo/dynamo_activate.m
Connecting to the instructor's file share
Some files will be made available by mounting the file system of the instructor's computer. You can mount this share by typing in Matlab
run /Users/gast/mountData
which will make the file share /Users/gast/mountedData visible on your workstation. You will need this in order to transfer our test data sets to your local machine.
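As a hedged example of such a transfer (testDataSet is a hypothetical name; use whatever folder the instructors announce), you can copy a data set from the mounted share in a system terminal:

# copy one data set from the instructor's share to the local workshop folder;
# "testDataSet" is a placeholder for the actual data set name
cp -R /Users/gast/mountedData/testDataSet /Users/gast/workshop2016/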
CSCS: Lugano
CSCS Lugano is the National Supercomputing Centre of Switzerland. Each account should be able to submit jobs to a single node equipped with a K20 GPU and four cores.
Connecting
First, you need to connect to the gateway node ela using your CSCS credentials from the credentials handout.
ssh -Y stud01@ela.cscs.ch
and then you can connect to the compute machine called daint; again, you will be asked to type in your credentials.
stud01@ela2:~> ssh -Y daint
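As an optional convenience (not part of the workshop instructions; it assumes OpenSSH 7.3 or later on your own machine), an entry like the following in ~/.ssh/config lets a single command hop through ela to daint:

# hypothetical ~/.ssh/config entry; "cscs-daint" is an arbitrary alias
Host cscs-daint
    HostName daint
    User stud01
    ProxyJump stud01@ela.cscs.ch
    ForwardX11 yes
    ForwardX11Trusted yes

With this entry in place, ssh cscs-daint replaces the two-step login above.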
Using Dynamo
We are using a slightly older version of Dynamo on the supercomputer GPUs for compatibility reasons.
- On the local machine
- tar your project in Dynamo (Dynamo wizard >> Tools >> Create a tarball)
- rsync -avr my_project.tar stud##@ela.cscs.ch:~/
- Also rsync your data to CSCS (a condensed sketch of these transfer steps follows the list below)
- Untar your Dynamo project
- You will need the Dynamo terminal for this:
- dynamo &
- dvuntar my_project
- On CSCS,
- type
salloc --gres=gpu:1
to get a node with a GPU. It can take some time until the system allocates you a node. You can allocate up to two nodes.
You can check the GPU on your node with:
srun nvidia-smi
- type
source ~/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh
to activate Dynamo in your shell.
- open Dynamo with dynamo &
- open your project and re-unfold it (make sure standalone GPU is selected and that your data is in the same relative location as on the local machine)
- Note
- if the graphical interface is too slow, you can use the command line instead:
- open a Dynamo console in your shell with dynamo x
- dvput my_project -destination system_gpu
- dvunfold my_project
- run your alignment by typing srun my_project.exe
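The transfer and submission steps above, condensed into one hedged shell sketch (my_project and the data folder name are placeholders; ## stands for your student number as in the handout):

# --- on the local machine, in a system terminal ---
rsync -avr my_project.tar stud##@ela.cscs.ch:~/   # project tarball created by the wizard
rsync -avr data stud##@ela.cscs.ch:~/             # "data" is a placeholder; keep the same relative path

# --- on daint, after logging in through ela ---
salloc --gres=gpu:1                                               # request a node with a GPU
source ~/bin/dynamoFlorida/dynamo_activate_linux_shipped_MCR.sh   # activate Dynamo
dynamo x                                                          # open a Dynamo console
# inside the Dynamo console:
#   dvuntar my_project
#   dvput my_project -destination system_gpu
#   dvunfold my_project
# back in the shell, outside the Dynamo console:
srun my_project.exe                                               # run the alignment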
Dynamo as standalone
We can use the system terminal as an equivalent of the Matlab terminal through the Dynamo standalone. This is an example of how to use it to create a phantom project like the one we created yesterday.
- open a Dynamo console by typing:
- dynamo x
in a Linux shell (you'll need to source the Dynamo activation script in that shell beforehand).
- create a tutorial project. For this, type inside the Dynamo console:
- dtutorial myTest -p ptest -M 128
- tune the project to run on a GPU
- dvput ptest -destination system_gpu
- unfold the project
- dvunfold ptest inside the Dynamo console
- run the project with srun
- srun ptest.exe in a terminal shell, i.e., not inside the Dynamo console
- when it finishes, the averages can also be accessed programmatically with the database tool. For instance, to access the last computed average and view it with dview, type:
- ddb ptest:a -v
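The same walkthrough condensed into one sequence (a sketch using the example names from this section; console commands appear as comments because they are typed inside the Dynamo console, not the shell):

# in a Linux shell where the Dynamo activation script has been sourced
dynamo x                                  # open a Dynamo console
# inside the Dynamo console:
#   dtutorial myTest -p ptest -M 128      (create the tutorial project)
#   dvput ptest -destination system_gpu   (target a standalone GPU)
#   dvunfold ptest                        (generates the executable ptest.exe)
# back in the shell, not inside the Dynamo console:
srun ptest.exe                            # run the project
# once it finishes, view the last average from the Dynamo console:
#   ddb ptest:a -v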
Note about performance
You will notice that the project stops at several points during execution. Those are the points where the project accesses the MCR libraries. This overhead is constant, and is a very small fraction of the computing time for a real project with thousands of particles.

We are using an old Dynamo version; modern Dynamo versions don't access the MCR libraries repeatedly.
Strubi Oxford
We are also using some accounts on the GPU cluster of the Structural Biology department at the University of Oxford.