Follow the procedure below to execute ProteinDF from the command line:
Create the directories for intermediate data output (default names: fl_Input, fl_Table, and fl_Work) under the ProteinDF execution directory (the directory containing the input file).
Note
These directories can be created with the pdf-setup command.
Note
The data written to these directories becomes extremely large. It is recommended to create them on high-speed disk storage with large capacity.
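For example, the directories can be created by hand or with the pdf-setup command mentioned above. The sketch below assumes pdf-setup is installed under $PDF_HOME/bin and takes no options; adjust it to your installation:
% mkdir fl_Input fl_Table fl_Work        # create the default directories manually
% $PDF_HOME/bin/pdf-setup                # or let pdf-setup create them (install path assumed)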
To execute the serial version of ProteinDF, use the following command:
% $PDF_HOME/bin/PDF.x
When the computation starts, the total energy at each SCF iteration is displayed sequentially on standard output. The series of calculation result data is also written to the log file (fl_Out_Std), together with intermediate data from the all-electron calculation.
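Because the SCF progress is written to standard output, it can be convenient to keep a copy of it in a file as well. A minimal sketch using the standard tee command (the output file name is illustrative):
% $PDF_HOME/bin/PDF.x | tee pdf_stdout.log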
To execute the parallel version of ProteinDF, use the following command:
% mpiexec -n N $PDF_HOME/bin/PDF.x
Here, N is the number of MPI processes to use for the parallel computation.
Note
The procedure for executing MPI programs varies depending on the computing environment. For details, refer to your system's manuals.
When the computation starts, the total energy at each SCF iteration is displayed sequentially on standard output, and the series of calculation result data is written to a text file (default file name: fl_Out_Std), as in the serial version.
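The parallel run types described below combine MPI with OpenMP, so the number of threads per MPI process is typically controlled through the standard OMP_NUM_THREADS environment variable. A minimal sketch with 4 MPI processes and 8 threads each (the numbers are illustrative; whether the variable is forwarded to all MPI ranks depends on your MPI implementation and its environment-forwarding options):
% env OMP_NUM_THREADS=8 mpiexec -n 4 $PDF_HOME/bin/PDF.x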
ProteinDF provides several run types so that large systems can be computed efficiently with limited computing resources.
| run type | parallel method | matrix storage |
|---|---|---|
| serial | OpenMP only | replica |
| replica_static | MPI/OpenMP hybrid | replica |
| replica_dynamic | MPI/OpenMP hybrid | replica |
| distributed | MPI/OpenMP hybrid | distributed |
serial
Performs the computation in a single process without interprocess communication.
Allows multi-threaded parallel computation with OpenMP.
Uses LAPACK for matrix operations.
The size of the system that can be computed depends on the memory available to the process.
replica_static
Performs parallel computation with interprocess communication (MPI); within each process, OpenMP parallel computation is performed.
Each MPI process stores a replica of all matrix elements.
If the matrices do not fit in the specified amount of memory, they are stored on disk.
Employs the divide-and-conquer algorithm for task distribution.
Uses LAPACK for matrix operations.
Note
In the divide-and-conquer algorithm, all processes take part in the computation, so it is effective when the number of processes is small. A drawback is that the load may not be distributed evenly.
Note
Use the memory_size keyword to specify the amount of memory the processes may use.
Warning
If disk storage is used because of a memory shortage, performance may deteriorate.
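As a sketch, the memory limit is given in the input file with the memory_size keyword mentioned above; the value below is purely illustrative, and the accepted unit syntax should be checked against the keyword reference:
memory_size = 8GB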
replica_dynamic
Performs parallel computation with interprocess communication (MPI); within each process, OpenMP parallel computation is performed.
Each MPI process stores a replica of all matrix elements.
If the matrices do not fit in the specified amount of memory, they are stored on disk.
Employs the master-slave method for task distribution.
Uses LAPACK for matrix operations.
Specify MS for the parallel_processing_type keyword.
Note
In the master-slave method, the master process is dedicated to distributing tasks. This method is effective when the number of processes is large.
Note
Use the memory_size keyword to specify the amount of memory the processes may use.
Warning
If disk storage is used because of a memory shortage, performance may deteriorate.
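A minimal input-file sketch for selecting this run type with the parallel_processing_type keyword noted above (a keyword = value form is assumed; see the keyword reference for the exact syntax):
parallel_processing_type = MS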
distributed
Performs parallel computation with interprocess communication (MPI); within each process, OpenMP parallel computation is performed.
Distributes the global matrices among the MPI processes for storage.
If the matrices do not fit in the specified amount of memory, they are stored on disk.
Uses ScaLAPACK for matrix operations.
Specify ScaLAPACK for the linear_algebra_package keyword.
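A minimal sketch for the distributed run type: select ScaLAPACK through the linear_algebra_package keyword in the input file (keyword = value form assumed) and launch the parallel version as usual (the process count is illustrative):
linear_algebra_package = ScaLAPACK
% mpiexec -n 64 $PDF_HOME/bin/PDF.x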