References
- Kale L, Skeel R, Bhandarkar M, et al. NAMD2: greater scalability for parallel molecular dynamics. J Comput Phys. 1999;151:283–312. doi: 10.1006/jcph.1999.6201
- Pearlman D, Case D, Caldwell J, et al. AMBER, a computer program for applying molecular mechanics, normal mode analysis, molecular dynamics and free energy calculations to elucidate the structures and energies of molecules. Comput Phys Commun. 1995;91:1–41. doi: 10.1016/0010-4655(95)00041-D
- Brooks B, Bruccoleri R, Olafson B, et al. CHARMM: a program for macromolecular energy, minimization, and dynamics calculations. J Comput Chem. 1983;4:187–217. Available from: http://www.charmm.org. doi: 10.1002/jcc.540040211
- Berendsen H, van der Spoel D, van Drunen R. GROMACS: a message-passing parallel molecular dynamics implementation. Comput Phys Commun. 1995;91:43–56. doi: 10.1016/0010-4655(95)00042-E
- Lindahl E, Hess B, van der Spoel D. GROMACS 3.0: a package for molecular simulation and trajectory analysis. J Mol Model. 2001;7:306–317. doi: 10.1007/s008940100045
- Plimpton S. Fast parallel algorithms for short-range molecular dynamics. J Comput Phys. 1995;117:1–19. doi: 10.1006/jcph.1995.1039
- Smith W, Forester T. DL_POLY_2.0: a general-purpose parallel molecular dynamics simulation package. J Mol Graph. 1996;14(3):136–141. doi: 10.1016/S0263-7855(96)00043-4
- Smith W, Forester T, Todorov I. The DL_POLY_2 user manual. 2008.
- Smith W. The DL_POLY classic molecular simulation package. STFC Daresbury Laboratory. Available from: https://www.ccp5.ac.uk/DL_POLY_C.
- DL_POLY Classic. Available from: https://gitlab.com/DL_POLY_Classic.
- Smith W, Forester T. Parallel macromolecular simulations and the replicated data strategy: I. The computation of atomic forces. Comput Phys Commun. 1994;79(1):63–77. doi: 10.1016/0010-4655(94)90230-5
- Smith W, Yong C, Rodger M. DL_POLY: application to molecular simulation. Mol Simul. 2002;28:385–471. doi: 10.1080/08927020290018769
- Smith W, Todorov I. A short description of DL_POLY. Mol Simul. 2006;32(12-13):935–943. doi:10.1080/08927020600939830.
- Todorov I, Smith W. DL_POLY_3: the CCP5 national UK code for molecular-dynamics simulations. Phil Trans R Soc A. 2004;362(1822):1835–1852. doi: 10.1098/rsta.2004.1419
- Todorov I, Smith W, Trachenko K, et al. DL_POLY_3: new dimensions in molecular dynamics simulations via massive parallelism. J Mater Chem. 2006;16:1911–1918. doi: 10.1039/B517931A
- DL_POLY_4. Available from: http://www.ccp5.ac.uk/DL_POLY; 1991–2019.
- Bush IJ, Todorov IT. DL_POLY’s custom developed DAFT FFT library. Comput Phys Commun. 2006;175(5):323–329. doi: 10.1016/j.cpc.2006.05.001
- Todorov I, Smith W. The DL_POLY 4 user manual (version 4.08, March 2016). STFC Daresbury Laboratory, Daresbury, Warrington WA4 4AD, Cheshire, UK. Available from: http://www.ccp5.ac.uk/DL_POLY/.
- Kartsaklis C, Todorov IT, et al. The DL_POLY_CUDA code. Symposium on Chemical Computations on GPGPUs, 240th ACS National Meeting and Exposition; Boston; 2010.
- TOP500 list, November 2017. Available from: https://www.top500.org/lists/2017/11/.
- Available from: http://ipm-hpc.sourceforge.net/; http://projekt17.pub.lab.nm.ifi.lmu.de/ipm/home.html; http://ipm-hpc.sourceforge.net/examples/ex1/; http://ipm-hpc.sourceforge.net/examples/ex2/; http://ipm-hpc.sourceforge.net/examples/ex3.
- Fürlinger K, Skinner D. Capturing and visualizing event flow graphs of MPI applications. In Workshop on Productivity and Performance (PROPER 2009) in conjunction with Euro-Par 2009. 2009 Aug.
- Aguilar X, Fürlinger K, Laure E. Visual MPI performance analysis using event flow graphs. In: Proceedings of the International Conference on Computational Science (ICCS 2015); 2015 Jun 1–3; Reykjavik, Iceland.
- Available from: https://www.arm.com/products/development-tools/hpc-tools/cross-platform/performance-reports.
- Allinea performance reports user guide, V. 7.1. Available from: https://static.docs.arm.com/101137/0701/userguide-reports.pdf.
- Using Allinea performance reports on NREL HPC systems. Available from: https://hpc.nrel.gov/users/software/debugging-profiling/usring-performance-reports-on-nrel-hpc-systems.
- Intel Xeon Benchmark. Available from: www.intel.com/Xeon.
- Available from: https://www.intel.com/content/www/us/en/products/processors/xeon-phi/xeon-phi-processors.html.
- Available from: https://www.intel.co.uk/content/www/uk/en/architecture-and-technology/many-integrated-core/intel-many-integrated-core-architecture.html.
- Introducing EDR 100Gb/s - enabling the use of data, White Paper. 2014 Jun. Available from: http://www.mellanox.com/pdf/whitepapers/wp_introducing_edr_100gb_enabling_use_data.pdf.
- Intel® Omni-Path Fabric 100 Series, The next generation of High Performance Computing (HPC) fabrics. Available from: https://www.intel.co.uk/content/www/uk/en/high-performance-computing-fabrics/omni-path-architecture-fabric-overview.html.
- Getting started with Intel® MPI Library for Linux* OS [updated 2015 Aug 24]. Available from: https://software.intel.com/en-us/get-started-with-mpi-for-linux.
- McClements E. Performance comparison of open source MPI implementations [MSc in High Performance Computing dissertation]. The University of Edinburgh; 2006.
- Mellanox HPC-X™ Scalable Software Toolkit, 2014. Available from: http://www.mellanox.com/related-docs/prod_acceleration_software/PB_HPC-X.pdf.
- Smith W. J Mol Graph. 1987;5:71–74. doi: 10.1016/0263-7855(87)80002-4
- Smith W. Molecular dynamics on hypercube parallel computers. Comput Phys Commun. 1991;62:229–248. doi: 10.1016/0010-4655(91)90097-5
- Smith W. Molecular dynamics on distributed memory (MIMD) parallel computers. Theoretica Chim Acta. 1993;84:385–398. doi: 10.1007/BF01113277
- Smith W, Forester TR. Parallel macromolecular simulations and the replicated data strategy: II. The RD-SHAKE algorithm. Comput Phys Commun. 1994;79:52–62. doi: 10.1016/0010-4655(94)90229-1
- Pinches MRS, Tildesley D, Smith W. Large scale molecular dynamics on parallel computers using the link-cell algorithm. Mol Simul. 1991;6:51–87. doi: 10.1080/08927029108022139
- Rapaport DC. Multi-million particle molecular dynamics. Comput Phys Commun. 1991;62:217–228. doi: 10.1016/0010-4655(91)90096-4
- Hockney RW, Eastwood JW. Computer simulation using particles. New York: McGraw-Hill; 1981.
- Essmann U, Perera L, Berkowitz ML, et al. A smooth particle mesh Ewald method. J Chem Phys. 1995;103:8577–8593. doi: 10.1063/1.470117
- Frigo M, Johnson SG. FFTW (version 3.3.7). Available from: http://www.fftw.org/.
- Bush IJ. The Daresbury Advanced Fourier Transform. Technical Report Daresbury Laboratory; 1999.
- Bush IJ, Todorov IT, Smith W. A DAFT DL_POLY distributed memory adaptation of the smoothed particle mesh Ewald method. Comput Phys Commun. 2006;175:323–329.
- Hammonds KD, Ryckaert J-P. On the convergence of the SHAKE algorithm. Comput Phys Commun. 1991;62(2–3):336–351. doi: 10.1016/0010-4655(91)90105-T
- Miyamoto S, Kollman PA. SETTLE: an analytical version of the SHAKE and RATTLE algorithm for rigid water models. J Comput Chem. 1992;13(8):952–962. doi:10.1002/jcc.540130805.
- Guest M. Application performance on high-end and commodity-class computers. 2002. Available from: http://www.ukhec.ac.uk/publications/reports/benchmarking.pdf.
- Hein J, Reid F, Smith L, et al. On the performance of molecular dynamics applications on current high-end systems. Phil Trans R Soc A. 2005;363(1833):1987–1998. doi:10.1098/rsta.2005.1624.
- Peng L, Tan G, Kalia RK, et al. Scalability study of molecular dynamics simulation on Godson-T many-core architecture. J Parallel Distrib Comput. 2013;73:1469–1482. doi: 10.1016/j.jpdc.2012.07.007
- Lysaght M, Uchroński M, Kwiecien A, et al. Benchmarking and analysis of DL_POLY 4 on GPU clusters.
- DL-POLY performance benchmark and profiling. HPC Advisory Council; 2013 Aug. Available from: http://www.hpcadvisorycouncil.com/pdf/DL_POLY_Analysis_and_Profiling.pdf.
- Loeffler HH, Winn MD. Large biomolecular simulation on HPC platforms II: DL_POLY, Gromacs, LAMMPS and NAMD. STFC Technical Report DL-TR-2009-002; 2010 Dec. Available from: http://www.hecbiosim.ac.uk/benchmark.
- Mabakane MS, Moeketsi DM, Lopis AS. Scalability of DL_POLY on high performance computing platform. South African Comput J. 2017;29(3):81. doi: 10.18489/sacj.v29i3.405
- Available from: https://en.wikipedia.org/wiki/Cray_T3E.
- Guest M, Blake RJ. Report on computational science support and development activities at Daresbury Laboratory 1997/98.
- 23rd Daresbury Machine Evaluation Workshop; 2012 Nov 27–28. Available from: https://eventbooking.stfc.ac.uk/uploads/mew23flyer.pdf.
- Computing Insight UK 2017: joining up the UK e-Infrastructure; 2017 Dec 12–13; Manchester Central. Available from: https://www.stfc.ac.uk/news-events-and-publications/events/general-interest-events/computing-insight-uk/.
- Available from: https://en.wikipedia.org/wiki/Pentium_III and https://en.wikipedia.org/wiki/Pentium_4.
- Available from: http://www.tomshardware.co.uk/intel-clovertown-harptown-cpu,news-28747.html.
- Available from: https://en.wikipedia.org/wiki/List_of_Intel_Itanium_microprocessors.
- Available from: https://en.wikipedia.org/wiki/List_of_AMD_Opteron_microprocessors.
- Available from: https://en.wikipedia.org/wiki/IBM_System_p.
- Available from: https://en.wikipedia.org/wiki/InfiniBand.
- Saini S, Jin H, Hood R, et al. The impact of hyper-threading on processor resource utilization in production applications. Best Paper, 8th International Conference on High Performance Computing, HiPC 2011; 2011 Dec 18–21; Bengaluru, India.
- Available from: http://www.ddn.com/press-releases/ddn-partnership-with-intel-step-forward-lustre-solution-leadership/.
- Saini S, Naraikin A, Biswas R, et al. Early performance evaluation of a “Nehalem” cluster using scientific and engineering applications. Proceedings of the ACM/IEEE Conference on High Performance Computing, SC 2009; 2009 Nov 14–20; Portland, Oregon, USA.
- Murray M. Sandy Bridge: 10 things you need to know. 2011 Jan 3. Available from: https://www.pcmag.com/article2/0,2817,2375039,00.asp.
- Murray M. Comparing Ivy Bridge vs. Sandy Bridge. 2012 Jun. Available from: http://www.pcmag.com/article2/0,2817,2405317,00.asp.
- Deshmukh M, NCSA Private Sector Program. Comparing Sandy Bridge vs. Ivy Bridge processors for HPC applications. 2014. Available from: http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2014/07/31/comparing-sandy-bridge-vs-ivy-bridge-processors-for-hpc-applications.
- Domingo JS. Haswell vs. Ivy Bridge: A look at old and new. 2013 Jul. Available from: http://uk.pcmag.com/chipsets-processors-products/14485/feature/haswell-vs-ivy-bridge-a-look-at-old-and-new.
- Deshmukh M, Singh AK, Kashyap N. New vs. old: comparing Broadwell performance for CAE applications across generations. NCSA; 2016. Available from: http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2016/04/21/new-vs-old-comparing-broadwell-performance-for-cae-applications-across-generations.
- Morgan TP. The huge premium Intel is charging for Skylake Xeons. 2017 Sep 1. Available from: https://www.nextplatform.com/2017/09/01/huge-premium-intel-charging-skylake-xeons/.
- Intel® Architecture instruction set extensions programming reference, 319433-014; 2012 Aug.
- McCalpin JD. STREAM: sustainable memory bandwidth in high performance computers. 2015 Oct 12. Available from: https://www.cs.virginia.edu/stream/.
- De Gelas J. HPC: Fluid dynamics with OpenFOAM. 2016 Mar. Available from: https://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/16.
- Gergana S (Intel). Introducing Intel MPI Benchmarks (IMB). 2013 Oct 9 [updated 2018 Feb 7]. Available from: https://software.intel.com/en-us/articles/intel-mpi-benchmarks.
- Intel MPI Benchmarks user guide, last updated on November 13, 2017. Available from: https://software.intel.com/en-us/imb-user-guide.
- Berkeley UPC - Unified Parallel C. Available from: http://upc.lbl.gov/.
- PGAS. Partitioned global address space. Available from: https://en.wikipedia.org/wiki/Partitioned_global_address_space.
- Mintz TM, Hernandez O, Bernholdt DE. A global view programming abstraction for transitioning MPI codes to PGAS languages. Available from: http://www.csm.ornl.gov/workshops/openshmem2013/documents/presentations_and_tutorials/Mintz_Transitioning%20MPI%20Codes%20to%20PGAS%20Languages.pdf.
- Available from: https://www.intel.co.uk/content/www/uk/en/high-performance-computing-fabrics/omni-path-architecture-fabric-overview.html.
- Giannozzi P, Baroni S, Bonini N, et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J Phys Condens Matter. 2009;21:395502. doi: 10.1088/0953-8984/21/39/395502
- Hafner J. Ab-initio simulations of materials using VASP: density-functional theory and beyond. J Comput Chem. 2008;29(13):2044–2078. doi: 10.1002/jcc.21057
- Jasak H, Jemcov A, Tukovic Z. OpenFOAM: a C++ library for complex physics simulations. International Workshop on Coupled Methods in Numerical Dynamics; 2007 Sep 19–21; IUC, Dubrovnik, Croatia.
- Irish Centre for High-End Computing (ICHEC). Available from: https://www.ichec.ie/.
- Available from: https://software.intel.com/en-us/ipcc.
- Available from: https://software.intel.com/en-us/parallel-studio-xe.
- Intel® Manycore Platform Software Stack. Available from: https://software.intel.com/en-us/articles/intel-manycore-platform-software-stack-mpss.
- Available from: https://www.siam.org/students/siuro/vol10/S01589.pdf.
- Intel re-architects the fundamental building block for high-performance computing — Intel newsroom. 2014 Jun 23 [cited 2017 Mar 23]. Available from: https://newsroom.intel.com/news-releases/intel-re-architects-the-fundamental-building-block-for-high-performance-computing/.
- Intel® Initial Many Core Instructions (Intel® IMCI). Available from: https://software.intel.com/en-us/node/694272.
- Available from: https://crd.lbl.gov/departments/computer-science/PAR/research/roofline/.
- Brown WM, Semin A, Hebenstreit M, et al. Increasing molecular dynamics simulation rates with an 8-fold increase in electrical power efficiency. In: SC '16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis; 2016. Article No. 8.