Abstract
Modern graphics cards contain hundreds of cores that can be programmed for intensive calculations, and they are beginning to be used for spiking neural network simulations. The goal is to make parallel simulation of spiking neural networks available to a large audience without requiring a cluster. We review the ongoing efforts towards this goal and outline the main difficulties.
Notes
1. The GeForce GTX 690, which consists of dual Kepler GK104 GPUs. http://www.geforce.com/whats-new/articles/article-keynote/
2. For example, the GeForce GTX 580. http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-580
3. For devices of compute capability 2.0, 2.1 and 3.0, including all Fermi and Kepler architecture GPUs, each multiprocessor has 64 KB of on-chip memory that can be split between shared memory and L1 cache in a 48/16 KB or 16/48 KB arrangement. The maximum L2 cache size for 2.x devices is 768 KB and for 3.0 it is 512 KB (NVIDIA 2012).
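As a minimal sketch (not from the reviewed text), the shared-memory/L1 split described in note 3 can be requested per kernel through the CUDA runtime API with cudaFuncSetCacheConfig; the kernel here is a hypothetical placeholder for a neuron state update.

    // Sketch: request the 48 KB shared / 16 KB L1 arrangement for one kernel.
    #include <cuda_runtime.h>

    __global__ void update_neurons(float *v, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            v[i] += 0.1f;   // placeholder state update
    }

    int main(void)
    {
        // Prefer 48 KB shared memory / 16 KB L1 cache for this kernel
        // (Fermi/Kepler). cudaFuncCachePreferL1 would select 16/48 instead.
        cudaFuncSetCacheConfig(update_neurons, cudaFuncCachePreferShared);

        const int n = 1024;
        float *d_v;
        cudaMalloc(&d_v, n * sizeof(float));
        cudaMemset(d_v, 0, n * sizeof(float));
        update_neurons<<<(n + 255) / 256, 256>>>(d_v, n);
        cudaDeviceSynchronize();
        cudaFree(d_v);
        return 0;
    }

The device-wide alternative, cudaDeviceSetCacheConfig, applies the same preference to all kernels that do not set their own.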