Chris Jones, Andrew Washbrook
With around a billion particle collisions occurring every second within the detector, it is impossible to store the data collected by ATLAS in its entirety. It is the responsibility of the trigger to reduce this data, selecting events of potential scientific interest for storage and further analysis. The trigger works in three main stages: level 1 runs on customised hardware, while levels 2 and 3 run in software on server farms. IDScan is a track-reconstruction program running in level 2 of the trigger, taking its input from the inner detector. The first algorithm used by IDScan is the Z-finder, whose purpose is to produce a good estimate of the z-coordinate of the collision event along the axis of the detector. This reduces the execution time of the track-reconstruction algorithms later in the chain.
The aim of this project is to port the Z-finder algorithm to the GPU using NVIDIA's C for CUDA and to investigate its performance.
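The histogramming idea behind a z-finder can be illustrated with a minimal serial sketch. The hit format, bin settings, and function name below are illustrative assumptions, not the IDScan implementation: pairs of inner-detector hits are extrapolated as straight lines to the beam axis (r = 0), the candidate z values are histogrammed, and the most populated bin gives the vertex estimate.

```python
def zfinder(hits, z_range=(-200.0, 200.0), bin_width=2.0):
    """Estimate the collision z-coordinate from (r, z) hit pairs.

    Illustrative sketch only: extrapolate every hit pair to r = 0,
    histogram the resulting z values, and return the peak bin centre.
    """
    lo, hi = z_range
    nbins = int((hi - lo) / bin_width)
    counts = [0] * nbins
    for i in range(len(hits)):
        for j in range(i + 1, len(hits)):
            (r1, z1), (r2, z2) = hits[i], hits[j]
            if r1 == r2:
                continue
            # Straight-line extrapolation of the hit pair to the beam axis.
            z0 = z1 - r1 * (z2 - z1) / (r2 - r1)
            if lo <= z0 < hi:
                counts[int((z0 - lo) / bin_width)] += 1
    peak = max(range(nbins), key=lambda b: counts[b])
    return lo + (peak + 0.5) * bin_width  # centre of the peak bin
```

On the GPU the pair loop is the natural unit of parallelism: each thread can process one hit pair and increment the shared histogram atomically.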
Maria Rovatsou, Andrew Washbrook
Track reconstruction in the ATLAS trigger uses the Kalman filter technique. The current implementation of the filter is sequential and distributed. Accelerating the filter would speed up the whole process and allow more significant data to be made available for processing. Furthermore, the proposed LHC upgrade would increase the luminosity by an order of magnitude and would require a more efficient implementation of the Kalman filter. One possible route to acceleration is to port the filter to a Graphics Processing Unit (GPU), whose immense computational power can be exploited through programming models such as CUDA. The proposed approach is a thread-level parallelisation of the Kalman filter for execution on an NVIDIA GPU using CUDA; its performance will be analysed with the CUDA profiling tools and with benchmarking tools.
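The predict/update structure being parallelised can be shown with a deliberately simplified scalar Kalman filter. The real trigger code filters multi-parameter track states with full covariance matrices; the function name and noise values here are illustrative, and the thread-level parallelisation distributes many such filters (one per track) across GPU threads.

```python
def kalman_1d(measurements, r_meas=1.0, q_proc=0.01):
    """Scalar Kalman filter with a random-walk process model.

    Illustrative sketch: state estimate x with variance p, updated
    by each new measurement z in turn.
    """
    x, p = measurements[0], 1.0
    estimates = [x]
    for z in measurements[1:]:
        p = p + q_proc              # predict: process noise grows variance
        k = p / (p + r_meas)        # Kalman gain
        x = x + k * (z - x)         # update with the new measurement
        p = (1.0 - k) * p           # updated variance
        estimates.append(x)
    return estimates
```

Each iteration depends on the previous one, so a single filter is inherently sequential; the parallelism comes from running many independent track fits at once.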
James Henderson, Phil Clark
Over recent years Graphics Processing Units (GPUs) have demonstrated their great ability in scientific calculations. They have the capability to carry out tasks in parallel, enabling huge speed-ups of calculations. This project investigates the potential speed-up from using GPUs when calculating a particle's trajectory through a magnetic field. The results show that, when considering many particles, a speed-up of 32x can be achieved on the GPU versus a serial processor.
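The attached stepper code uses a fourth-order Runge-Kutta (RK4) integrator; a serial sketch of that kind of kernel is shown below for a charged particle in a uniform magnetic field along z, where the Lorentz force gives circular motion with cyclotron frequency omega = qB/m. The state layout and function names are illustrative assumptions; on the GPU each thread would advance one particle.

```python
def deriv(state, omega):
    # Planar motion with B along z: a = (q/m) v x B = omega * (vy, -vx)
    x, y, vx, vy = state
    return (vx, vy, omega * vy, -omega * vx)

def rk4_step(state, omega, h):
    """One fourth-order Runge-Kutta step of size h."""
    def shift(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(state, omega)
    k2 = deriv(shift(state, k1, h / 2), omega)
    k3 = deriv(shift(state, k2, h / 2), omega)
    k4 = deriv(shift(state, k3, h), omega)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))
```

Integrating 1000 steps over one cyclotron period returns the particle to its starting point to high accuracy, a useful sanity check when comparing CPU and GPU timings.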
Triangular matrix addition is performed on two square matrices, each of size 20 x 20.
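The exercise code itself is not reproduced here, so the following is only a guess at its serial equivalent: adding the upper-triangular elements of two 20 x 20 matrices and leaving the rest zero.

```python
def triangular_add(a, b):
    """Add only the upper-triangular elements (j >= i) of two
    equal-sized square matrices; other elements stay zero.
    Illustrative serial version of the exercise kernel."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # j >= i: on or above the diagonal
            c[i][j] = a[i][j] + b[i][j]
    return c
```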
The code generates a grid of points in a box of the complex plane containing the upper half of the (symmetric) Mandelbrot Set. Each point is then iterated using the equation above for at most a fixed number of iterations (2000). If the threshold condition |z| > 2 is satisfied within that number of iterations, the point is considered to be outside the Mandelbrot Set. Counting the points inside and outside the Set then gives an estimate of the area of the Set.
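The procedure can be sketched serially as follows, using the standard Mandelbrot iteration z -> z^2 + c. The box bounds and grid size are illustrative choices, and the iteration cap is reduced from 2000 to keep the sketch fast; the known area of the set is roughly 1.51.

```python
def mandelbrot_area(nx=240, ny=120, max_iter=200):
    """Estimate the Mandelbrot set area by counting grid points in a
    box over the upper half-plane that never satisfy |z| > 2."""
    x_min, x_max = -2.0, 0.5
    y_min, y_max = 0.0, 1.2   # upper half only; the set is symmetric
    inside = 0
    for i in range(nx):
        for j in range(ny):
            c = complex(x_min + (i + 0.5) * (x_max - x_min) / nx,
                        y_min + (j + 0.5) * (y_max - y_min) / ny)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:   # escaped: outside the set
                    break
            else:
                inside += 1        # never escaped: counted as inside
    box_area = (x_max - x_min) * (y_max - y_min)
    # Double the half-plane estimate to cover the full symmetric set.
    return 2.0 * box_area * inside / (nx * ny)
```

Each grid point is independent, which is what makes this an attractive GPU exercise: one thread per point, followed by a parallel reduction of the inside counts.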
Hazel McKendrick, Andrew Washbrook
This project forms a feasibility study into the use of GPU (Graphics Processing Unit) devices to parallelise TMVA, the Toolkit for Multivariate Analysis, to determine whether such techniques might lead to future performance gains for the framework.
In particular, the multi-layer perceptron, a class of neural network, is ported to the GPU programming platform CUDA. Performance when training single networks is generally comparable to CPU performance but shows promise for future improvement. However, the parallelism of the GPU also allows multiple networks to be trained simultaneously, and this is leveraged to show significant performance gains over undertaking such a task in serial. The challenges and potential for these results to be applied across the TMVA framework are then considered and discussed.
Links to CUDA and other GPU parallelisation resources:
This university course link has some very helpful video lectures, linked in 'Syllabus/Lectures' and under 'Materials'.
| Attachment | History | Action | Size | Date | Who | Comment |
|---|---|---|---|---|---|---|
| AlanRichardsonDissertation.pdf | r1 | manage | 581.4 K | 2009-12-10 - 19:52 | AndrewWashbrook | Alan Richardson |
| ajw-GPUAtlasUpgrade.pdf | r1 | manage | 4019.2 K | 2011-01-27 - 18:03 | AndrewWashbrook | CHEP 2010 Presentation |
| Submission.pdf | r1 | manage | 577.2 K | 2010-09-28 - 18:14 | AndrewWashbrook | Chris Project |
| CUDA_Stepper.cu | r1 | manage | 8.7 K | 2011-11-22 - 12:51 | AndrewWashbrook | CUDA code for RK4 stepper |
| CudaFitter.tar.gz | r1 | manage | 828.8 K | 2011-01-28 - 12:50 | AndrewWashbrook | Dmitry CUDA code |
| ATLAS-Upgrade-GPGPU.pdf | r1 | manage | 3985.8 K | 2009-12-10 - 19:40 | AndrewWashbrook | GPGPU and HEP |
| cudaKalmanFitter.tar.gz | r1 | manage | 13682.7 K | 2011-01-27 - 18:04 | AndrewWashbrook | Kalman Fitter Code |
| lazzaroCCR09.pdf | r1 | manage | 2472.2 K | 2009-12-10 - 20:07 | AndrewWashbrook | Lazzaro |
| infthesis.pdf | r1 | manage | 1530.4 K | 2010-09-28 - 18:12 | AndrewWashbrook | Maria Project |
| ATL-PHYS-PUB-2009-026.pdf | r1 | manage | 530.0 K | 2010-09-28 - 18:19 | AndrewWashbrook | |
| ATL-UPGRADE-PROC-2010-003.pdf | r1 | manage | 304.4 K | 2010-09-28 - 18:21 | AndrewWashbrook | |
| ATL-UPGRADE-SLIDE-2009-243.pdf | r1 | manage | 1213.7 K | 2010-09-28 - 18:22 | AndrewWashbrook | |
| ATLAS-zfinder.pdf | r1 | manage | 123.6 K | 2010-09-28 - 18:17 | AndrewWashbrook | |
| Christopher0963432_REP_Submission.pdf | r1 | manage | 340.5 K | 2010-04-28 - 09:49 | AndrewWashbrook | |
| GPU_Tracking-Emeliyanov.pdf | r1 | manage | 1215.9 K | 2011-01-27 - 18:24 | AndrewWashbrook | |
| HLToverview.pdf | r1 | manage | 156.8 K | 2010-09-28 - 18:20 | AndrewWashbrook | |
| IDSCAN-uclreport.pdf | r1 | manage | 729.8 K | 2010-09-28 - 18:18 | AndrewWashbrook | |
| IRP.pdf | r1 | manage | 447.7 K | 2010-04-28 - 09:50 | AndrewWashbrook | |
| OpenMPExercises.tar.gz | r1 | manage | 2265.8 K | 2014-09-12 - 12:22 | AndrewWashbrook | |
| ParallelTutorial.pdf | r1 | manage | 104.5 K | 2014-09-12 - 12:22 | AndrewWashbrook | |
| TDR-2up.pdf | r1 | manage | 3463.6 K | 2010-09-28 - 18:19 | AndrewWashbrook | |
| chris-presentation.pdf | r1 | manage | 1305.9 K | 2010-04-28 - 09:50 | AndrewWashbrook | |
| commissioning-ATLAS-IDtrigger.pdf | r1 | manage | 842.7 K | 2010-09-28 - 18:20 | AndrewWashbrook | |
| p246.pdf | r1 | manage | 424.3 K | 2010-09-28 - 18:18 | AndrewWashbrook | |
| probabilisticDataAssociationFilter.pdf | r1 | manage | 220.8 K | 2010-09-28 - 18:19 | AndrewWashbrook | |
| simdkalman.pdf | r1 | manage | 1223.5 K | 2010-09-28 - 18:18 | AndrewWashbrook | |
| trackingatL2-trigger.pdf | r1 | manage | 414.0 K | 2010-09-28 - 18:18 | AndrewWashbrook | |
| ucl-idscan-hitfilter.pdf | r1 | manage | 162.6 K | 2010-09-28 - 18:21 | AndrewWashbrook | |
| vertexstrategy.pdf | r1 | manage | 131.8 K | 2010-09-28 - 18:21 | AndrewWashbrook | |
| Nvidia-GTC2010.pdf | r1 | manage | 8604.7 K | 2011-01-27 - 18:03 | AndrewWashbrook | NVIDIA GTC 2010 Presentation |
| C_Stepper.cpp | r1 | manage | 5.0 K | 2011-11-22 - 12:51 | AndrewWashbrook | RK4 stepper code used to compare timings |
| TMVA-GPU.tar.gz | r1 | manage | 2776.9 K | 2013-04-03 - 19:25 | AndrewWashbrook | TMVA code with GPU-based MLP method |
| McKendrick.pdf | r1 | manage | 984.8 K | 2011-11-22 - 12:36 | AndrewWashbrook | TMVA GPU project report |
| GPU_Long_report.pdf | r1 | manage | 841.4 K | 2011-11-22 - 12:50 | AndrewWashbrook | Tracking GPU project long report |
| GPU_Short_report.pdf | r1 | manage | 509.0 K | 2011-11-22 - 12:49 | AndrewWashbrook | Tracking GPU project short report |
| ZFinder-GPU.tar.gz | r1 | manage | 1359.0 K | 2011-01-27 - 18:04 | AndrewWashbrook | Z-finder GPU code |