This is the old website of the Galicia Supercomputing Centre (CESGA).

Our new website has been available since 18 July 2011 at
https://www.cesga.es

Please update your links and bookmarks.


This website is maintained only to serve as an archive of news, courses, published job offers, etc., and/or documentation.

WORKSHOP ON HIGH PERFORMANCE COMPUTING APPLICATIONS

Tomorrow, Friday 26 September, is the registration deadline for the workshop organized by the Rede Galega de HPC (Galician HPC Network). The workshop will consist of 5 talks on supercomputing applied to different research fields: financial computing, electromagnetics, e-Science, bioscience and fluid dynamics, with the participation of leading researchers from universities in the United Kingdom, Turkey, the Netherlands and France.

To register, click here: http://ghpc.udc.es
Workshop programme
Abstracts of the talks

______________________________________________________________________________

Monday, 29 September 2008
Instituto de Investigaciones Agrobiológicas
Avda. de Vigo s/n, Campus Sur, Santiago de Compostela

9:00 Opening

9:15 Mike Giles - University of Oxford, United Kingdom 
Using graphics cards for high-performance financial calculation

In this talk, Mike Giles will present his experience with CUDA programming on NVIDIA GPUs, applied to both Monte Carlo and PDE calculations in computational finance. Compared to two quad-core Xeons, speedups of 10-20x can be achieved, with comparable improvements in price/performance and energy efficiency.
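
As a hedged illustration only: the abstract contains no code, so the NumPy sketch below prices a European call under geometric Brownian motion by plain Monte Carlo on the CPU. All parameters (S0, K, r, sigma, T, number of paths) are arbitrary placeholders, and the CUDA/GPU implementation discussed in the talk is not reproduced here.

    import numpy as np

    def mc_european_call(S0, K, r, sigma, T, n_paths, seed=0):
        """Monte Carlo price of a European call under geometric Brownian motion."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        # Terminal asset price under the risk-neutral measure
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
        payoff = np.maximum(ST - K, 0.0)
        # Discounted mean payoff and its standard error
        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
        return price, stderr

    price, err = mc_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                                  n_paths=1_000_000)
    print(f"price = {price:.4f} +/- {err:.4f}")

On a GPU, the independent paths map naturally onto many parallel threads, which is the source of the speedups reported in the abstract; the estimator itself is unchanged.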

10:00 Levent Gurel - Bilkent University, Turkey  
Parallel Computing and Fast Algorithms for the Solution of the World's Largest Dense-Matrix Problems in Computational Electromagnetics

Since 2006, the world's largest integral-equation problems in computational electromagnetics have been solved at the Bilkent University Computational Electromagnetics Research Center (BiLCEM). Most recently, breaking the latest world record required the solution of a 150,000,000 x 150,000,000 dense matrix equation! This achievement is the outcome of a multidisciplinary study involving physical understanding of electromagnetics problems, novel parallelization strategies (computer science), construction of parallel clusters (computer architecture), advanced mathematical methods for integral equations, fast solvers, iterative methods, preconditioners, and linear algebra.

In this seminar, following a general introduction to our work in computational electromagnetics, I will present fast and accurate solutions of large-scale electromagnetic modeling problems involving three-dimensional geometries with arbitrary shapes, using the multilevel fast multipole algorithm (MLFMA). Accurate solutions of real-life problems require discretizations with tens of millions of unknowns. To achieve the solution of such extremely large problems, computational resources must be used to the fullest by parallelizing MLFMA on distributed-memory architectures. However, due to its complicated structure, parallelization of MLFMA is not trivial. Recently, we proposed a hierarchical parallelization strategy to increase the efficiency of parallelization. For more information, please visit www.cem.bilkent.edu.tr.
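
MLFMA itself cannot be condensed into a few lines, but the computational pattern behind such large dense systems (an iterative Krylov solver driven by a matrix-vector product applied on the fly instead of storing the N x N matrix) can be sketched roughly as follows. The kernel, problem size and diagonal shift are illustrative assumptions, not BiLCEM's formulation, and no fast multipole acceleration is shown.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # Illustrative setup: N interaction points on a line with a smooth kernel.
    N = 2000
    pts = np.linspace(0.0, 1.0, N)

    def matvec(x):
        """Apply the dense interaction matrix block by block,
        without ever storing the full N x N matrix in memory."""
        x = np.asarray(x).ravel()
        y = np.empty(N)
        block = 256
        for i in range(0, N, block):
            # Rebuild one block of kernel-matrix rows on the fly
            rows = 1.0 / (1.0 + np.abs(pts[i:i + block, None] - pts[None, :]))
            y[i:i + block] = rows @ x
        # A diagonal shift keeps this toy system well conditioned
        return y + 10.0 * x

    A = LinearOperator((N, N), matvec=matvec, dtype=float)
    b = np.ones(N)
    x, info = gmres(A, b)   # the Krylov iterations only ever call matvec()
    print("gmres info:", info, "residual:", np.linalg.norm(matvec(x) - b))

In MLFMA the same role is played by a hierarchical O(N log N) evaluation of the matrix-vector product, and it is that evaluation, together with the preconditioning, which has to be parallelized across distributed memory.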

10:45 Peter Sloot - University of Amsterdam, Netherlands   
Computational eScience: Bridging the Scales from Molecule to Man

Recent advances in experimental techniques such as detectors, sensors, and scanners have opened up new windows into physical and biological processes on many levels of detail. The complete cascade from the individual components to the fully integrated multi-science systems crosses many orders of magnitude in temporal and spatial scales. The challenge is to study not only the fundamental processes on all these separate scales, but also their mutual coupling through the scales in the overall system, and the resulting emergent properties. These complex systems display endless signatures of order, disorder, self-organization and self-annihilation. Understanding, quantifying and handling this complexity is one of the biggest scientific challenges of our time.

A prototypical example comes from biomedicine, where we have data from virtually all levels between 'molecule and man' and yet we have no models where we can study these processes as a whole. It is a real complex system: from a biological cell, made of thousands of different molecules that work together, to billions of cells that build our tissue, organs and immune system, to our society, six billion unique interacting individuals. The complete cascade from the genome, proteome, metabolome, physiome to health constitutes multi-scale, multi-science systems, and crosses many orders of magnitude in temporal and spatial scales [Finkelstein 2004,6].

For this we need to build time-critical numerical models that can simulate 'what-if' scenarios, in which hydraulic, flow and weather prediction models are integrated.

The sheer complexity and range of spatial and temporal scales defies any existing numerical model and computational capacity. The only way out is to combine data on all levels of detail with, for instance, large-scale particle-based, stochastic and continuous models; this is an open research area. The challenges include understanding how one can reconstruct multi-level systems and their dynamics through computational simulation within virtual laboratories that connect models to massive sets of heterogeneous and often incomplete data.

Conceptual, theoretical and methodological foundations are necessary in understanding these multi-scale processes, dynamic networks, and the associated predictability limits of such large scale computer simulations.

11:30 Coffee Break

12:00 Laurent Dumas - University of Paris, France  
Optimal positioning of electrodes for improving pacemaker efficiency

Our aim is to determine the optimal positioning of the electrodes of a pacemaker on a diseased heart. This can be interpreted as an inverse problem, which is solved with stochastic optimization tools. The optimal positioning of the electrodes is based on the minimization of the delay in the depolarization phase and allows a satisfactory electrocardiogram to be recovered. By taking advantage of the natural parallelism of evolutionary methods, the optimization procedure can be completed in a reasonable computational time and could thus help the practitioner in the future to choose the best strategy.
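
The abstract does not give the underlying cardiac model, so the sketch below only illustrates the pattern it describes: an evolutionary search over candidate electrode positions whose expensive fitness evaluations are distributed in parallel. The function depolarization_delay is a synthetic stand-in for the real simulation, and the population and mutation parameters are assumptions.

    import numpy as np
    from multiprocessing import Pool

    def depolarization_delay(positions):
        """Hypothetical stand-in for the expensive cardiac simulation:
        returns a scalar delay to be minimized for one electrode layout."""
        target = np.array([0.3, 0.7, 0.2, 0.8])   # fictitious optimum
        return float(np.sum((positions - target) ** 2))

    def evolve(pop_size=40, n_gen=50, dim=4, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.random((pop_size, dim))      # candidate electrode coordinates in [0, 1]
        with Pool() as pool:
            for _ in range(n_gen):
                # Evaluate all candidates in parallel (one simulation each)
                fitness = np.array(pool.map(depolarization_delay, pop))
                parents = pop[np.argsort(fitness)[: pop_size // 2]]
                # Offspring: Gaussian mutation of randomly chosen parents
                children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
                children = np.clip(children + sigma * rng.standard_normal(children.shape),
                                   0.0, 1.0)
                pop = np.vstack([parents, children])
        fitness = np.array([depolarization_delay(p) for p in pop])
        best = pop[np.argmin(fitness)]
        return best, float(fitness.min())

    if __name__ == "__main__":
        best, delay = evolve()
        print("best electrode positions:", np.round(best, 3), "delay:", round(delay, 5))

Because each candidate evaluation is independent, the pool.map step is where the natural parallelism of evolutionary methods mentioned in the abstract comes in; in practice each evaluation would be a full simulation distributed over a cluster rather than a cheap analytic function.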

12:45 Aslan Rustem - University of Istanbul, Turkey
CFD in design

Industrial aerothermal design requires timely and accurate determination of airloads, heat transfer and related parameters. This presentation will focus on the use of various HPC platforms and their performance in the service of short-turnaround, production-oriented project work. Examples include large ship design, large vertical wind tunnel design, and helicopter design with thermal and flow analysis using CFD. Additional examples of academic interest will also be presented.

13:30 Closing

Updated (15.10.2008)