Supercomputing

Published on October 2, 2007

Author: Danielle

Source: authorstream.com

Content

VORTONICS: Vortex Dynamics on Transatlantic Federated Grids
US-UK TG-NGS Joint Projects, supported by NSF, EPSRC, and TeraGrid

Vortex Cores
- Evident coherent structures in Navier-Stokes flow
- Intuitively useful: a tornado or a smoke ring
- Theoretically useful: helicity and linking number (see the linking-number sketch after these slides)
- No single agreed-upon mathematical definition
- Difficulties with visualization
- Vortex interactions poorly understood

Scientific & Computational Challenges
- Physical challenges: reconnection and dynamos
  - Vortical reconnection governs the establishment of steady state in Navier-Stokes turbulence
  - Magnetic reconnection governs the heating of the solar corona
  - The astrophysical dynamo problem
  - The exact mechanisms and space/time scales are unknown and represent important theoretical challenges
- Mathematical challenges:
  - Identification of vortex cores, and discovery of new topological invariants associated with them
  - Discovery of new and improved analytic solutions of the Navier-Stokes equations for interacting vortices
- Computational challenges:
  - Enormous problem sizes, memory requirements, and long run times
  - Algorithmic complexity scales as the cube of the Reynolds number
  - Substantial postprocessing for vortex-core identification
  - The largest present runs, and most future runs, will require geographically distributed domain decomposition (GD3). Is GD3 on grids a sensible approach?
- Test problem: homogeneous turbulence driven by a force of Arnold-Beltrami-Childress (ABC) form

Simulations to Study Reconnection
- Aref & Zawadzki (1992) presented numerical evidence that two nearby elliptical vortex rings will partially link - a benchmark problem in vortex dynamics
- They used a vortex-in-cell (VIC) method for 3D Euler flow
- Some numerical diffusion is associated with the VIC method, but it is very small

Example: Hopf Link
- Two linked circular vortex tubes as the initial condition
- Lattice Boltzmann algorithm for Navier-Stokes with very low viscosity (0.002 in lattice units)
- ELI variational result shown in dark blue and red; vorticity thresholding in light blue
- The dark blue and red curves do not unlink on the time scale of this simulation!

Example: Aref & Zawadzki’s Ellipses (Front View)
- Parameters obtained by correspondence with Aref & Zawadzki
- Lattice Boltzmann simulation with very low viscosity
- They do not link on the time scale of this simulation!

Same Ellipses (Side View)
- Note that not all minima are shown in the late stages of this evolution - only the time continuation of the original pair of ellipses
- Again: they do not link on the time scale of this simulation!
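Helicity and linking number come up repeatedly in these slides as topological diagnostics, so the following is a minimal numpy sketch (not part of the VORTONICS codes) of how the Gauss linking number of two discretized vortex filaments can be estimated. It approximates the double line integral Lk = (1/4π) ∮∮ (r1 − r2) · (dr1 × dr2) / |r1 − r2|³ by a midpoint sum over pairs of segments; the two circles constructed below form a Hopf link, so the result should be close to ±1.

```python
import numpy as np

def linking_number(curve1, curve2):
    """Gauss linking number of two closed polylines, given as (N, 3) arrays."""
    # Segment vectors and segment midpoints of each discretized curve.
    d1 = np.roll(curve1, -1, axis=0) - curve1
    d2 = np.roll(curve2, -1, axis=0) - curve2
    m1 = curve1 + 0.5 * d1
    m2 = curve2 + 0.5 * d2
    # Pairwise midpoint separations, shape (N1, N2, 3).
    r = m1[:, None, :] - m2[None, :, :]
    cross = np.cross(d1[:, None, :], d2[None, :, :])
    integrand = np.einsum('ijk,ijk->ij', r, cross) / np.linalg.norm(r, axis=2) ** 3
    return integrand.sum() / (4.0 * np.pi)

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
# A unit circle in the xy-plane, and a second circle threaded through it:
# together they form a Hopf link.
ring1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
ring2 = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)

print(linking_number(ring1, ring2))  # ~ +1 or -1, depending on orientations
```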
Lattice Remapping, Fourier Resizing, and Computational Steering
- At its lowest level, VORTONICS contains a general remapping library for dynamically changing the layout of the computational lattice across the processors (pencils, blocks, slabs) using MPI
- All data on the computational lattice can be Fourier resized (FFT, augmentation or truncation in k-space, inverse FFT) as it is remapped (see the first sketch after these slides)
- All data-layout features are dynamically steerable
- VTK is used for visualization (each rank computes polygons locally)
- Grid-enabled with MPICH-G2, so that simulation, visualization, and steering can run anywhere, or even across sites

Vortex Generator Component
- Given a parametrization of a knot or link (future: "draw" a vortex knot)
- Superposes the contributions from each knot; each site on the 3D grid performs a line integral
- The resulting field is divergenceless and parameter-independent
- Periodic boundary conditions require an Ewald-like sum over image knots
- A Poisson solve (FFT) then gives the velocity field

Components for Fluid Dynamics
- Navier-Stokes codes:
  - Multiple-relaxation-time lattice Boltzmann
  - Entropic lattice Boltzmann
  - Pseudospectral Navier-Stokes solver
- All codes are parallelized with MPI (MPICH-G2): domain decomposition with halo swapping (see the second sketch after these slides)

Components for Visualization: Extremal Line Integral (ELI) Method
- Intuition: the line integral of vorticity along a vortex core is large
- Definition: a vortex core is a curve along which the line integral of vorticity is a local maximum in the space of all curves in the fluid domain, with appropriate boundary conditions:
  - For a smoke ring, periodic BCs
  - For a tornado, or the trailing vortex on an airplane wing, one end is attached to a zero-velocity boundary and the other is at infinity
  - For a "hairpin" vortex, both ends are attached to the boundary
- The result is a one-dimensional curve along the vortex core
- Two references available (Phil. Trans. & Physica A)

ELI Algorithm
- Evolve the curve in a "fictitious time" t under a Ginzburg-Landau equation for which the line integral is a Lyapunov functional
- An "equilibrium" of the Ginzburg-Landau equation is a vortex core

Computational Steering
- All components use computational steering; almost all parameters are steerable:
  - time step
  - frequency of checkpoints, outputs, logs, and graphics
  - stop and restart (read from checkpoint)
  - even the spatial lattice dimensions (dynamic lattice resizing)
  - halo thickness

Scenarios for Using the TFD Toolkit
- Run with periodic checkpointing until a topological change is noticed
- Rewind to the last checkpoint before the topological change; refine the spatial and temporal discretization and the viscosity
- Postprocessing of the vorticity field and the search for vortex cores can be migrated
- All components are portable and may run locally or on geographically separated hardware

Cross-Site Runs Before, During, and After SC05
- Federated grids:
  - US TeraGrid: NCSA, San Diego Supercomputer Center, Argonne National Laboratory, Texas Advanced Computing Center, Pittsburgh Supercomputing Center
  - UK National Grid Service: CSAR
- Task distribution
- GD3 - is it sensible for large computational lattices?

Run Sizes to Date / Performance
- Multiple-relaxation-time lattice Boltzmann (MRTLB) model
- 600,000 site updates per second (SUPS) per processor when run on one multiprocessor; performance scales linearly with the number of processors on one multiprocessor
- 3D lattice sizes up to 645^3 run prior to SC05 across six sites: NCSA, SDSC, ANL, TACC, PSC, and CSAR
- 528 CPUs to date, and larger runs in progress as we speak!
- Cross-site runs are strongly bandwidth limited - by the amount of data injected into the network
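The Fourier-resizing step described above is easy to illustrate in isolation. Here is a minimal single-process numpy sketch of the idea (the production library does this across MPI ranks while simultaneously remapping the lattice layout): forward FFT, copy the low-wavenumber modes common to the old and new lattices, renormalize, inverse FFT.

```python
import numpy as np

def fourier_resize(field, new_shape):
    """Resample a periodic real 3D field onto a lattice of new_shape."""
    old_shape = field.shape
    fk = np.fft.fftn(field)
    out = np.zeros(new_shape, dtype=complex)
    # Copy the low-|k| modes shared by both lattices, octant by octant,
    # using the fact that fftn stores negative frequencies at the end.
    keep = [min(o, n) // 2 for o, n in zip(old_shape, new_shape)]
    for sx in (slice(0, keep[0]), slice(-keep[0], None)):
        for sy in (slice(0, keep[1]), slice(-keep[1], None)):
            for sz in (slice(0, keep[2]), slice(-keep[2], None)):
                out[sx, sy, sz] = fk[sx, sy, sz]
    # Renormalize so the field values (not the raw spectrum) are preserved.
    scale = np.prod(new_shape) / np.prod(old_shape)
    return np.real(np.fft.ifftn(out) * scale)

# Usage: refine a 32^3 periodic field to 48^3, then coarsen it back.
x = np.linspace(0, 2 * np.pi, 32, endpoint=False)
f = np.sin(x)[:, None, None] * np.cos(2 * x)[None, :, None] * np.ones(32)
fine = fourier_resize(f, (48, 48, 48))
back = fourier_resize(fine, (32, 32, 32))
print(np.max(np.abs(back - f)))  # ~ 1e-15: this band-limited field round-trips
```

Applied to a smooth periodic field, refining and then coarsening round-trips to machine precision, which is what makes steered dynamic lattice resizing safe mid-run.

The halo swapping used by the fluid-dynamics components can likewise be sketched with mpi4py on a 1D slab decomposition. The lattice sizes and one-site halo below are illustrative assumptions; the real codes run over MPICH-G2 and also support pencil and block layouts.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
up, down = (rank + 1) % size, (rank - 1) % size  # periodic neighbors

nx_local, ny = 16, 32
# Local slab with one ghost (halo) layer on each side of the decomposed axis.
field = np.zeros((nx_local + 2, ny))
field[1:-1, :] = rank  # fill the interior with something recognizable

# Swap halos with paired Sendrecv calls, which cannot deadlock:
# pass the top interior row up, receiving the lower ghost row from below...
comm.Sendrecv(field[-2, :].copy(), dest=up, recvbuf=field[0, :], source=down)
# ...then pass the bottom interior row down, receiving the upper ghost row.
comm.Sendrecv(field[1, :].copy(), dest=down, recvbuf=field[-1, :], source=up)

print(f"rank {rank}: lower ghost = {field[0, 0]:.0f}, upper ghost = {field[-1, 0]:.0f}")
```

Run with, e.g., `mpirun -np 4 python halo.py`; each rank should report ghost rows holding its periodic neighbors' rank values.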
Run Sizes to Date / Performance (continued)
- Effective SUPS/processor is reduced by a factor approximately equal to the number of sites
- Total SUPS therefore stays approximately constant as the problem grows in size

Discussion / Performance Metric
- We are aiming for lattice sizes that cannot reside at any one supercomputing center, but...
- Bell, Gray, and Szalay, "PetaScale Computational Systems: Balanced CyberInfrastructure in a Data-Centric World" (September 2005):
  - If data can be regenerated locally, don't send it over the grid (10^5 ops/byte)
  - Higher disk-to-processing ratios - large disk farms
- Thought experiment: hold an enormous lattice at one supercomputing center by swapping n sublattices to a disk farm; if we cannot exceed that performance, it is not worth using the Grid for GD3
  - Make the very optimistic assumption that disk-access time is not limiting
  - Total SUPS is clearly constant, since it is one single multiprocessor, so SUPS/processor degrades by 1/n
  - We can match that now: 1/n is precisely the scaling we see in cross-site runs. GD3 is a win! And things are only going to improve...
- Improvements in store:
  - UDP with added reliability (UDT) in MPICH-G2 will improve bandwidth
  - Multithreading in MPICH-G2 will overlap communication with computation, hiding latency and bulk data transfers
  - Disk swapping moves volume data, while interprocessor communication moves only surface data - so keep the lattice in the processors! (See the back-of-envelope sketch after these slides.)

Conclusions
- GD3 is already a win on today's TeraGrid/NGS, with today's middleware
- With improvements to MPICH-G2, TeraGrid infrastructure, and middleware, GD3 will become still more desirable
- The TeraGrid will enable scientific computation with larger lattice sizes than have ever been possible
- It is worthwhile to consider algorithms that push the envelope in this regard, including relaxation of PDEs in 3+1 dimensions
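The surface-versus-volume remark in the improvements list can be made quantitative with a back-of-envelope sketch: each step, cross-site GD3 moves only halo (surface) data, whereas swapping a sublattice to a disk farm moves its entire volume. The lattice sizes, unit halo width, and D3Q19-style payload of 19 doubles per site below are illustrative assumptions, not figures from the VORTONICS runs.

```python
BYTES_PER_SITE = 19 * 8  # e.g. a D3Q19 lattice Boltzmann site stored in doubles

def halo_bytes(L, width=1):
    """Surface (halo) traffic per step for an L^3 sublattice with 6 faces."""
    return 6 * L * L * width * BYTES_PER_SITE

def volume_bytes(L):
    """Full-volume traffic to swap an L^3 sublattice out to disk."""
    return L ** 3 * BYTES_PER_SITE

for L in (128, 256, 512, 645):
    print(f"L = {L:4d}: volume/surface traffic ratio ~ {volume_bytes(L) / halo_bytes(L):6.1f}")
# The ratio grows linearly in L (exactly L/6 for a unit halo), so the larger
# the lattice, the more surface-only grid communication wins over swapping
# whole sublattices to disk.
```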
