keyes


Published on January 3, 2008

Author: Me_I

Source: authorstream.com

Content

Slide 1: Algorithms and Software for Scalable Fusion Energy Simulations
David E. Keyes
Center for Computational Science, Old Dominion University
Institute for Computer Applications in Science & Engineering, NASA Langley Research Center
Institute for Scientific Computing Research, Lawrence Livermore National Laboratory

Presuppositions:
- Capabilities of many physics codes are limited by their solvers: low temporal accuracy, low spatial accuracy, low fidelity.
- When applied to physically realistic applications, most "optimal" solvers ... aren't! Symptoms: slow linear convergence, unreliable nonlinear convergence, inefficient scaling.
- Path forward: a new generation of solvers that are enhanced by, but not limited by, the expression of the "physics."

Perspective:
- We're from the government, and we're here to help.
- We really want to help.
- We need your help.
- Our success (under SciDAC) is measured (mainly) by yours.

What's new since you last looked?:
- Philosophy of library usage: callbacks, extensibility, polyalgorithmic adaptivity.
- Resources for development, maintenance, and (some) support, not just for dissertation-scope ideas.
- Experience on ASCI applications and ASCI-scale computers.
- "Secret weapons."

Plan of presentation:
- Imperative of optimality in solution methods
- Domain decomposition and multilevel algorithmic concepts
- Illustrations of performance
- Three "secret weapons"
- Terascale Optimal PDE Simulations (TOPS) software project
- Conclusions

Boundary conditions from architecture:
- Algorithms must run on physically distributed memory units connected by a message-passing network, each unit serving one or more processors with multiple levels of cache.

Following the platforms ...:
- ... algorithms must be highly concurrent and straightforward to load balance, not communication bound, cache friendly (temporal and spatial locality of reference), and highly scalable (in the sense of convergence).
- Goal for algorithmic scalability: fill up the memory of arbitrarily large machines while preserving constant running time with respect to a proportionally smaller problem on one processor.
- Domain-decomposed multilevel methods are "natural" for all of these.
- Domain decomposition is also "natural" for software engineering.

Keyword: "Optimal":
- Convergence rate nearly independent of discretization parameters: multilevel schemes for rapid linear convergence of linear problems; Newton-like schemes for quadratic convergence of nonlinear problems.
- Convergence rate as independent as possible of physical parameters: continuation schemes; physics-based preconditioning.

Why optimal algorithms? (see the sketch below):
- The more powerful the computer, the greater the premium on optimality.
- Example: suppose Alg1 solves a problem in time CN^2, where N is the input size, and Alg2 solves the same problem in time CN. Suppose the machine on which Alg1 and Alg2 have been parallelized has 10,000 processors.
- In constant time (compared to serial), Alg1 can run a problem only 100x larger, whereas Alg2 can run a problem fully 10,000x larger. Or, filling up the machine, Alg1 takes 100x longer.
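The constant-time arithmetic behind this comparison can be made concrete with a small Python sketch; Alg1 and Alg2 are the hypothetical algorithms from the slide, and perfect parallel speedup is assumed.

```python
# Weak-scaling arithmetic for the hypothetical Alg1/Alg2 comparison.
# Serial costs: Alg1 ~ C*N**2, Alg2 ~ C*N.  With P processors and ideal
# speedup, the largest problem solvable in the original serial time
# satisfies cost(N_big)/P == cost(N).

import math

P = 10_000   # processors, as on the slide
N = 1.0      # baseline (one-processor) problem size, arbitrary units

N_big_alg1 = math.sqrt(P) * N   # N_big**2 / P == N**2  ->  N_big = sqrt(P)*N
N_big_alg2 = P * N              # N_big    / P == N     ->  N_big = P*N

print(f"Alg1: problem can grow {N_big_alg1 / N:.0f}x at constant run time")
print(f"Alg2: problem can grow {N_big_alg2 / N:.0f}x at constant run time")
```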
Imperative: deal with multiple scales:
- Multiple spatial scales: interfaces, fronts, and layers that are thin relative to the domain size.
- Multiple temporal scales: fast waves; small transit times relative to convection, diffusion, or group-velocity dynamics.
- The analyst must isolate the dynamics of interest and model the rest in a system that can be discretized over a computable (modest) range of scales.
- This may lead to idealizations of local discontinuity or of an infinitely stiff subsystem requiring special treatment.
- [Figure: Richtmyer-Meshkov instability, c/o A. Mirin, LLNL]

Multiscale stress on algorithms:
- Spatial resolution stresses condition number.
- Ill-conditioning: a small error in the input may lead to a large error in the output. For self-adjoint linear systems the condition number is kappa(A) = lambda_max / lambda_min, the ratio of maximum to minimum eigenvalue.
- With improved resolution we approach the continuum limit of an unbounded inverse; for the discrete Laplacian, kappa(A) = O(h^{-2}) for mesh spacing h.
- Standard iterative methods fail due to growth in iterations like O(kappa) or O(kappa^{1/2}).
- Direct methods fail due to memory growth and bounded concurrency.
- The solution is hierarchical (multilevel) iterative methods.

Multiscale stress on algorithms, cont.:
- Temporal resolution stresses stiffness.
- Stiffness: failure to track the fastest mode may lead to exponentially growing error in the other modes; for du/dt = Au it is related to the ratio of maximum to minimum eigenvalue of A.
- By definition, multiple-timescale problems contain phenomena with very different relaxation rates; certain idealized systems (e.g., incompressible Navier-Stokes) are infinitely stiff.
- The number of steps to a finite simulated time grows, to preserve stability, regardless of accuracy requirements.
- The solution is to step over the fast modes by assuming quasi-equilibrium. This throws temporally stiff problems into the spatially ill-conditioned regime, so an optimal implicit solver is needed.

Multiscale stress on architecture:
- Spatial resolution stresses memory size: the number of floating-point words and the precision of floating-point words.
- Temporal resolution stresses clock rate.
- Both stress interprocessor latency, and together they severely stress memory bandwidth.
- Less severely stressed for PDEs, in principle, are memory latency and interprocessor bandwidth (subject of the Europar2000 plenary; talk and paper available from my home page, URL later).
- Brute force is not an option.

Decomposition strategies for Lu = f in Omega:
- Operator decomposition
- Function space decomposition
- Domain decomposition

Operator decomposition:
- Consider ADI. The iteration matrix consists of four sequential ("multiplicative") substeps per timestep: two sparse matrix-vector multiplies and two sets of unidirectional bandsolves.
- There is parallelism within each substep, but global data exchanges between the bandsolve substeps.

Function space decomposition:
- Consider a spectral Galerkin method. [Equations not preserved in this extraction.]

Domain decomposition:
- Consider restriction and extension operators R_i, R_i^T for the subdomains, and R_0, R_0^T for a possible coarse grid.
- Replace the discretized system Au = f with the preconditioned system B^{-1}Au = B^{-1}f, where B^{-1} = R_0^T A_0^{-1} R_0 + sum_i R_i^T A_i^{-1} R_i.
- Solve by a Krylov method, e.g., CG.
- Matrix-vector multiplies with B^{-1}A involve parallelism on each subdomain, nearest-neighbor exchanges, global reductions, and a possible small global system (not needed for the parabolic case).

Comparison:
- Operator decomposition (ADI): natural row-based assignment; requires all-to-all, bulk data exchanges in each step (for the transpose).
- Function space decomposition (Fourier): natural mode-based assignment; requires all-to-all, bulk data exchanges in each step (for the transform).
- Domain decomposition (Schwarz): natural domain-based assignment; requires local (nearest-neighbor) data exchanges, global reductions, and an optional small global problem.

Primary (DD) PDE solution kernels:
- Vertex-based loops: state vector and auxiliary vector updates.
- Edge-based "stencil op" loops: residual evaluation; approximate Jacobian evaluation; Jacobian-vector product (often replaced with a matrix-free form involving residual evaluation); intergrid transfer (coarse/fine) in multilevel methods.
- Subdomain-wise sparse, narrow-band recurrences: approximate factorization and back substitution; smoothing.
- Vector inner products and norms: orthogonalization/conjugation; convergence progress and stability checks.

Illustration of edge-based loop (see the sketch below):
- Vertex-centered grid, traversed by edges: load the vertex values, compute intensively (e.g., for compressible flows, solve a 5x5 eigenproblem for the characteristic directions and speeds of each wave), and store flux contributions at the vertices.
- Each vertex appears in approximately 15 flux computations (for tets).
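A minimal sketch of such an edge-based residual loop in Python/NumPy: gather the two endpoint states of each edge, compute a numerical flux once, and scatter it to both vertices. The scalar upwind flux stands in for the 5x5 characteristic decomposition mentioned on the slide, and the mesh arrays are illustrative assumptions.

```python
# Edge-based "stencil op" loop: gather the two endpoint states of each edge,
# compute a numerical flux once per edge, and scatter it to both vertices.
# Scalar advection with first-order upwinding is an illustrative stand-in
# for the compressible-flow kernel described on the slide.

import numpy as np

def edge_based_residual(u, edges, edge_areas, velocity=1.0):
    """Accumulate flux contributions at vertices by looping over edges.

    u          : (n_vertices,) scalar state at each vertex
    edges      : (n_edges, 2) vertex indices of each edge
    edge_areas : (n_edges,) signed "face area" associated with each edge
    """
    residual = np.zeros_like(u)
    for (i, j), area in zip(edges, edge_areas):
        a = velocity * area                    # advective speed through the face
        flux = a * (u[i] if a >= 0 else u[j])  # first-order upwind flux
        residual[i] -= flux                    # leaves vertex i ...
        residual[j] += flux                    # ... and enters vertex j
    return residual

# Tiny example: a 1D chain of 5 vertices treated as an unstructured edge list.
u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
areas = np.ones(len(edges))
print(edge_based_residual(u, edges, areas))
```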
Complexities of PDE kernels:
- Vertex-based loops: work and data closely proportional; pointwise concurrency; no communication.
- Edge-based "stencil op" loops: large ratio of work to data; colored-edge concurrency; local communication.
- Subdomain-wise sparse, narrow-band recurrences: work and data closely proportional.
- Vector inner products and norms: work and data closely proportional; pointwise concurrency; global communication.

Potential architectural stress points:
- Vertex-based loops: memory bandwidth.
- Edge-based "stencil op" loops: load/store (register-cache) bandwidth; internode bandwidth.
- Subdomain-wise sparse, narrow-band recurrences: memory bandwidth.
- Inner products and norms: memory bandwidth; internode latency; network diameter.
- All steps: memory latency, unless good locality is consciously built in.

Theoretical scaling of domain decomposition (for three common network topologies*):
- With logarithmic-time (hypercube- or tree-based) global reductions and scalable nearest-neighbor interconnects: the optimal number of processors scales linearly with problem size ("scalable"; assumes one subdomain per processor).
- With power-law-time (3D torus-based) global reductions and scalable nearest-neighbor interconnects: the optimal number of processors scales as the three-fourths power of problem size ("almost scalable").
- With a linear-time (common bus) network: the optimal number of processors scales as the one-fourth power of problem size (not scalable). Bad news for conventional Beowulf clusters, but see the 2000 and 2001 Bell Prize "price-performance awards" using multiple commodity NICs per Beowulf node!
- * Subject of the DD'98 proceedings paper (on-line).

Basic concepts:
- Iterative correction (including CG and MG)
- Schwarz preconditioning

Iterative correction (see the sketch below):
- The most basic idea in iterative methods: evaluate the residual accurately, but solve approximately, u <- u + B^{-1}(f - Au), where B^{-1} is an approximate inverse to A.
- A sequence of complementary solves can be used; e.g., applying B_1^{-1} first and then B_2^{-1} gives the combined approximate inverse B^{-1} = B_1^{-1} + B_2^{-1} - B_2^{-1} A B_1^{-1}.
- Optimal polynomials of B^{-1}A lead to the various preconditioned Krylov methods.
- Scaling the recurrence, e.g., by nesting it over a hierarchy of grids, leads to multilevel methods.
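A minimal sketch of iterative correction (preconditioned Richardson iteration): the residual is evaluated accurately and the solve is approximate. The small diagonally dominant test matrix and the Jacobi choice of B^{-1} are illustrative assumptions; a Schwarz or multilevel B^{-1} plays the same role.

```python
# Iterative correction: evaluate the residual accurately, solve approximately.
# Here B^{-1} is the Jacobi (diagonal) approximate inverse of A, an
# illustrative choice standing in for a Schwarz or multilevel preconditioner.

import numpy as np

def iterative_correction(A, f, apply_Binv, u0=None, tol=1e-8, maxit=200):
    u = np.zeros_like(f) if u0 is None else u0.copy()
    for k in range(maxit):
        r = f - A @ u                 # accurate residual
        if np.linalg.norm(r) < tol * np.linalg.norm(f):
            return u, k
        u = u + apply_Binv(r)         # approximate solve: u <- u + B^{-1} r
    return u, maxit

# Example: a small diagonally dominant system with Jacobi preconditioning.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) * 0.05
A += np.diag(5.0 + np.abs(A).sum(axis=1))   # make it strictly diagonally dominant
f = rng.standard_normal(50)

jacobi = lambda r: r / np.diag(A)
u, iters = iterative_correction(A, f, jacobi)
print(iters, np.linalg.norm(f - A @ u))
```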
Multilevel preconditioning:
- [Figure: hierarchy of grids, from the finest grid down to coarser levels; the defining equations are not preserved in this extraction.]

Schwarz preconditioning (see the sketch below):
- Given Ax = b, partition x into subvectors corresponding to subdomains Omega_i of the domain of the PDE: nonempty, possibly overlapping, and whose union is all of the elements of x.
- Let the Boolean rectangular matrix R_i extract the i-th subset of x: x_i = R_i x.
- Let A_i = R_i A R_i^T and B^{-1} = sum_i R_i^T A_i^{-1} R_i.
- The Boolean matrices R_i are gather/scatter operators, mapping between a global vector and its subdomain support.
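A minimal sketch of applying the one-level additive Schwarz preconditioner B^{-1} = sum_i R_i^T A_i^{-1} R_i, with index arrays standing in for the Boolean gather/scatter matrices R_i; the 1D Laplacian and the overlapping partition are illustrative assumptions.

```python
# One-level additive Schwarz: restrict the residual to each (overlapping)
# subdomain, solve the local problem A_i = R_i A R_i^T exactly, and prolong
# the corrections back.  Index arrays play the role of the Boolean R_i.

import numpy as np

def additive_schwarz_apply(A, r, subdomains):
    """Return B^{-1} r with B^{-1} = sum_i R_i^T A_i^{-1} R_i."""
    z = np.zeros_like(r)
    for idx in subdomains:                      # idx encodes the Boolean R_i
        A_i = A[np.ix_(idx, idx)]               # A_i = R_i A R_i^T
        z[idx] += np.linalg.solve(A_i, r[idx])  # R_i^T A_i^{-1} R_i r
    return z

# Example: 1D Laplacian on 32 points, four overlapping subdomains.
n = 32
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subdomains = [np.arange(max(0, s - 1), min(n, s + 9)) for s in range(0, n, 8)]

r = np.random.default_rng(1).standard_normal(n)
z = additive_schwarz_apply(A, r, subdomains)
print(z.shape, np.linalg.norm(z))
```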
Iteration count estimates from the Schwarz theory:
- In terms of N and P, where for d-dimensional isotropic problems N = h^{-d} and P = H^{-d} for mesh parameter h and subdomain diameter H, iteration counts may be estimated as follows. [Table of estimates for pointwise, one-level, and two-level Schwarz preconditioning not preserved in this extraction.]
- Krylov-Schwarz iterative methods typically converge in a number of iterations that scales as the square root of the condition number of the Schwarz-preconditioned system.

Comments on the Schwarz theory:
- Basic Schwarz estimates are for: self-adjoint, positive definite operators; exact subdomain solves; two-way overlapping with generous overlap, delta = O(H) (otherwise the two-level result degrades to O(1 + H/delta)).
- Extensible to: non-self-adjointness (convection); indefiniteness (wave Helmholtz); inexact subdomain solves; one-way neighbor communication; small overlap.

Comments on the Schwarz theory, cont.:
- The theory still requires a "sufficiently fine" coarse mesh, but the coarse space need not be nested in the fine space or in the decomposition into subdomains.
- Practice is better than one has any right to expect. "In theory, theory and practice are the same ... In practice they're not!" (Yogi Berra)

Newton-Krylov-Schwarz:
- Newton: nonlinear solver, asymptotically quadratic.
- Krylov: accelerator, spectrally adaptive.
- Schwarz: preconditioner, parallelizable.
- Popularized in parallel Jacobian-free form under this name by Cai, Gropp, Keyes & Tidriri (1994).

Jacobian-Free Newton-Krylov method (see the sketch below):
- In the Jacobian-Free Newton-Krylov (JFNK) method, a Krylov method solves the linear Newton correction equation, requiring Jacobian-vector products.
- These are approximated by Frechet derivatives, J(u)v ~ [F(u + epsilon*v) - F(u)] / epsilon, so that the actual Jacobian elements are never explicitly needed; epsilon is chosen with a fine balance between approximation error and floating-point rounding error.
- Schwarz preconditions the Krylov iteration, using an approximate Jacobian.
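A compact sketch of the JFNK idea using SciPy's GMRES: the Jacobian is never formed, and its action on a vector is approximated by the Frechet difference quotient. The small nonlinear test problem and the choice of epsilon are illustrative assumptions, and no Schwarz preconditioner is applied in this sketch.

```python
# Jacobian-free Newton-Krylov: an outer Newton loop with GMRES as the inner
# Krylov solver; the Jacobian-vector product J(u) v is replaced by the
# finite-difference Frechet approximation [F(u + eps*v) - F(u)] / eps.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    """Nonlinear residual: a periodic diffusion-reaction system (illustrative)."""
    return 3.0 * u - np.roll(u, 1) - np.roll(u, -1) + u**3 - 1.0

def jfnk(u0, newton_tol=1e-10, max_newton=20):
    u = u0.copy()
    n = u.size
    for _ in range(max_newton):
        Fu = F(u)
        if np.linalg.norm(Fu) < newton_tol:
            break
        normu = np.linalg.norm(u)

        def jv(v):
            # Frechet-derivative approximation of J(u) v.
            eps = 1e-7 * (1.0 + normu) / max(np.linalg.norm(v), 1e-30)
            return (F(u + eps * v) - Fu) / eps

        J_op = LinearOperator((n, n), matvec=jv)
        du, _info = gmres(J_op, -Fu)     # solve J(u) du = -F(u), matrix-free
        u = u + du
    return u

u = jfnk(np.zeros(16))
print(np.linalg.norm(F(u)))
```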
NKS in transport modeling (thermal convection problem, Ra = 1000):
- Newton-Krylov solver using Aztec non-restarted GMRES with (a) a one-level domain decomposition preconditioner with an ILUT subdomain solver, and (b) ML two-level DD with a Gauss-Seidel subdomain solver. Coarse solver: "exact" = SuperLU (1 processor), "approximate" = one step of ILU (8 processors in parallel).
- [Figures: temperature isolines on a slice plane, velocity isosurfaces and streamlines in 3D; and 3D results on 512 processors plotting average iterations per Newton step against total unknowns, with growth annotated roughly as N^0.45, N^0.24, and N^0 for the different preconditioner/coarse-solver variants.]
- c/o J. Shadid and R. Tuminaro; MPSalsa/Aztec Newton-Krylov-Schwarz solver, www.cs.sandia.gov/CRF/MPSalsa

Computational aerodynamics:
- [Figures: unstructured mesh, c/o D. Mavriplis, ICASE; transonic "lambda" shock, Mach contours on surfaces.]
- Implemented in PETSc, www.mcs.anl.gov/petsc

Fixed-size parallel scaling results:
- c/o K. Anderson, W. Gropp, D. Kaushik, D. Keyes and B. Smith.
- This scaling study, featuring our widest range of processor counts, was done for the incompressible case.

Improvements from locality reordering:
- [Table not preserved in this extraction.]

Nonlinear Schwarz preconditioning:
- Nonlinear Schwarz has Newton both inside and outside and is fundamentally Jacobian-free.
- It replaces F(u) = 0 with a new nonlinear system possessing the same root, script F(u) = 0.
- Define a correction delta_i(u) to the i-th partition (e.g., subdomain) of the solution vector by solving the local nonlinear system R_i F(u + delta_i(u)) = 0, where delta_i(u) is nonzero only in the components of the i-th partition.
- Then sum the corrections: script F(u) = sum_i delta_i(u).

Nonlinear Schwarz, cont. (see the sketch below):
- It is simple to prove that if the Jacobian of F(u) is nonsingular in a neighborhood of the desired root, then script F(u) = 0 and F(u) = 0 have the same unique root.
- To obtain a Jacobian-free Newton-Krylov algorithm we need to be able to evaluate, for any u and v, the residual script F(u) = sum_i delta_i(u) and the Jacobian-vector product script F'(u) v.
- Remarkably (Cai-Keyes, 2000), it can be shown that script F'(u) v ~ sum_i (R_i^T J_i^{-1} R_i) J v, where J = F'(u) and J_i = R_i J R_i^T.
- All required actions are available in terms of F(u)!
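A minimal sketch of evaluating the nonlinearly preconditioned residual script F(u) = sum_i delta_i(u): each overlapping subdomain contributes a correction obtained from a local nonlinear solve. The 1D test problem, the two overlapping index blocks, and the use of scipy.optimize.fsolve for the local solves are illustrative assumptions, not the Cai-Keyes implementation.

```python
# Nonlinear (additive) Schwarz residual: for each overlapping subdomain i,
# solve the local nonlinear system R_i F(u + delta_i) = 0 for a correction
# delta_i supported only on subdomain i, then sum the corrections.
# scipy.optimize.fsolve stands in for the local Newton solves.

import numpy as np
from scipy.optimize import fsolve

def F(u):
    """Global nonlinear residual: 1D diffusion with a cubic reaction (illustrative)."""
    r = 2.0 * u - np.roll(u, 1) - np.roll(u, -1) + u**3 - 1.0
    r[0], r[-1] = u[0], u[-1]          # Dirichlet ends pinned to zero
    return r

def nonlinear_schwarz_residual(u, subdomains):
    """Evaluate script F(u) = sum_i delta_i(u)."""
    script_F = np.zeros_like(u)
    for idx in subdomains:
        def local_res(d):
            v = u.copy()
            v[idx] += d                 # correction supported on subdomain i
            return F(v)[idx]            # R_i F(u + delta_i)
        delta = fsolve(local_res, np.zeros(len(idx)))
        script_F[idx] += delta
    return script_F

n = 24
subdomains = [np.arange(0, 14), np.arange(10, 24)]   # two overlapping blocks
u = np.zeros(n)
print(np.linalg.norm(F(u)), np.linalg.norm(nonlinear_schwarz_residual(u, subdomains)))
```

An outer Jacobian-free Newton-Krylov iteration applied to this residual, with the Jacobian-vector products approximated as on the slide, gives an ASPIN-type method.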
Experimental example of nonlinear Schwarz:
- [Figure: convergence comparison on a test problem not preserved in this extraction.]

What about nonelliptic problems?:
- No natural variational principle by which the rootfinding problem is equivalent to residual minimization; multigrid theory and intuition are "at sea" without a variational principle.
- Look to the First-Order System Least Squares (FOSLS) formulation of multigrid.
- FOSLS (Manteuffel & McCormick), grossly oversimplified: rewrite the system in terms of first-order partial derivatives only (by introducing as many new auxiliary variables as necessary), then "dot" the first-order system on itself and minimize.
- So far, excellent results for Maxwell's equations.

Physics-based preconditioning: shallow water equations example:
- The 1D shallow water system consists of a continuity equation and a momentum equation, with gravity wave speed sqrt(g h) for depth h. [Equations not preserved in this extraction.]
- Typically the advective speed is much smaller than the gravity wave speed, but stability restrictions would require timesteps based on the CFL criterion for the fastest wave in an explicit method.
- One can solve fully implicitly, or one can filter out the gravity wave by solving semi-implicitly.

1D shallow water equation example, cont.:
- Write the continuity equation (*) and the momentum equation (**) with the gravity-wave terms treated implicitly. Solving (**) for the momentum flux and substituting into (*) yields a scalar parabolic equation for the height variable. [Equations not preserved in this extraction.]

1D shallow water equation example, cont.:
- After the parabolic equation is spatially discretized and solved for the height variable, the momentum flux can be recovered from (**).
- One scalar parabolic solve and one scalar explicit update replace an implicit hyperbolic system.
- This semi-implicit operator splitting is foundational to multiple-scale problems in geophysical modeling; similar tricks are employed in aerodynamics (sound waves), MHD (multiple Alfven waves), etc.
- Temporal truncation error remains due to the lagging of the advection in (**).

Physics-based preconditioning (see the sketch below):
- In a Newton iteration, one seeks a correction ("delta") to the solution by inverting the Jacobian matrix on (the negative of) the nonlinear residual.
- A typical operator-split code also derives a "delta" to the solution, by some implicitly defined means, through a series of implicit and explicit substeps.
- This implicitly defined mapping from residual to "delta" is a natural preconditioner. Software must accommodate this!
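A rough sketch of the physics-based preconditioning idea in Python/SciPy: the operator-split "residual to delta" map (here, a solve of only the stiff part of the operator, with the slow advection left to the Krylov iteration) is wrapped as the preconditioner M for the Krylov solve of the full implicit system. The 1D periodic model operators, the coefficients, and the splitting itself are illustrative assumptions, not the shallow water system from the slides.

```python
# Physics-based preconditioning: the semi-implicit, operator-split map from
# residual to "delta" (inverting only the stiff term) is used as the
# preconditioner M for the Krylov solve of the full implicit Jacobian.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, dt = 64, 0.5

# Periodic 1D operators: "stiff" diffusion stands in for the fast-wave term,
# first-order upwind advection is the slow term.
lap = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
       - np.eye(n, k=n - 1) - np.eye(n, k=-(n - 1)))
adv = np.eye(n) - np.eye(n, k=-1) - np.eye(n, k=n - 1)

L_stiff = 100.0 * lap
J = np.eye(n) / dt + L_stiff + adv           # full implicit Jacobian

# Operator-split preconditioner: invert only the stiff semi-implicit part.
M_inv = np.linalg.inv(np.eye(n) / dt + L_stiff)
M = LinearOperator((n, n), matvec=lambda r: M_inv @ r)

rhs = np.sin(2 * np.pi * np.arange(n) / n)
x_pb, _ = gmres(J, rhs, M=M)                 # physics-based preconditioned solve

print("cond(J)       =", round(np.linalg.cond(J), 1))
print("cond(M^-1 J)  =", round(np.linalg.cond(M_inv @ J), 2))
print("preconditioned GMRES residual =", np.linalg.norm(rhs - J @ x_pb))
```

The sharp drop in condition number is the point: the Krylov method only has to account for the part of the physics that the operator-split step neglects.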
Slide 45 (SciDAC):
- Lab-university collaborations to develop reusable software "solutions" and partner with application groups.
- For FY2002, 51 new projects at $57M/year total: approximately one-third for applications, one-third for integrated software infrastructure centers, and one-third for grid infrastructure and collaboratories.
- 5 Tflop/s IBM SP platforms available for SciDAC: "Seaborg" at NERSC (#3 in the latest Top 500) and "Cheetah" at ORNL (being installed now).

Slide 46:
- 34 applications groups (BER, BES, FES, HENP)
- 7 ISIC groups (4 CS, 3 Math)
- 10 grid and data collaboratory groups
- Software integration and performance optimization

Introducing the "Terascale Optimal PDE Simulations" (TOPS) ISIC:
- Nine institutions, $17M, five years, 24 co-PIs.

Slide 48: Who we are ...
- ... the PETSc and TAO people
- ... the hypre and PVODE people
- ... the SuperLU and PARPACK people

Slide 49:
- Plus some university collaborators.

TOPS:
- Not just algorithms, but vertically integrated software suites.
- Portable, scalable, extensible, tunable, modular implementations.
- Starring PETSc and hypre, among other existing packages.

Background of the PETSc library (in which the FUN3D example was implemented):
- Developed under MICS at ANL to support research, prototyping, and production parallel solutions of operator equations in message-passing environments.
- Distributed data structures as fundamental objects: index sets, vectors/gridfunctions, and matrices/arrays.
- Iterative linear and nonlinear solvers, combinable modularly, recursively, and extensibly.
- Portable, and callable from C, C++, and Fortran.
- Uniform high-level API, with multi-layered entry.
- Aggressively optimized: copies minimized, communication aggregated and overlapped, caches and registers reused, memory chunks preallocated, inspector-executor model for repetitive tasks (e.g., gather/scatter).
- See http://www.mcs.anl.gov/petsc

User code/PETSc library interactions:
- [Diagram: user code supplies application initialization, function evaluation, Jacobian evaluation, and post-processing; PETSc supplies the main routine and the timestepping (TS), nonlinear (SNES), and linear (SLES) solvers with their KSP and PC components.]
- [Second version of the diagram: the function and Jacobian evaluation routines are marked as candidates to become automatically differentiated (AD) code.]

Background of the hypre library (to be combined with PETSc 3.0 under SciDAC by Fall '02):
- Developed under ASCI at LLNL to support research, prototyping, and production parallel solutions of operator equations in message-passing environments.
- Object-oriented design similar to PETSc, but concentrating on linear problems only.
- Richer in preconditioners than PETSc, with a focus on algebraic multigrid; also includes other preconditioners, including sparse approximate inverse (ParaSails) and parallel ILU (Euclid).
- See http://www.llnl.gov/CASC/hypre/

Hypre's "conceptual interfaces":
- [Slide c/o E. Chow, LLNL]

Example of hypre's scaled efficiency:
- [Slide c/o E. Chow, LLNL]

Abstract Gantt chart for TOPS:
- [Chart: rows for algorithmic development, research implementations (e.g., ASPIN), hardened codes (e.g., TOPSLib), applications integration (e.g., PETSc), and dissemination, plotted against time.]
- Each colored module represents an algorithmic research idea on its way to becoming part of a supported community software tool. At any moment (vertical time slice), TOPS has work underway at multiple levels.
- While some codes are in applications already, they are being improved in functionality and performance as part of the TOPS research agenda.
- Each TOPS researcher should be conscious of where on the "up and to the right" insertion path he or she is working, and when and to whom the next hand-off will occur.

Scope for TOPS:
- Design and implementation of "solvers": time integrators (with sensitivity analysis), nonlinear solvers (with sensitivity analysis), optimizers, linear solvers, and eigensolvers.
- Software integration.
- Performance optimization.
- [Diagram: dependences among the optimizer, sensitivity analyzer, time integrator, nonlinear solver, eigensolver, and linear solver components.]

TOPS philosophy on PDEs:
- Solution of a system of PDEs is rarely a goal in itself. PDEs are typically solved to derive various outputs from specified inputs, e.g., lift-to-drag ratios from angles of attack.
- The actual goal is characterization of a response surface or a design or control strategy; black-box approaches may be inefficient and insufficient.
- Together with analysis, sensitivities and stability are often desired. Tools for PDE solution should also support these related needs.

TOPS philosophy on operators:
- A continuous operator may appear in a discrete code in many different instances.
- Optimal algorithms tend to be hierarchical and nested iterative; processor-scalable algorithms tend to be domain-decomposed and concurrent iterative.
- The majority of progress toward the desired highly resolved, high-fidelity result occurs through cost-effective low-resolution, low-fidelity, parallel-efficient stages.
- Operator abstractions and recurrence must be supported.

It's 2002; do you know what your solver is up to?:
- Has your solver not been updated in the past five years?
- Is your solver running at 1-10% of machine peak?
- Do you spend more time in your solver than in your physics?
- Is your discretization or model fidelity limited by the solver?
- Is your time stepping limited by stability?
- Are you running loops around your analysis code?
- Do you care how sensitive your results are to parameters?
- If the answer to any of these questions is "yes", you are a potential customer!

TOPS project goals/success metrics: TOPS will have succeeded if users:
- understand the range of algorithmic options and their tradeoffs (e.g., memory vs. time, inner vs. outer iteration work);
- can try all reasonable options from different sources easily, without recoding or extensive recompilation;
- know how their solvers are performing;
- spend more time in their physics than in their solvers;
- are intelligently driving solver research, and publishing joint papers with TOPS researchers;
- can simulate truly new physics, as solver limits are steadily pushed back (finer meshes, higher-fidelity models, complex coupling, etc.).

Conclusions:
- Domain decomposition and multilevel iteration are the dominant paradigm in contemporary terascale PDE simulation.
- Several freely available software toolkits exist and successfully scale to thousands of tightly coupled processors for problems on quasi-static meshes.
- Concerted efforts are underway to make elements of these toolkits interoperate and to allow expression of the best methods, which tend to be modular, hierarchical, recursive, and, unfortunately, adaptive!
- Many challenges loom at the "next scale" of computation; undoubtedly, new theory and algorithms will be part of the interdisciplinary solution.

Acknowledgments:
- Collaborators: Xiao-Chuan Cai (Univ. of Colorado, Boulder); Dinesh Kaushik (ODU); the PETSc team at Argonne National Laboratory; the hypre team at Lawrence Livermore National Laboratory; the Aztec team at Sandia National Laboratories.
- Sponsors: DOE, NASA, NSF.
- Computer resources: LLNL, LANL, SNL, NERSC, SGI.

Related URLs:
- Personal homepage (papers, talks, etc.): http://www.math.odu.edu/~keyes
- SciDAC initiative: http://www.science.doe.gov/scidac
- TOPS project: http://www.math.odu.edu/~keyes/scidac
- PETSc project: http://www.mcs.anl.gov/petsc
- Hypre project: http://www.llnl.gov/CASC/hypre
- ASCI platforms: http://www.llnl.gov/asci/platforms

Bibliography:
- Knoll & Keyes, "Jacobian-Free Newton-Krylov Methods: Approaches and Applications," 2002, to be submitted to J. Comp. Phys.
- Cai & Keyes, "Nonlinearly Preconditioned Inexact Newton Algorithms," 2002, to appear in SIAM J. Sci. Comput.
- Gropp, Kaushik, Keyes & Smith, "High Performance Parallel Implicit CFD," 2001, Parallel Computing 27:337-362.
- Keyes, "Four Horizons for Enhancing the Performance of Parallel Simulations Based on Partial Differential Equations," 2000, Lect. Notes Comp. Sci. 1900:1-17, Springer.
- Gropp, Keyes, McInnes & Tidriri, "Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel CFD," 2000, Int. J. High Performance Computing Applications 14:102-136.
- Anderson, Gropp, Kaushik, Keyes & Smith, "Achieving High Sustained Performance in an Unstructured Mesh CFD Application," 1999, Proceedings of SC'99.
- Keyes, Kaushik & Smith, "Prospects for CFD on Petaflops Systems," 1999, in "Parallel Solution of Partial Differential Equations," Springer, pp. 247-278.
- Keyes, "How Scalable is Domain Decomposition in Practice?," 1998, in "Proceedings of the 11th Intl. Conf. on Domain Decomposition Methods," Domain Decomposition Press, pp. 286-297.
