winton


Published on October 23, 2007

Author: Lindon

Source: authorstream.com

Content

Grids, Data Grids, and High Energy Physics... A Melbourne perspective
Lyle Winton [email protected]

Overview
- Grids «««
- Data Grids «««
- Data Grids in practice «««
- Overview of products
- How it works
- HEP Grid Projects
- ATLAS Data Challenges
- University of Melbourne
- Project 1 - Belle Experiment
- Project 2 - Grid Interface

Grids
- "The Grid refers to an infrastructure that enables the integrated, collaborative use of high-end computers, networks, databases, and scientific instruments owned and managed by multiple organizations." - Globus
- "Tomorrow, the grid will be a single, sustained engine for scientific invention. It will link petaflops of computing power, petabytes of data, simulation and modeling codes of every stripe, sensors and instruments around the globe, and tools for discovering and managing these resources. At your desktop and at your whim, you'll have access to the world and its computing assets." - NCSA (National Center for Supercomputing Applications)
- The grid "...consists of physical resources (computers/clusters, disks and networks) and 'middleware' (software) that ensures the access and the co-ordinated use of such resources." - EDG (European Data Grid)

Grids
- The Grid is not "The Internet 2"
  - it uses existing internet infrastructure; however, Grid projects exist to extend current infrastructure
- The Grid does not replace cluster or parallel computing
  - it does not fix the problems of SMP/clustering
  - it is designed to help manage, share, and utilise existing technologies and resources
  - a cluster or SMP machine becomes a single Grid node or computation resource
  - resource information and status are available to Grid users
  - resources may be utilised by Grid users

Grids
- Analogy: a power grid
  - nodes that supply power and nodes that use power
  - standard connections are required (e.g. 240 V, 50 Hz)
  - users access and consume power as required, transparently

Data Grids
- What? Grids where access to data is as important as access to computational resources
- Where? Collaborative research or data-intensive research...
  - Experimental High Energy Physics (HEP): data focused, simulation & analysis data, large collaborations
    - LHC from 2006-2007 will produce 8 PB/yr of raw data (8x10^15 bytes; not all required by all users), accessible to 1000s of users in many institutes
  - Earth Systems Grid (ESG): climate studies in the US
    - 3 PB/yr of data, which requires 3 TFLOPS of processing power (1 MFLOPS ≈ P4 @ 1 MHz; 3 TFLOPS ≈ 1500 P4 2 GHz)
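The bracketed rule of thumb above is easier to check with the arithmetic written out. The snippet below is purely illustrative: the figures come from the slide, the conversion (1 MFLOPS per MHz of P4 clock) is the slide's own approximation, and nothing here is measured.

# Illustrative only: the slide's rule of thumb, 1 MFLOPS ~ a P4 at 1 MHz.
required_tflops = 3.0               # ESG processing requirement from the slide
mflops_per_cpu = 2000               # one P4 at 2 GHz ~ 2000 MFLOPS under that rule
cpus_needed = required_tflops * 1e6 / mflops_per_cpu
print(f"~{cpus_needed:.0f} P4 2 GHz CPUs")   # ~1500, as quoted above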
Data Grids
- Commercial applications?
  - Teleimmersive applications
    - engineering design environments: world-wide access to parts descriptions, assembly, finite element analysis, rendering
  - Entertainment industry
    - The Lord of the Rings: The Fellowship of the Ring - 45 TB of HD footage, additional scene overlays, computer generated images
    - 420 Intel 1 GHz CPUs plus 200 SGI, Mac, and PC workstations
    - workloads could be split across multiple production centres

Data Grids in Practice
- Why bother? Collaborations, experiments, and data are getting larger.
- Administration
  - time on system administration: required/collaborative software, accounts, security, network access ...
- Data Management
  - storage/transfer of large data is expensive
  - backup of large data is expensive/impossible
  - software changes + many users = proliferation of data
- Computational Resources
  - CPU required for peak processing loads can be expensive
  - idle/wasted CPU will exist if usage varies greatly
- Service Replication
  - reinventing of code for... distributed/parallel computing; data or database access; access to mass storage; packaging of software

Data Grids in Practice
- Advantages (goals) of Data Grids
  - Administration: simple grid-wide admin tools (security/data/software)
  - Data Management: world-wide access to data and resources (international collaborations); intelligent data storage, replication, caching (faster & cheaper); potentially better data description and versioning
  - Computational Resources: wider access to more computers and CPU; spread of load may lead to less idle time (less CPU required)
  - Services: intelligent job/process location (automatic move-code-to-data or vice versa; fast processing times; reduced cost of data access); parallel computing; transparent network infrastructure for users; standard API for mass storage, data, and database access; software packaging tools?

Data Grids in Practice
- Problems? How are we going to provide the above?
  - grids of 10,000+ computers must be administered
  - user groups of 1000+ must be administered
  - maintain access privileges for users/data/space/computers
  - monitoring of distributed processes, data, resources
  - job profiling and resource profiling (what job can run where)
  - job management (where and when will the job run) - a toy weighing of these factors is sketched below
    - CPU time / CPU count (for MPI/PVM)
    - other resources (memory, disk scratch...)
    - data location
    - current resource loads or availability
    - time/cost of data transfer
    - cost of CPU usage (smaller institutes may rent CPU time)
    - user/job permissions and priorities
    - resource restrictions/permissions and priorities
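To give a feel for the job-management considerations listed above, here is a minimal, hypothetical sketch of how a broker might combine them when placing one job. The resource records, field names, weights, and cost model are all invented for illustration; none of the middleware described in the next sections works exactly this way.

# Toy job placement: rank sites by a naive combination of the factors listed above
# (data location, transfer time, current load, and CPU cost).  Everything is hypothetical.
resources = [
    {"name": "site-a", "free_cpus": 0,   "data_local": True,  "cost_per_cpu_hr": 0.0},
    {"name": "site-b", "free_cpus": 128, "data_local": False, "cost_per_cpu_hr": 0.5},
]

def score(res, cpu_hours=10.0, data_gb=30.0, wan_mb_per_s=5.0, dollars_per_hour=1.0):
    """Rough 'effective hours' for one job at this site (lower is better)."""
    transfer_h = 0.0 if res["data_local"] else data_gb * 1024 / wan_mb_per_s / 3600
    queue_h = 0.0 if res["free_cpus"] > 0 else 12.0     # crude wait-time guess
    money_h = cpu_hours * res["cost_per_cpu_hr"] / dollars_per_hour
    return cpu_hours + transfer_h + queue_h + money_h

best = min(resources, key=score)
print("submit to", best["name"])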
Overview
- Grids
- Data Grids
- Data Grids in practice
- Overview of products «««
- How it works
- HEP Grid Projects
- ATLAS Data Challenges
- University of Melbourne
- Project 1 - Belle Experiment
- Project 2 - Grid Interface

Overview of Products
- Data Grid services and software are termed "middleware" (they lie between the fabric/systems and the users/applications)

Overview of Products
- Available packages
  - Globus Toolkit - the core of most grid middleware
  - European Data Grid (EDG, or EU-DataGrid, or DataGrid)
  - Virtual Data Toolkit (VDT - GriPhyN project)
  - Particle Physics Data Grid (PPDG) - no package as yet, just a collection of tools
  - Nile (CLEO experiment) - Java, CORBA/OrbixWeb based; not built on Globus!!!
  - Sun Grid Engine (SGE), Enterprise Edition

Overview of Products
- Available tools and services
  - Security
    - GSI - Grid Security Infrastructure (Globus)
  - Service/resource information (data/machine)
    - MDS - Metacomputing Directory Service (Globus)
    - GRIS - Grid Resource Information Service (Globus)
    - GIIS - Grid Index Information Service (Globus)
  - Resource management
    - RSL - Resource Specification Language (Globus) - method to exchange resource info & requirements
    - GRAM - Globus Resource Allocation Manager (Globus) - standard interface to computation resources like PBS/Condor
    - DUROC - Dynamically-Updated Request Online Co-allocator (Globus)
    - WMS - Workload Management System (EDG)
  - Data management
    - GSIFTP - high performance, secure FTP using GSI (Globus)
    - Replica Catalog - data filing and tracking system (Globus)
    - GASS - Globus Access to Secondary Storage (Globus) - access data stored in any remote file system by URL, with Unix-like calls fopen(), fclose()
    - GDMP - Grid Data Mirroring Package (EDG, GriPhyN, PPDG)

Overview of Products
- Available tools and services (cont.)
  - Data management (cont.)
    - Magda - distributed data manager (PPDG)
    - Spitfire - grid-enabled access to any RDBMS (EDG)
    - RLS - Replica Location Service (EDG)
  - Mass storage
    - HPSS - High Performance Storage System (Globus)
    - SRB - Storage Resource Broker (Globus)
  - Communication
    - Nexus and MPICH-G (Globus)
  - Monitoring
    - HBM - Heartbeat Monitor (Globus) - and many others!
  - Job managers
    - PBS - Portable Batch System
    - Condor - distributed computing environment
  - Fabric management
    - Cfengine - and many others!

Overview
- Grids
- Data Grids
- Data Grids in practice
- Overview of products
- How it works «««
- HEP Grid Projects
- ATLAS Data Challenges
- University of Melbourne
- Project 1 - Belle Experiment
- Project 2 - Grid Interface

How it's supposed to work!
- Workload Manager (EDG example)

How it's supposed to work!
- Workload Manager (EDG very simple version)

How it's supposed to work!
- Replica Catalog
  - data can be organised as "Logical File Names" in a virtual directory structure, which can be mapped to "Physical File Names"
  - the logical file structure is organised into Catalogs, Collections, Files
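As a rough illustration of the Replica Catalog idea above, the sketch below maps a Logical File Name to its Physical File Names and prefers a replica close to the user. The catalogue contents, host names, and site-preference rule are invented; the real Globus/EDG catalogue services are LDAP-based and far richer.

# Hypothetical LFN -> PFN mapping; real replica catalogues are services, not a dict.
replica_catalog = {
    "/belle/mdst/run0042.mdst": [
        "gsiftp://dc.kek.example.jp/belle/mdst/run0042.mdst",
        "gsiftp://epp.unimelb.example.edu/belle/mdst/run0042.mdst",
    ],
}

def pick_replica(lfn, preferred_domain="unimelb"):
    """Return a Physical File Name, preferring a replica in the local domain."""
    pfns = replica_catalog[lfn]
    local = [p for p in pfns if preferred_domain in p]
    return (local or pfns)[0]

print(pick_replica("/belle/mdst/run0042.mdst"))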
How it's supposed to work!
- File Replication (GDMP example)

How it's supposed to work!
- Object replication vs file replication
  - Existing experimental data (file replication)
    - compressed files of many independent events (random access is difficult or impractical)
    - groups of files of similar events are called "data-sets"
    - large data-sets are filtered for the most interesting events, stored as "skims"
  - Object replication seems the most efficient
    - each event is an object, potentially stored separately
    - a data-set is just a collection of unique event IDs
    - no event duplication in multiple data-sets
    - data processing accesses the nearest event
  - However, many current storage systems have scalability problems (events > 10^9 in number)
  - File replication problems are solved (in part) by replication of smaller common skims
  - Compromise: skim files might be (re)constructed by extracting events from the nearest data-set

How it's supposed to work!
- A User Example
  - Traditional method
    - ssh remote qstat (choose location to run)
    - scp files remote:dir (transfer auxiliary files: user code/libs, scripts, config)
    - ssh remote qsub < myrun.csh (run job or submit to queue)
  - Globus method (Grid resources) - scripted in the sketch below
    - grid-proxy-init (security sign on)
    - grid-info-search remote (choose Grid node to run)
    - globus-url-copy files remote (transfer auxiliary files)
    - globus-job-run myrun.csh (submit job to node)
  - Advantages
    - authenticate once, run anywhere (without agents)
    - greater access to resources (when more exist)
    - access to remote data resources
  - Disadvantages
    - essentially the same as the traditional method
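The Globus sequence above can be scripted in the obvious way. The sketch below wraps the same commands with Python's subprocess module; the host name and file paths are placeholders, the target node is assumed to have been chosen already (e.g. via grid-info-search), and the exact arguments accepted depend on the Globus Toolkit version and site configuration.

# Script the Globus steps from the slide.  Host, paths and arguments are placeholders.
import subprocess

REMOTE = "grid.example.org"   # hypothetical Grid node, chosen beforehand

subprocess.run(["grid-proxy-init"], check=True)            # security sign on
subprocess.run(["globus-url-copy",                         # transfer auxiliary files
                "file:///home/user/myrun.csh",
                f"gsiftp://{REMOTE}/scratch/myrun.csh"], check=True)
subprocess.run(["globus-job-run", REMOTE,                  # submit job to the node
                "/bin/sh", "/scratch/myrun.csh"], check=True)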
How it's supposed to work!
- A User Example (cont.)
  - Nimrod method (economic scheduler accesses multiple resources; University of Melbourne GridBus, www.gridbus.org)
    - grid-proxy-init (security sign on)
    - nimrod myrun.plan (register jobs and start Nimrod)
    - <Specify budget/deadline>
  - Advantages
    - file transfer is handled
    - Grid node choice is handled; multiple nodes can be used at once
    - cost and time considerations, including feasibility
  - Disadvantages
    - cost of data transfer is not yet considered (in development)

How it's supposed to work!
- A User Example (cont.)
  - Resource Broker method (EDG Workload Manager)
    - dg-job-submit myrun.profile (security sign on & submit)
  - Advantages
    - very simple!
    - file transfer is handled by the Resource Broker
    - Grid node choice is handled by the Resource Broker, including complex requirements
  - Disadvantages
    - cost, time, and feasibility are not considered (yet?)
  - Ideal future method
    - simple usage, authenticate once
    - file transfer and access handled transparently
    - choice of Grid resource handled transparently, including complex requirements
    - cost/quota, time, and feasibility consideration, including data access

Overview
- Grids
- Data Grids
- Data Grids in practice
- Overview of products
- How it works
- HEP Grid Projects «««
- ATLAS Data Challenges «««
- University of Melbourne
- Project 1 - Belle Experiment
- Project 2 - Grid Interface

HEP Grid Projects
- HENP (High-Energy and Nuclear Physics)
  - Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL)
    - experiments: PHOBOS, BRAHMS, PHENIX, STAR
  - HENP Data Access Grand Challenge
    - 50 TB of data, 100s of simultaneous users, many institutes
    - Mock Data Challenges arranged to test the systems
  - RHIC has been running since mid 2000
- LHC grid
  - Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN)
    - experiments: ATLAS, LHCb, ALICE, CMS
  - design and testing phase
  - each experiment will perform its own "Data Challenges"

HEP Grid Projects
- LHC Grid Infrastructure: the MONARC Model (CERN computing division)
  - hierarchy of nodes (collections/clusters of computers)
  - Tiers 0, 1, 2, 3, 4, 5?

HEP Grid Projects
- LHC Grid Infrastructure: the MONARC Model (cont.)

HEP Grid Projects
- CMS expected grid hierarchy
  - CMS is a symmetric p-p detector at the LHC looking for the Higgs
  - network connection between Tiers is very important

HEP Grid Projects
- ATLAS expected requirements (the unit conversion is worked through below)
  - 10^9 events per year
  - 3.5 PB of data per year
    - ~1.5 PB simulation data
    - ~1 PB raw data
    - ~200 TB ESD (event summary data)
    - ~20 TB analysis data
    - ~2 TB event tag data
  - 1000-1500 software users, ~150 simultaneous jobs, ~20 separate analyses
  - ~20 PB tape, ~2600 TB disk, ~2000 kSI95 in CPU
    (1 SI95 = 1 SPECint95 ≈ 40 MIPS ≈ P4 @ 20 MHz; 2000 kSI95 ≈ 20,000 P4 2 GHz)
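The bracketed equivalences are again easiest to read with the arithmetic spelled out; the snippet below restates the slide's own conversion and adds the average event rate implied by 10^9 events per year. It is illustrative only.

# Unit conversions using the slide's approximations; illustrative only.
kSI95 = 2000                    # ATLAS CPU requirement, in kSI95
mhz_per_SI95 = 20               # 1 SI95 ~ P4 @ 20 MHz (from the slide)
p4_2ghz_equiv = kSI95 * 1000 * mhz_per_SI95 / 2000   # 2000 MHz per P4 2 GHz
print(f"~{p4_2ghz_equiv:,.0f} P4 2 GHz CPUs")        # ~20,000, as quoted

events_per_year = 1e9
print(f"~{events_per_year / (365 * 24 * 3600):.0f} events/s averaged over a year")  # ~32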
ATLAS Data Challenges
- Aims
  - test readiness of infrastructure - is the grid ready for physics?
  - performance of computing hardware, network
  - performance (efficiency and speed) of the software
  - monitoring
  - scalability
  - test different structures: hierarchy vs uniform grid
- Specified 3 challenges (DC0, DC1, DC2)

ATLAS Data Challenges
- Data Challenge 0 (DC0) - completed 8 Mar
  - sample of 10^5 events in 1 month (trivial for the hardware)
  - "continuity test" of the software chain, including writing to / reading from persistent store
- Data Challenge 1 (DC1) - now!
  - Phase 0: April (preparation for phase 1)
  - Phase 1: May until 15 Jul (generate events for analysis)
    - samples of up to 10^7 events in 10-20 days (20-30 TB)
    - physics goals: re-check technical plots, find signal in event sample
  - Phase 2: 2 Sep until 2 Dec (software test, Geant4, databases)
    - infrastructure goals: test DB options (calibration/alignment machinery), several hundred PCs world wide, test basic grid functionality (I/O bottlenecks etc.)
- Data Challenge 2 (DC2) - starts in 2003
  - sample of 10^8 events in 3 months
  - complexity ~50% of the 2006-2007 LHC grid infrastructure
  - infrastructure and hardware: want to stress-test the model

Overview
- Grids
- Data Grids
- Data Grids in practice
- Overview of products
- How it works
- HEP Grid Projects
- ATLAS Data Challenges
- University of Melbourne «««
- Project 1 - Belle Experiment «««
- Project 2 - Grid Interface «««

University of Melbourne
- Local Grid Activities
  - VPAC (Victorian Partnership for Advanced Computing) funding to build expertise and resources in High Performance Computing and Data Grid technologies within HEP
  - Existing HEP analysis within a Grid environment (Belle experiment)
    - Belle is situated at the KEK B factory (Japan), researching CP violation in the Standard Model
    - implementation of Grid architecture to enable collaborative access to resources/data for an existing physics application
    - important contribution to the HEP and Grid communities by providing a "real-life" application
  - Grid software expertise and development
    - expertise in Grid resource software and deployment to take advantage of collaborative computing is important for future HEP research in Australia (most facilities are overseas)
    - software development for the wider HEP/Grid communities... physics-driven applications for job control and data manipulation

University of Melbourne
- Collaborative Grid Activities
  - Melbourne Advanced Research Computing Centre (MARCC; UofM HPC facility) - Dirk van der Knijff, Robert Sturrock
    - Data Grid infrastructure for HEP; currently taking part in the ATLAS DC
  - Computer Science (UofM) - Leon Sterling, Muthukkaruppan Annamalai
    - ontological framework for experimental HEP analysis
    - high-level descriptions of analysis for use by analysis code/agents
  - Computer Science, GridBus (UofM) - Rajkumar Buyya, Shoaib Ali Burq, Srikumar Venugopal
    - Resource Brokering and Economic Scheduling for the Grid and HEP
    - best/cheapest use of available Grid resources
  - Computer Science (RMIT) - Lin Padgham, Lito Cruz, Wei Liu, Antony Iorio
    - agent-based technology for experimental HEP analysis
    - intelligent services able to dissect, suggest, and perform analyses

Project 1 - Belle Experiment
- Belle analysis using a Grid environment: useful locally -> adopted by Belle -> wider community
- Construction of a Grid node at Melbourne
  - Certificate Authority to approve security
  - Globus Toolkit...
    - GRIS (Grid Resource Information Service) - LDAP with Grid security
    - Globus Gateway - connected to local queue (GNU Queue; PBS?)
    - GSIFTP - data resource providing access to local storage
    - Replica Catalog - LDAP for virtual data directory
- Initial test of Belle code with grid node & queue
- Data access via the grid (Physical File Names as stored in the Replica Catalog)
  - modification of Belle code to access the data on the grid
  - test of Belle code with grid node & queue & grid data access
- Connect 2 grid nodes (Melbourne EPP and HPC?)
  - test of Belle code running over a grid of 2+ nodes
- Implement or build a Resource Broker

Project 1 - Belle Experiment
- Melbourne Experimental Particle Physics Grid Node

Project 1 - Belle Experiment
- Our future HEP Belle analysis Grid

Project 1 - Belle Experiment
- Belle analysis test case... Rohan Dowd, Ph.D. student at Uni. of Melb.
  - analysis of charmless B meson decays to 2 vector mesons, used to determine 2 angles of the CKM unitarity triangle; these decays have not been measured previously
- Belle analysis code over Grid resources (10 files; 2 GB total)
  - data files processed serially: 95 mins
  - data files processed over Globus: 35 mins
- Data access (2 secure protocols, GASS/GSIFTP; 100 Mbit network)
  - NFS access for comparison: 8.5 MB/s
  - GASS access: 4.8 MB/s
  - GSIFTP access: 9.1 MB/s
- Belle analysis using Grid data access
  - NFS access for comparison: 0.34 MB/s
  - GSIFTP data streaming: 0.36 MB/s

Project 2 - Grid Interface
- Design of a generic grid/cluster/scheduler interface
- Why?
  - high level job submission, control, monitoring
  - automatic division of jobs into separate subjobs
- An extra layer!?
  - decouples the user interface from the underlying environment
  - allows development of the underlying environment with little change to the interface
  - allows access to different environments with little/no retraining

Project 2 - Grid Interface
- Existing Efforts
  - Globus RSL (Resource Specification Language): "language" for access to grid resources
  - EU DataGrid Workload Manager: tools to access the grid using JDL (Job Description Language)
  - JLab (Jefferson Lab): web interface to grid/PBS communicating in XML (cumbersome?) - many physicists are used to using command line access
  - CMS job specification
- Need for an extra layer between the physicist and grid tools/languages. Why? To reduce complexity from the user's point of view:
  - automatic creation of transparent subjobs, possibly for data-flow control or output collation
  - auxiliary file access
  - automatic calculation of job profile (memory size, system load, output size)

Project 2 - Grid Interface
- Design of generic Grid/Cluster Interface
  - both graphical and command-line
  - basis is a generic job description in XML (XJD)
  - generic job XML descriptions can undergo grid/network-specific XSL transforms to produce a network-specific job description (a minimal sketch follows below)
    - XJD(Generic) * XSLNetTrans(Network) -> XJD(Network)
    - e.g. MyJob * Our-PBS-Transform -> PBS commands, scripts, directives
    - e.g. MyJob * LHCDataGrid-Transform -> EDG commands and Grid RSL
    - e.g. MyJob * CMSJob-Transform ( * LHCDataGrid-Transform ) -> EDG commands and Grid RSL
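To make the XJD-times-transform idea concrete, here is a minimal sketch in which a generic XML job description is turned into a PBS-flavoured script by an XSL stylesheet. The element names, the stylesheet, and the output format are invented for illustration (they are not the project's actual XJD schema or transforms), and the third-party lxml package is assumed for the XSLT step.

# Hypothetical XJD document and a PBS-style transform; lxml provides the XSLT engine.
from lxml import etree

XJD = """<xjd>
  <name>MyJob</name>
  <executable>myrun.csh</executable>
  <cpus>4</cpus>
</xjd>"""

PBS_XSL = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/xjd">
#PBS -N <xsl:value-of select="name"/>
#PBS -l nodes=<xsl:value-of select="cpus"/>
./<xsl:value-of select="executable"/>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(PBS_XSL))
print(str(transform(etree.XML(XJD))))   # XJD(Generic) * PBS transform -> PBS script

A grid-specific transform (the "LHCDataGrid-Transform" of the examples above) would simply be a different stylesheet, emitting EDG commands and Grid RSL instead of PBS directives.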
Project 2 - Grid Interface
- Why XML?
  - industry is converging on this as a standard for information transfer across the internet
  - the W3C has agreed on a standard for XML implementation, use, and APIs that vendors will provide
  - multiple vendors provide XML parsing/transforms/traversal applications and utilities, which encourages development
  - access to a trained work force
- Future directions
  - create a grid service which allows passing of XJD to define a job
    - the node administrator maintains the transformation
    - simple way of controlling user access to resources
  - could be the basis for a new Resource Broker
    - MyJob * UserProfile * DataProfile * JobTypeProfile -> XJD
    - ResourceProfile * NetworkProfile * ChargingProfile

Summary
- The Grid will play an important role in collaborative research
  - the technology is a long way from finished
  - ATLAS Data Challenge is underway at Uni. of Melb.
- Belle analysis within a grid
  - construction of a grid infrastructure is underway
  - successfully utilised grid resources in an analysis test case
- Development of a generic Grid/Cluster Interface is partially complete
  - contributing to the wider community is the best way to gain experience
  - may lead to a new Resource Broker
- Make the experience gained accessible to the HEP (and perhaps wider) community for the betterment of collaborative research
  - looking for others who might be interested in Grid computing

References
- University of Melbourne, Experimental Particle Physics group - http://www.ph.unimelb.edu.au/epp/
- Victorian Partnership for Advanced Computing (VPAC) - http://www.vpac.org/
- Melbourne Advanced Research Computing Centre (MARCC) - http://www.hpc.unimelb.edu.au/
- University of Melbourne, Department of Computer Science & Software Engineering - http://www.cs.mu.oz.au/
- The GridBus project (Grid Computing and Business) - http://www.gridbus.org/
- Monash University, School of Computer Science & Software Engineering - http://www.csse.monash.edu.au/
- BELLE Collaboration - http://belle.kek.jp/
- ATLAS data challenges - http://atlas.web.cern.ch/Atlas/GROUPS/SOFTWARE/DC
- LHC Computing Grid project - http://lhcgrid.web.cern.ch/
- Globus project - http://www.globus.org/
- European Data Grid project (EDG) - http://www.eu-datagrid.org/
- Grid Data Mirroring Package (GDMP) - http://project-gdmp.web.cern.ch/project-gdmp/
- GriPhyN - Grid Physics Network - http://www.griphyn.org/
- Particle Physics Data Grid (PPDG) - http://www.ppdg.net/
- Nile - National Challenge Computing - http://www.nile.cornell.edu/
- Sun Grid Engine - http://wwws.sun.com/software/gridware
