The LHC Computing Grid – First Steps towards a Global Computing Facility for Physics (toc wkshp nov03)


Published on October 18, 2007

Author: Lucianna

Source: authorstream.com

Content

LHC Computing Grid Project – LCG

The LHC Computing Grid
First steps towards a Global Computing Facility for Physics
11 November 2003
Les Robertson – LCG Project Leader
CERN – European Organization for Nuclear Research, Geneva, Switzerland
[email protected]

LHC DATA

- The LHC accelerator – the largest superconducting installation in the world: 27 kilometres of magnets cooled to about –271 °C (1.9 K), colliding proton beams at an energy of 14 TeV and accumulating a very large database.
- The accelerator generates 40 million particle collisions (events) every second at the centre of each of the four experiments' detectors.
- Online computers reduce this to around 100 "good" events per second, which are recorded on disk and magnetic tape at 100–1,000 MegaBytes/sec.
- This adds up to about 15 PetaBytes per year – the equivalent of some 15 million DVD movies.

LHC data analysis

- Reconstruction: transform the signals from the detector into physical properties (energies, charges, tracks, particles, ...).
- Simulation: start from theory and the detector characteristics and compute what the detector should have seen.
- Analysis: find collisions with similar features and use complex algorithms to extract the physics.
- This is a collective, iterative discovery involving hundreds of physicists – professors and their students.

LHC data – storage & analysis

- ~15 PetaBytes – about 20 million CDs – each year!
- For scale: a CD stack holding one year of LHC data would be ~20 km tall, higher than Mont Blanc (4.8 km) or Concorde's cruising altitude (15 km) and approaching balloon altitude (30 km).
- Its analysis will need the computing power of ~100,000 of today's fastest PC processors.
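The scale comparisons above can be checked with simple arithmetic. The sketch below does so in Python; the accelerator live time (~10^7 seconds per year), the per-event size and the CD capacity and thickness are assumptions made for the estimate, not figures from the talk.

```python
# Back-of-envelope check of the figures above. Assumed (not from the
# talk): ~1e7 live seconds of running per year, ~1 MB per raw event,
# 700 MB capacity and 1.2 mm thickness per CD.

MB = 1e6   # bytes
PB = 1e15  # bytes

event_rate = 100          # "good" events recorded per second
event_size = 1 * MB       # assumed -> 100 MB/s, the low end of the
                          # quoted 100-1,000 MB/s recording rate
live_seconds = 1e7        # assumed accelerator live time per year

raw = event_rate * event_size * live_seconds
print(f"raw data per experiment: {raw / PB:.0f} PB/year")   # ~1 PB/year
# Four experiments plus simulated and derived data bring the total to
# the ~15 PB/year quoted on the slide.

total = 15 * PB
cds = total / (700 * MB)                      # ~21 million CDs
stack_km = cds * 1.2e-3 / 1000                # 1.2 mm per CD, in km
print(f"{cds / 1e6:.0f} million CDs, a stack ~{stack_km:.0f} km tall")
```

The stack comes out at ~26 km rather than the slide's ~20 km; the exact height depends on the assumed CD capacity and thickness, but the order of magnitude holds.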
Computing for LHC

- CERN has over 6,000 users from more than 450 institutes around the world: in Europe, 267 institutes with 4,603 users; elsewhere, 208 institutes with 1,632 users.
- Solution: computing centres, which were isolated in the past, will be connected into a computing grid – uniting the computing resources of particle physicists around the world.

LHC Computing Grid Project

- The LCG Project is a collaboration of the LHC experiments, the Regional Computing Centres and physics institutes, working together to prepare and deploy the computing environment that the experiments will use to analyse the LHC data.
- This includes support for applications – provision of common tools, frameworks, environment, data persistency – and the development and operation of a computing service that exploits the resources available to the LHC experiments in computing centres, physics institutes and universities around the world, presenting them as a reliable, coherent environment for the experiments.

LCG Project

- Technology Office (David Foster): overall coherence of the project; pro-active technology watch; long-term grid technology strategy; computing models.

Project Management Board

- Membership: the project management team, the SC2 and GDB chairs, experiment delegates, external projects (EDG, GridPP, INFN Grid, VDT, Trillium) and other resource suppliers (IN2P3, Germany, CERN-IT).
- Architects' Forum: the Applications Area manager, the experiment architects and the computing coordinators.
- Grid Deployment Board: experiment delegates and national/regional centre delegates.
- The PEB deals directly with the Fabric and Middleware areas.

Elements of a Production Grid Service

- Middleware: the systems software that interconnects the computing clusters at the regional centres to provide the illusion of a single computing facility – information publishing and finding, a distributed data catalogue, data management tools, a work scheduler, performance monitors, etc. (a toy illustration of the matching step follows this list).
- Operations: grid infrastructure services; registration, accounting and security; regional centre and network operations; grid operations centre(s) for trouble and performance monitoring and problem resolution – 24x7, around the world.
- Support: middleware and systems support for the computing centres; applications integration and production; user support – call centres/helpdesk with global coverage, documentation, training.
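To make the middleware roles concrete, here is a toy version of the matching step a work scheduler performs against information published by the sites. This is a minimal sketch, not the EDG/LCG resource broker itself: the site attributes, the job fields and the ranking rule are all invented for illustration, loosely modelled on the kind of data a grid information system publishes.

```python
# Toy resource broker: match a job against information published by sites.
# A real broker evaluates far richer attribute expressions; the attributes
# and ranking here are illustrative only.

from dataclasses import dataclass

@dataclass
class SiteInfo:
    name: str
    free_cpus: int
    max_cpu_minutes: int     # queue limit per job
    close_datasets: set      # datasets stored near this site

@dataclass
class Job:
    vo: str                  # Virtual Organisation submitting the job
    cpu_minutes: int
    input_dataset: str

def match(job: Job, sites: list) -> SiteInfo:
    """Pick a site that satisfies the job's requirements, preferring
    sites that already hold the input data (avoids wide-area transfers)."""
    candidates = [s for s in sites
                  if s.free_cpus > 0 and s.max_cpu_minutes >= job.cpu_minutes]
    if not candidates:
        raise RuntimeError("no site matches the job requirements")
    # Rank: data locality first, then free capacity.
    return max(candidates,
               key=lambda s: (job.input_dataset in s.close_datasets,
                              s.free_cpus))

sites = [
    SiteInfo("cern", free_cpus=50,  max_cpu_minutes=2880, close_datasets={"dc04.sim"}),
    SiteInfo("ral",  free_cpus=120, max_cpu_minutes=1440, close_datasets=set()),
    SiteInfo("fzk",  free_cpus=30,  max_cpu_minutes=2880, close_datasets={"dc04.sim"}),
]
job = Job(vo="cms", cpu_minutes=2000, input_dataset="dc04.sim")
print(match(job, sites).name)   # -> "cern": holds the data, most free CPUs
```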
LCG Service

- A certification and distribution process has been established.
- The middleware package combines components from the European DataGrid (EDG) and from the US (Globus, Condor, PPDG, GriPhyN) – the Virtual Data Toolkit.
- Agreement has been reached on principles for registration and security.
- Rutherford Lab (UK) will provide the initial Grid Operations Centre; FZK (Karlsruhe) will operate the Call Centre.
- The "certified" release was made available to 14 centres on 1 September: Academia Sinica Taipei, BNL, CERN, CNAF, Cyfronet Cracow, FNAL, FZK, IN2P3 Lyon, KFKI Budapest, Moscow State Univ., Prague, PIC Barcelona, RAL and Univ. Tokyo.

LCG Service – Next Steps

- Deployment status: 12 sites were active when the service opened on 15 September; 20 sites are active now; Pakistan, Sofia, Switzerland, ... are preparing to join.
- Preparing now to add new functionality in November, to be ready for 2004: the VO management system and the integration of mass storage systems.
- The experiments are now starting their tests on LCG-1; the CMS target is to have 80% of its production on the grid before the end of the PCP of DC04.
- It is essential that the experiments use all the features (including, and especially, data management) and exercise the grid model, even where this is not needed for the short-term challenges.
- Capacity will follow the readiness of the experiments.

LCG – Current Active Sites

(Map of the currently active LCG sites.)

Resources in Regional Centres

- Resources planned for the period of the data challenges in 2004; CERN provides ~12% of the total capacity.
- The numbers still have to be refined – different countries use different standards.
- Efficiency of use is a major question mark: reliability, efficient scheduling, and sharing between Virtual Organisations (user groups).

LCG Service Time-line

- Level 1 Milestone – opening of the LCG-1 service (scheduled for 1 July), to be used for simulated event productions and, later, as the physics computing service.
- It opened with a two-month delay and lower functionality than planned. Consequences: use by the experiments is only starting now (it was planned for end August); the decision on the final set of middleware for the 1H04 data challenges will be taken without experience of production running; there is less time for integrating and testing the service with the experiments' systems before the data challenges start next spring; and additional functionality will have to be integrated later.
- (Timeline chart: simulated event productions, first data, physics computing service; TDR = technical design report.)

LCG and EGEE

- An EU project has been approved to provide partial funding for the operation of a general e-Science grid in Europe, including the supply of suitable middleware: Enabling Grids for e-Science in Europe (EGEE).
- EGEE provides funding for 70 partners, the large majority of which have strong HEP ties; similar funding is being sought in the US.
- LCG and EGEE work closely together, sharing the management of and responsibility for: middleware – sharing out the work to implement the recommendations of HEPCAL II and ARDA; infrastructure operation – LCG will be the core from which the EGEE grid develops, which ensures compatibility and provides useful funding at many Tier 1, Tier 2 and Tier 3 centres; and deployment of HEP applications – a small amount of funding for testing and integration with the LHC experiments.

Middleware – The Next 15 Months

- Work closely with the experiments on developing experience with early distributed-analysis models using the grid: the multi-tier model; data management, localisation and migration; resource matching and scheduling; performance and scalability.
- Evolutionary introduction of new software – rapid testing and integration into the mainline services, while maintaining a stable service for the data challenges!
- Establish a realistic assessment of the grid functionality that we will be able to depend on at LHC startup – a fundamental input for the Computing Model TDRs due at the end of 2004.

Grids – Maturity Is Some Way Off

- Research still needs to be done in all the key areas of importance to LHC – e.g. data management, resource matching/provisioning, security.
- Our life would be easier if standards were agreed and solid implementations were available – but they are not. We are only now entering the second phase of development: everyone agrees on the overall direction, based on Web services, but these are not simple developments, and we are still learning how best to approach many of the problems of a grid.
- There will be multiple, competing implementations – some for sound technical reasons. We must try to follow these developments and influence the standardisation activities of the Global Grid Forum (GGF).
- It has become clear that LCG will have to live in a world of multiple grids, but there is no agreement on how grids should inter-operate. Common protocols? Federations of grids inter-connected by gateways (see the sketch after this slide)? Regional centres connecting to multiple grids?
- Running a service in this environment will be a challenge!
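Of the inter-operation options above, a federation connected by gateways amounts to an adapter that translates one grid's submission interface into another's. The sketch below is purely hypothetical: both grid interfaces and the gateway are invented to show the pattern and its cost – one translator for every pair of incompatible interfaces.

```python
# Hypothetical gateway between two grids with incompatible submission
# interfaces. The grid classes and their methods are invented for
# illustration; only the translation pattern matters.

class GridA:
    """Accepts jobs as (executable, arguments) pairs."""
    def submit(self, executable: str, arguments: list) -> str:
        return f"gridA-job-{id((executable, tuple(arguments)))}"

class GridB:
    """Accepts jobs as attribute dictionaries."""
    def enqueue(self, descriptor: dict) -> str:
        return f"gridB-job-{id(tuple(sorted(descriptor)))}"

class Gateway:
    """Presents GridA's interface, forwards to GridB.

    A federation of N grids needs such a translation at every boundary,
    which is why agreed common protocols would be the simpler answer.
    """
    def __init__(self, backend: GridB):
        self.backend = backend

    def submit(self, executable: str, arguments: list) -> str:
        descriptor = {"Executable": executable,
                      "Arguments": " ".join(arguments)}
        return self.backend.enqueue(descriptor)

# A client written against GridA's interface runs unchanged on GridB:
for grid in (GridA(), Gateway(GridB())):
    print(grid.submit("/bin/sh", ["run_analysis.sh"]))
```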
Human Resources Used to October 2003

(Chart: human resources used to October 2003 – CERN plus the applications effort at other institutes.)

Personnel Resources

- 140 FTEs, which includes all staff for LHC services at CERN (including networking) and the staff at other institutes working in the applications area.
- Does NOT include the Regional Centres or the EDG middleware effort.

CERN Capacity

- CERN capacity is approximately one third of the capacity at the major centres.
- This is research – so the "requirements" are constrained to fit the budget expectation.
- (Chart: estimated costs at CERN.)
