Ron Price GridWorld presentation

Published on February 7, 2008

Author: Berta

Source: authorstream.com

Content

Digital Sherpa: Custom Grid Applications on the TeraGrid and Beyond

GGF18 / GridWorld 2006
Ronald C. Price, Victor E. Bazterra, Wayne Bradford, Julio C. Facelli
Center for High Performance Computing at the University of Utah
Partially funded by NSF ITR award #0326027

First Things First

HAPPY BIRTHDAY GLOBUS!!!

Roles & Acknowledgments

- Ron: Grid Architect and Software Engineer
- Victor: Research Scientist and Grid Researcher, user of many HPC resources
- Wayne: Grid Sys Admin
- Julio: Director

Thanks to the Globus mailing list and especially the Globus Alliance, and to the entire Center for High Performance Computing staff at the University of Utah.

Overview

- Problem and solution: the general problem, our solution, traditional approaches
- Past: sys admin caveats (briefly), concepts and implementation
- Present: examples, applications
- Future: applications, features

General Problem & Solution

General problem: many High Performance Computing (HPC) scientific projects require a large number of loosely coupled executions on numerous HPC resources, which cannot be managed manually.

Solution (Digital Sherpa): distribute the jobs of HPC scientific applications across a grid, allowing access to more resources with automatic staging, job submission, monitoring, fault recovery, and efficiency improvement.

Traditional Approach: "babysitter" scripts

"Babysitter" scripts are common, but in general they have some problems:

- Not scalable (written to work with a specific scheduler)
- Hard to maintain (typically a hack)
- Not portable (system specific)

Digital Sherpa & Perspective

A different perspective:

- Schedulers take a system-oriented perspective: many jobs on one HPC resource, and the user doesn't have control.
- Sherpa takes a user-oriented perspective: many jobs on many resources, and the user has control.

Digital Sherpa In General

Digital Sherpa is a grid application for executing HPC applications across many grid-enabled HPC resources. It automates non-scalable tasks such as staging, job submission, and monitoring, and includes recovery features such as resubmission of failed jobs. The goal is to allow any HPC application to easily interoperate with Digital Sherpa and become a custom grid application. Distributing jobs across HPC resources increases the amount of computing resources that can be accessed at a given time. Digital Sherpa has been used successfully on the TeraGrid, and many more applications of it are in progress.

So, what is Digital Sherpa?

Naming convention for the rest of the slides: Digital Sherpa = Sherpa.

Sherpa is a multi-threaded custom extension of the GT4 WS-GRAM client. It has been designed and planned to be scalable, maintainable, and usable directly by people or by other applications. It is based on the Web Services Resource Framework (WSRF) and implemented in Java 1.5 using the Globus Toolkit 4.0 (GT4). Sherpa can perform a complete HPC submission: stage data in, run and monitor a PBS job, stage data out, automatically restart failed jobs, and improve efficiency.
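The slides do not show Sherpa's source, but the handler model they describe, one thread per job walking the full submission lifecycle with automatic resubmission, can be sketched roughly as below. This is a minimal illustration under our own assumptions: the JobHandler class, the placeholder lifecycle methods, and the retry limit are invented for the sketch, and a real implementation would call the GT4 WS-GRAM and RFT client APIs at each step. The main method mirrors the invocation shown later in the slides (one handler per job description file).

// Minimal sketch of the handler model described above (not the Sherpa
// source). One thread per job walks the full submission lifecycle and
// resubmits on failure. All method bodies are placeholders; a real
// implementation would call the GT4 WS-GRAM/RFT client APIs.
public class JobHandler implements Runnable {
    private final String jobDescriptionFile; // e.g. "argonne_blah.xml"
    private final int maxRetries = 3;        // assumption: retry limit

    public JobHandler(String jobDescriptionFile) {
        this.jobDescriptionFile = jobDescriptionFile;
    }

    public void run() {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                stageIn();    // copy input data to the remote resource (RFT)
                submit();     // submit the job via WS-GRAM (e.g. to PBS)
                monitor();    // poll/listen until the job finishes
                stageOut();   // copy results back to the local machine (RFT)
                cleanUp();    // delete temporary files on the remote side
                return;       // Done
            } catch (Exception failed) {
                // auto-restart: resubmit the failed job
                System.err.println(jobDescriptionFile + " failed, retrying");
            }
        }
    }

    private void stageIn()  throws Exception { /* RFT transfer in  */ }
    private void submit()   throws Exception { /* WS-GRAM submit   */ }
    private void monitor()  throws Exception { /* wait for job end */ }
    private void stageOut() throws Exception { /* RFT transfer out */ }
    private void cleanUp()  throws Exception { /* remote cleanup   */ }

    // One handler thread per job description file on the command line.
    public static void main(String[] args) {
        for (String file : args) {
            new Thread(new JobHandler(file)).start();
        }
    }
}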
Why the name Sherpa?

Digital Sherpa takes its name from the Sherpa people, who are known for their great mountaineering skills in the Himalayas as expert route finders and porters. Like a sherpa, Digital Sherpa can:

- Find the route for you (find an HPC resource for your needs; future feature)
- Carry gear in for you (stage data in)
- Climb to the top (execute the job, and restart it if necessary)
- Carry gear out for you (stage data out)

Benefits and Significance

Benefits:

- Automation of login, data stage-in and stage-out, job submission, monitoring, and auto-restart if the job fails; efficiency improvement
- Distribute your jobs across various HPC resources to increase the amount of resources that can be used at a time
- Reduction of queue wait time by submitting jobs to several queues, resulting in an increase in efficiency
- Load balancing from increased granularity
- Can be called from a separate application

Significance:

- Automates the flow of large numbers of jobs within grid environments
- Increases the throughput of HPC scientific applications

Globus Toolkit 4

The Globus Toolkit is an open-source software toolkit used for building Grid systems and applications. Globus Toolkit 4.0.x (GT4) is the most recent release, and is best thought of as a Grid Development Kit (GDK). GT4 has four main components:

- Grid Security Infrastructure (GSI)
- Reliable File Transfer (RFT)
- Web Services Monitoring and Discovery Service (WS-MDS)
- Web Services Grid Resource Allocation and Management (WS-GRAM)

Sherpa Requirements

- Globus Toolkit 4, with the dependent GT4 components WS-GRAM (execution management) and RFT (data management)
- Java 1.5

Past: Sys Admin Caveats

We did a lot of initial testing and configuration. Build notes: http://wiki.chpc.utah.edu/index.php/System_Administration_and_GT4:_An_Addendum_to_the_Globus_Alliance_Quick_Start_Guide
GT 4.0.2 doesn't require postgres config.

Motivations for Creating Sherpa

- Allow scientists to be scientists in their own fields; don't force them to become computer scientists
- Eliminate the error-prone, time-consuming, non-scalable tasks of job submission, monitoring, and data staging
- Allow easy access to more resources
- Reduce total queue time
- Increase efficiency

Before Sherpa: BabySitter

BabySitter predates GT4. Conceptually, it had a resource manager and a handler, and used proprietary states similar to the external states of the managed job services in WS-GRAM. It was not a general solution: it was scheduler specific. We took GT4 into the lab as it became available.

Sherpa Conceptually Past and Present: States

Past: Null, Idle, Running, Done. We realized the Globus Alliance had already defined the states as GT4 was finalized.

Present: the external states of the managed job services in WS-GRAM: Unsubmitted, StageIn, Pending, Active, Suspended, StageOut, CleanUp, Done, Failed.
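For reference, the present state set can be written down directly as a Java enum. The "was:" comments give a rough correspondence to the old BabySitter states; that mapping is our assumption, not something stated in the slides.

// The WS-GRAM external job states Sherpa adopted (names from the slide).
// Comments give a rough, assumed correspondence to the BabySitter states.
public enum JobState {
    Unsubmitted,  // was: Null
    StageIn,
    Pending,      // was: Idle
    Active,       // was: Running
    Suspended,
    StageOut,
    CleanUp,
    Done,         // was: Done
    Failed
}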
Digital Sherpa Implementation: Choice of API, Past and Present

Past: BabySitter was a Java app using J2SSH to log in to the HPC resource and then query the output from the scheduler.

Present: the GT4 GDK WS-GRAM API. When the Sherpa code was written, JavaCOG and GAT did not work with GT4, and GT4 was needed. WS-GRAM hides scheduler-specific complexities.

The "BLAH" Example: Test Jobs

A test case for Sherpa: ***_blah.xml corresponds to ***_blah.out, and ***_blahblah.xml corresponds to blahblah.out …

- Stage in: local blahsrc.txt -> remote RFT server blah.txt
- Run: /bin/more blah.txt (stdout to blahtemp.out)
- Stage out: remote RFT server blahtemp.out -> local blah.out
- Clean up: delete blahtemp.out at the remote HPC resource

Sherpa Input File

The input file makes use of the WS-GRAM XML schema. Example: argonne_blah.xml (file walk-through).
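The file walk-through itself did not survive the transcript, but a hedged sketch of how such a job description might be inspected is below. The DescribeJob class is hypothetical, and the element names queried (executable, argument, stdout, fileStageIn, fileStageOut, fileCleanUp) reflect our reading of the GT4 WS-GRAM job description schema; treat them as assumptions. The XML handling itself is plain JDK DOM.

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Hypothetical walk-through of a WS-GRAM job description file such as
// argonne_blah.xml: print the interesting elements. Element names are
// our assumption about the GT4 job description schema.
public class DescribeJob {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(args[0]); // e.g. argonne_blah.xml
        String[] tags = { "executable", "argument", "stdout",
                          "fileStageIn", "fileStageOut", "fileCleanUp" };
        for (String tag : tags) {
            NodeList nodes = doc.getElementsByTagName(tag);
            for (int i = 0; i < nodes.getLength(); i++) {
                System.out.println(tag + ": "
                        + nodes.item(i).getTextContent().trim());
            }
        }
    }
}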
"BLAH" on TeraGrid: Sherpa in Action

-bash-3.00$ java -DGLOBUS_LOCATION=$GLOBUS_LOCATION Sherpa argonne_blah.xml purdue_blahblahblah.xml ncsamercury_blahblah.xml
Starting job in: argonne_blah.xml
Handler 1 Starting...argonne_blah.xml
Starting job in: purdue_blahblahblah.xml
Handler 2 Starting...purdue_blahblahblah.xml
Starting job in: ncsamercury_blahblah.xml
Handler 3 Starting...ncsamercury_blahblah.xml
Handler 3: StageIn
Handler 2: StageIn
Handler 1: StageIn
Handler 3: Pending
Handler 1: Pending
Handler 2: Pending
Handler 2: Active
Handler 2: StageOut
Handler 1: Active
Handler 2: CleanUp
Handler 2: Done
Handler 2 Complete.
Handler 3: Active
Handler 1: StageOut
Handler 3: StageOut
Handler 1: CleanUp
Handler 3: CleanUp
Handler 1: Done
Handler 1 Complete.
Handler 3: Done
Handler 3 Complete.
-bash-3.00$ hostname -f
watchman.chpc.utah.edu

Sherpa Purdue Test Results

-bash-3.00$ more *.out
::::::::::::::
blahblahblah.out
::::::::::::::
BLAH BLAH BLAH

No PBS epilogue or prologue.

Sherpa NCSA Mercury Results

::::::::::::::
blahblah.out
::::::::::::::
----------------------------------------
Begin PBS Prologue Thu Apr 27 13:17:09 CDT 2006
Job ID:    612149.tg-master.ncsa.teragrid.org
Username:  price
Group:     oor
Nodes:     tg-c421
End PBS Prologue Thu Apr 27 13:17:13 CDT 2006
----------------------------------------
BLAH BLAH
----------------------------------------
Begin PBS Epilogue Thu Apr 27 13:17:20 CDT 2006
Job ID:    612149.tg-master.ncsa.teragrid.org
Username:  price
Group:     oor
Job Name:  STDIN
Session:   4042
Limits:    ncpus=1,nodes=1,walltime=00:10:00
Resources: cput=00:00:01,mem=0kb,vmem=0kb,walltime=00:00:06
Queue:     dque
Account:   mud
Nodes:     tg-c421
Killing leftovers...
End PBS Epilogue Thu Apr 27 13:17:24 CDT 2006
----------------------------------------

Sherpa UC/ANL Test Results

::::::::::::::
blah.out
::::::::::::::
----------------------------------------
Begin PBS Prologue Thu Apr 27 13:16:53 CDT 2006
Job ID:    251168.tg-master.uc.teragrid.org
Username:  rprice
Group:     allocate
Nodes:     tg-c061
End PBS Prologue Thu Apr 27 13:16:54 CDT 2006
----------------------------------------
BLAH
----------------------------------------
Begin PBS Epilogue Thu Apr 27 13:17:00 CDT 2006
Job ID:    251168.tg-master.uc.teragrid.org
Username:  rprice
Group:     allocate
Job Name:  STDIN
Session:   11367
Limits:    nodes=1,walltime=00:15:00
Resources: cput=00:00:01,mem=0kb,vmem=0kb,walltime=00:00:02
Queue:     dque
Account:   TG-MCA01S027
Nodes:     tg-c061
Killing leftovers...
End PBS Epilogue Thu Apr 27 13:17:16 CDT 2006
----------------------------------------

MGAC Background

Modified Genetic Algorithms for Crystals and Atomic Clusters (MGAC) is an HPC chemistry application written in C++. In short, MGAC tries to predict chemical structure based on an energy criterion. Its computing needs are local serial computations and distributed parallel computations.

MGAC & Circular Flow

MGAC-CGA: Real Science

Efficiency and HPC Resources

Scheduler side effect: suppose 1 job is submitted requiring 5 calculations, where 4 calculations require 1 hour of compute time each and 1 calculation requires 10 hours. The other 4 nodes are still reserved although not being used, and they can't be used by anyone else until the 10-hour job has finished: 4 x 9 = 36 node-hours of wasted compute time.
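To make the waste concrete, here is the slide's arithmetic as a small worked example (an illustration of the numbers above, not a measurement):

// Worked example of the scheduler side effect described above:
// all 5 nodes stay reserved until the longest calculation finishes.
public class SchedulerWaste {
    public static void main(String[] args) {
        double[] hours = { 1, 1, 1, 1, 10 };      // per-calculation runtimes
        double longest = 10;                       // job holds all nodes this long
        double reserved = hours.length * longest;  // 5 * 10 = 50 node-hours
        double used = 0;
        for (double h : hours) used += h;          // 14 node-hours of real work
        System.out.println("wasted: " + (reserved - used) + " node-hours"); // 36.0
        System.out.println("efficiency: " + (100 * used / reserved) + "%"); // 28.0%
    }
}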
Minimization & Waste Chart: MGAC

Minimization & Use Chart: MGAC-CGA

Efficiency and HPC Resources

Guesstimate: in one common MGAC run, our average efficiency due to the scheduler side effect is 46%; 54% of resources are wasted.

Sherpa continuously submits one job at a time, which reduces the scheduler side effect because multiple schedulers are involved and jobs are submitted in a more granular fashion.

- Improved efficiency #1: increased granularity. Necessary sharing policies prohibit a large number of jobs from being submitted all at one HPC resource; queue times become too long.
- Improved efficiency #2: access to more resources.

Guesstimate: total computational time (including queue time) was reduced by 60%-89% in our initial testing.

Sherpa Performance & Load Capability

Performance:

- Sherpa is lightweight; computationally intensive operations are done at the HPC resource
- Memory intensive

Load capability:

- It is hard to create a huge test case, since unique file names are needed
- Ran out of file handles around 100,000 jobs without any HPC submission (it turned out the system image software was misconfigured)
- Successfully initiated 500 jobs. Emphasis on initiated: 500 jobs appeared in the test queue, and although many ran to completion, we did not have time to let them all finish

Host Cert and Sherpa

Globus GSI uses PKI to verify that users and hosts are who they claim to be, creating trust. User certs and host certs are different, and they provide different functionality. Sherpa requires a Globus host certificate. ORNL granted us one; then policy changed, and it got CRL'd. Confusion: either WS-GRAM or RFT was requiring a valid host cert. We had to know if there was a way around the situation, so we did some testing to investigate and troubleshoot.

Testing/Trouble Shooting

TeraGrid CA Caveats

How do you allow your machines to fully interoperate with the TeraGrid without a host cert from a trusted CA? Not possible. How do you get a host cert for the TeraGrid? From least scalable to most scalable:

- Work with site-specific orgs to accept your CA's certs (tedious for multiple sites)
- Get the TeraGrid security working group's approval for a local university CA (time consuming, not EDU scalable)
- Get a TeraGrid-trusted CA to issue you one (unlikely, as site policy seems to contradict this)
- Become a TG member

Side note: a satisfactory scalable solution does not seem to be currently in place, and it's our understanding that Shibboleth and/or the International Grid Trust Federation (IGTF) will eventually offer this service for EDUs.

Not the End: Sherpa is Flexible

Sherpa can work between any two machines that have GT4 installed and configured. It is flexible, can work in many locations, and implicitly follows open standards.

Future Projects

MGAC-CGA is the first example; we have other projects with Sherpa:

- Nanotechnology simulation (web application)
- Biomolecular docking (circular flow), AKA protein docking, drug discovery
- Combustion simulation (web application)

Future Features and Implementation

Future efforts will be directed towards:

- Implementing monitoring and discovery client logic
- A polling feature that will help identify when system-related issues have occurred (e.g., network down, scheduler unavailable)
- Grid proxy auto-renewal
- Implementation: move to a more general API, such as the Simple API for Grid Apps Research Group (SAGA-RG), the Grid Application Toolkit (GAT), or JavaCOG

How do I get a Hold of Sherpa?

We are interested in collaborative efforts. Sorry, you can't download Sherpa, because we don't have the manpower for support right now.

Q&A With Audience

Mail questions to: [email protected]
Slides available at: http://www.chpc.utah.edu/~rprice/grid_world_2006/ron_price_grid_world_presentation.ppt
