NIIT faculty

Information about NIIT faculty

Published on October 16, 2007

Author: Natalia

Source: authorstream.com

Content

Stanford University, SLAC, NIIT, the Digital Divide & Bandwidth Challenge
Prepared by Les Cottrell, SLAC, for NIIT, February 22, 2005

Stanford University
- Location

Some facts
- Founded in the 1890s by Governor Leland Stanford and his wife Jane in memory of their son, Leland Stanford Jr.
- Apocryphal story of the foundation
- Movies were invented at Stanford
- 1600 freshman entrants/year (12% acceptance), 7:1 student:faculty ratio, students from 53 countries
- 169K living Stanford alumni

Some alumni
- Sports: Tiger Woods, John McEnroe
- Sally Ride, astronaut
- Vint Cerf, "father of the Internet"
- Industry: Hewlett & Packard, Steve Ballmer (CEO, Microsoft), Scott McNealy (Sun) ...
- Former heads of state: Ehud Barak (Israel), Alejandro Toledo (Peru)
- US politics: Condoleezza Rice, George Shultz, President Hoover

Some startups
- Founded Silicon Valley (turned orchards into companies): started by providing land and encouragement (investment) for companies founded by Stanford alumni, such as HP & Varian
- More recently: Sun (Stanford University Network), Cisco, Yahoo, Google

Excellence
- 17 Nobel prizewinners
- Stanford Hospital
- Stanford Linear Accelerator Center (SLAC) - my home: a national lab operated by Stanford University, funded by the US Department of Energy
- Roughly 1400 staff, plus contractors and outside users => 3000, ~2000 on site at a given time
- Fundamental research in: experimental particle physics, theoretical physics, accelerator research, astrophysics, synchrotron light research
- Has faculty to pursue the above research and awards degrees; 3 Nobel prizewinners

Work with NIIT
- Co-supervision of students, building research capacity, publishing etc., for example:
- Quantify the Digital Divide: develop a measurement infrastructure to provide information on the extent of the Digital Divide, within Pakistan and between Pakistan & other regions
- Improve understanding, provide planning information and expectations, identify needs
- Provide and deploy tools in Pakistan
- MAGGIE-NS collaboration projects:
  - TULIP - Faran
  - Network Weather Forecasting - Fawad, Fareena
  - Anomaly (detection, diagnosis and alerting) - Fawad, Adnan, Muhammad Ali
  - PingER Management - Waqar
  - MTBF/MTTR of networks - not assigned
  - Federating network monitoring infrastructures (Smokeping, PingER, AMP, MonALISA, OWAMP ...) - Asma, Abdullah
  - Digital Divide - Aziz, Akbar, Rabail

Quantifying the Digital Divide: A scientific overview of the connectivity of South Asian and African countries
Les Cottrell (SLAC), Aziz Rehmatullah (NIIT), Jerrod Williams (SLAC), Arshad Ali (NIIT)
Presented at the CHEP06 Meeting, Mumbai, India, February 2006
www.slac.stanford.edu/grp/scs/net/talk05/icfa-chep06.ppt

Introduction
- PingER project originally (1995) for measuring network performance for the US, European and Japanese HEP community
- Extended this century to measure the Digital Divide for the academic & research community
- Last year added monitoring sites in S. Africa, Pakistan & India
- Will report on network performance to these regions from the US and Europe - trends, comparisons
- Plus early results within and between these regions

Why does it matter?
- Scientists cannot collaborate as equal partners unless they have connectivity to share data, results, ideas etc.
- Distance education needs good communication for access to libraries, journals, educational materials, video, and access to other teachers and researchers

PingER coverage
- ~120 countries (99% of the world's connected population), 35 monitor sites in 14 countries
- New monitoring sites in Cape Town, Rawalpindi and Bangalore
- Monitors 25 African countries, containing 83% of the African population

Minimum RTT from US
- Indicates the best possible, i.e. no queuing
- >600ms probably means a geostationary satellite
- Only a few places are still using satellite, mainly in Africa
- Between developed regions, min-RTT is dominated by distance; little improvement possible
- (Data: Jan 2000 - Dec 2003)

World throughput seen from US
- Derived throughput ~ MSS/(RTT*sqrt(loss)) (Mathis)
- Behind Europe: 6 yrs: Russia, Latin America; 7 yrs: Mid-East, SE Asia; 10 yrs: South Asia; 11 yrs: Central Asia; 12 yrs: Africa
- South Asia, Central Asia and Africa are in danger of falling even farther behind
- Many sites in the Digital Divide have less connectivity than a residence in the US or Europe
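The Mathis formula above turns two quantities PingER actually measures (RTT and packet loss) into a throughput estimate. A minimal sketch in Python; the example numbers are illustrative, not measurements from the talk:

```python
# Mathis et al. TCP throughput estimate: throughput ~ MSS / (RTT * sqrt(loss)).
# Illustrative numbers only; not measurements from this talk.
from math import sqrt

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Approximate upper bound on TCP throughput, in Mbit/s."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# A 1460-byte MSS over a 250 ms, 1%-loss path (plausible for a satellite
# or congested intercontinental link) supports well under 1 Mbit/s:
print(mathis_throughput_mbps(1460, 0.250, 0.01))   # ~0.47
# A 50 ms, 0.1%-loss path does roughly 16x better:
print(mathis_throughput_mbps(1460, 0.050, 0.001))  # ~7.4
```

This is why the slides can rank regions by "years behind": small differences in loss rate compound with distance into order-of-magnitude throughput gaps.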
S. Asia & Africa from US
- Data very noisy, but there are noticeable trends
- India may be holding its own
- Africa & Pakistan are falling behind

Compare to US residence
- Sites in many countries have less bandwidth than a US residence

India to India
- Monitoring host in Bangalore since Oct '05
- Too early to tell much; also need more sites, have some good contacts
- 3 remote hosts (need to increase): R&E sites in Mumbai & Hyderabad, a government site in AP
- Lots of difference between sites; the government site sees heavy congestion

PERN: Network Architecture
[Network diagram: core ATM/routers in Karachi, Islamabad and Lahore interconnected by 2x2 Mbps links; 12, 22 and 23 universities attach via access routers, DXX and optical fibre nodes; international links of 2 and 4 Mbps; core link capacities of 57, 65 and 33 Mbps]
- HEC will invest $4M in the backbone
- 3 to 9 Points-of-Presence (core nodes)
- $2.4M from HEC to public universities for last-mile costs
- Possible dark fiber initiative

Pakistan to Pakistan
- 3 monitoring sites in Islamabad/Rawalpindi: NIIT via NTC, NIIT via Micronet, NTC (the PERN supplier)
- All monitor 7 universities in Islamabad, Lahore, Karachi and Peshawar
- Careful: many university sites have proxies in the US & Europe
- Minimum RTTs: best NTC 6ms; NIIT/NTC 10ms (an extra 4ms for the last mile); NIIT/Micronet 60ms (slower links, different routes)
- Queuing = Avg(RTT) - Min(RTT)
- NIIT/NTC heavily congested, 200-400ms of queuing; better when students are on holiday
- NIIT/Micronet & NTC OK
- Outages show fragility
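The queuing estimate on this slide is just the spread between the average and the best-case RTT. A minimal sketch with hypothetical ping samples:

```python
# Queuing delay per the slide: queuing = avg(RTT) - min(RTT).
# The samples below are hypothetical, not data from the talk.
rtt_samples_ms = [10.2, 214.8, 391.5, 16.7, 305.2, 12.1]

min_rtt = min(rtt_samples_ms)
avg_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
print(f"min RTT {min_rtt} ms, queuing ~ {avg_rtt - min_rtt:.0f} ms")  # ~148 ms
```

The min RTT captures the fixed propagation and transmission delay of the path; everything above it is time spent sitting in router queues, which is why a congested link like NIIT/NTC shows hundreds of milliseconds of spread.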
Pakistan Network Fragility
[Time-series charts for NIIT/Micronet, NIIT/NTC and NTC]
- NIIT/NTC heavily congested; other sites OK
- NIIT outage and remote-host outages visible

Pakistan International fragility
- Infrastructure appears fragile
- Losses to QEA & NIIT are 3-8% averaged over a month
- A fiber cut off Karachi caused a 12-day outage in Jun-Jul '05; huge losses of confidence and business
- Another fiber outage, this time of 3 hours: a power cable dug up by excavators of the Karachi Water & Sewage Board
- Typically once a month losses go to 20%

Many systemic factors: electricity, import duties, skills (M. Jensen)
- Average cost: $11/kbps/month

Routing in Africa
- Seen from ZA, only Botswana & Zimbabwe are direct; most routes go via Europe or the USA
- Wastes costly international bandwidth

Loss within Africa
[Map of packet loss within Africa]

Satellites vs Terrestrial
- Terrestrial links via SAT3 & SEA-ME-WE (Mediterranean)
- Terrestrial not available to all within countries; EASSy will help
- PingER min-RTT measurements from the S. African TENET monitoring station

Between Regions
- Red ellipses show within-region measurements; blue = min(RTT), red = min-avg RTT; India/Pakistan in green ellipses
- ZA heavy congestion; also Botswana, Argentina, Madagascar, Ghana, BF
- India better off than Pakistan

Overall
- Sorted by median throughput; within-region performance is better (blue ellipses)
- Europe, N. America, E. Asia and Russia generally good
- M. East, Oceania, S.E. Asia and L. America acceptable
- Africa, C. Asia and S. Asia poor

Examples
- India got Internet connectivity in 1989, China in 1994
- India has 34Mbits/s backbones, one possibly 622Mbits/s; China is deploying multiple 10Gbits/s
- Brazil and India had similar international connectivity in 2001; now Brazil is at multi-Gbits/s
- Pakistan's PERN backbone is 50Mbits/s, and end sites are ~1Mbits/s
- Growth in # Internet users (2000-2005): Brazil 420%, China 393%, Pakistan 5000%, India 900% - demand outstripping growth
- www.internetworldstats.com/stats.htm

Conclusions
- S. Asia and Africa are ~10 years behind and falling further behind, creating a Digital Divide within the Digital Divide
- India appears better off than Africa or Pakistan
- Last-mile problems, and network fragility
- Decreasing use of satellites, though still needed for many remote countries in Africa and C. Asia; the EASSy project will bring fibre to E. Africa
- Growth in # users 2000-2005: Africa 400%, Pakistan 5000% - networks not keeping up
- Need more sites in developing regions and a longer time period of measurements

More information
- Thanks to: Harvey Newman & ICFA for encouragement & support; Anil Srivastava (World Bank) & N. Subramanian (Bangalore) for India; NIIT, NTC and PERN for Pakistan monitoring sites; FNAL for PingER management support; Duncan Martin & TENET (ZA)
- Future: work with VSNL & ERnet for India, Julio Ibarra & Eriko Porto for L. America, NIIT & NTC for Pakistan
- ICFA/SCIC Monitoring report: www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan06/
- Paper on Africa & S. Asia: www.slac.stanford.edu/grp/scs/net/papers/chep06/paper-final.pdf
- PingER project: www-iepm.slac.stanford.edu/pinger/
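PingER's production machinery is far more elaborate, but the core measurement the talk relies on - periodic pings yielding min RTT, average RTT and loss per remote host - can be sketched in a few lines. This sketch assumes the Linux iputils `ping` output format, and the target host name is a placeholder:

```python
import re
import subprocess

def ping_stats(host: str, count: int = 10):
    """Return (min_rtt_ms, avg_rtt_ms, loss_fraction) for one ping run.

    Parses the summary lines printed by Linux iputils ping; other
    platforms format these lines differently.
    """
    out = subprocess.run(["ping", "-q", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    sent, received = map(int, re.search(
        r"(\d+) packets transmitted, (\d+) received", out).groups())
    rtts = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/", out)
    if received == 0 or rtts is None:
        return None, None, 1.0  # total loss: no RTT summary printed
    return float(rtts.group(1)), float(rtts.group(2)), 1 - received / sent

# Placeholder target; a PingER-style monitor would loop over many remote
# hosts and archive these numbers over months or years to expose trends.
print(ping_stats("example.edu"))
```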
SC|05 Bandwidth Challenge
ESCC Meeting, 9th February '06
Yee-Ting Li, Stanford Linear Accelerator Center

LHC Network Requirements
[Chart of LHC network requirements]

Overview
- Bandwidth Challenge: "The Bandwidth Challenge highlights the best and brightest in new techniques for creating and utilizing vast rivers of data that can be carried across advanced networks."
- Transfer as much data as possible using real applications over a 2-hour window
- We did... Distributed TeraByte Particle Physics Data Sample Analysis: "Demonstrated high speed transfers of particle physics data between host labs and collaborating institutes in the USA and worldwide. Using state of the art WAN infrastructure and Grid Web Services based on the LHC Tiered Architecture, they showed real-time particle event analysis requiring transfers of Terabyte-scale datasets."

Overview (in detail)
During the bandwidth challenge (2 hours):
- 131 Gbps measured by the SCinet BWC team on 17 of our waves (15-minute average)
- 95.37TB of data transferred (3.8 DVDs per second)
- 90-150Gbps sustained (peak 150.7Gbps)
On the day of the challenge:
- Transferred ~475TB "practising" (waves were shared; still tuning applications and hardware)
- Peak one-way USN utilisation observed on a single link was 9.1Gbps (Caltech) and 8.4Gbps (SLAC)
- Also wrote to StorCloud: SLAC wrote 3.2TB in 1649 files during the BWC; Caltech reached 6GB/sec with 20 nodes

Networking Overview
- We had 22 x 10Gbits/s waves to the Caltech and SLAC/FNAL booths. Of these:
- 15 waves to the Caltech booth (from Florida (1), Korea/GLORIAD (1), Brazil (1 x 2.5Gbits/s), Caltech (2), LA (2), UCSD, CERN (2), U Michigan (3), FNAL (2))
- 7 x 10Gbits/s waves to the SLAC/FNAL booth (2 from SLAC, 1 from the UK, and 4 from FNAL)
- The waves were provided by Abilene, CANARIE, Cisco (5), ESnet (3), GLORIAD (1), HOPI (1), Michigan Light Rail (MiLR), National Lambda Rail (NLR), TeraGrid (3) and UltraScienceNet (4)

Network Overview
[Network topology diagram]

Hardware (SLAC only)
At SLAC:
- 14 x 1.8GHz Sun v20z (dual Opteron)
- 2 x Sun 3500 disk trays (2TB of storage)
- 12 x Chelsio T110 10Gb NICs (LR)
- 2 x Neterion/S2io Xframe I (SR)
- Dedicated Cisco 6509 with 4 x 4x10GB blades
At SC|05:
- 14 x 2.6GHz Sun v20z (dual Opteron)
- 10 QLogic HBAs for StorCloud access
- 50TB storage at SC|05 provided by 3PAR (shared with Caltech)
- 12 x Neterion/S2io Xframe I NICs (SR)
- 2 x Chelsio T110 NICs (LR)
- Shared Cisco 6509 with 6 x 4x10GB blades

Hardware at SC|05
[Photos of the booth hardware]

Software
- BBCP ("Babar File Copy"): uses ssh for authentication; multiple-stream capable; features "rate synchronisation" to reduce byte retransmissions; sustained over 9Gbps on a single session
- XrootD: library for transparent file access (standard unix file functions); designed primarily for LAN access (transaction-based protocol); managed over 35Gbit/sec (in two directions) on 2 x 10Gbps waves; transferred 18TBytes in 257,913 files
- DCache: 20Gbps of production and test cluster traffic
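Sustaining rates like bbcp's 9 Gbps on a single session depends on the TCP window covering the path's bandwidth-delay product, which is also why multi-stream transfer helps when per-connection windows are capped. A back-of-the-envelope sketch; the 70 ms RTT is an assumed cross-country figure, not a number from the talk:

```python
# Bandwidth-delay product: the TCP window needed to keep a path full.
def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    return rate_bps * rtt_s / 8

rate, rtt = 9e9, 0.070  # 9 Gbit/s at an assumed 70 ms RTT
window = bdp_bytes(rate, rtt)
print(f"window needed: ~{window / 2**20:.0f} MiB")  # ~75 MiB

# With windows capped at, say, 2 MiB, the same rate needs parallel
# streams -- one reason tools like bbcp can open multiple streams:
cap = 2 * 2**20
print(f"streams at 2 MiB each: ~{window / cap:.0f}")  # ~38
```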
BWC Aggregate Bandwidth
[Chart of aggregate bandwidth, with last year's (SC|04) result for comparison]

Cumulative Data Transferred
[Chart of cumulative data transferred over the Bandwidth Challenge period]

Component Traffic
[Chart of SLAC-FermiLab-UK bandwidth contributions, in and out of the booth: SLAC-ESnet, FermiLab-HOPI, SLAC-ESnet-USN, FNAL-UltraLight, UKLight]

SLAC Cluster Contributions
[Chart of SLAC cluster traffic in and out of the booth over the Bandwidth Challenge period: ESnet routed, and ESnet SDN layer 2 via USN]

SLAC/FNAL Booth
[Chart of aggregate Mbps per wave]

Problems... (managerial and logistical)
- Managerial/PR: the initial request for loan hardware took place 6 months in advance! Lots and lots of paperwork to keep account of all the loan equipment
- Logistical: set up and tore down a pseudo-production network and servers in the space of a week!
- Testing could not begin until the waves were alight; most waves were lit the day before the challenge!
- Shipping so much hardware is not cheap!
- Setting up monitoring

Problems... (hardware and software)
- Tried to configure hardware and software prior to the show
- Hardware:
  - NICs: we had 3 bad Chelsios (bad memory); Xframe IIs did not work in UKLight's Boston machines
  - Hard disks: 3 dead 10K disks (had to ship in a spare)
  - 1 x 4-port 10Gb blade DOA
  - MTU mismatch between domains
  - A router blade died during stress testing the day before the BWC!
  - Cables! Cables! Cables!
- Software:
  - Used golden disks for duplication (still takes 30 minutes per disk to replicate!)
  - Linux kernels: initially used 2.6.14, found severe performance problems compared to 2.6.12
  - (New) router firmware caused crashes under heavy load; unfortunately this was only discovered just before the BWC, so we had to manually restart the affected ports during the BWC

Problems
- Most transfers were from memory to memory (ramdisk etc.), with local caching of (small) files in memory
- Reading from and writing to disk will be the next bottleneck to overcome

Conclusion
- Previewed the IT challenges of the next generation of data-intensive science applications (high energy physics, astronomy etc.): Petabyte-scale datasets; tens of national and transoceanic links at 10 Gbps (and up); 100+ Gbps aggregate data transport sustained for hours; we reached a Petabyte/day transport rate for real physics data
- Learned to gauge the difficulty of the global networks and transport systems required for the LHC mission
- Set up, shook down and successfully ran the systems in under 1 week
- Understood and optimized the configurations of various components (network interfaces, routers/switches, OS, TCP kernels, applications) for high performance over the wide area network

Conclusion (products of the exercise)
- An optimized Linux kernel (2.6.12 + NFSv4 + FAST and other TCP stacks) for data transport, after 7 full kernel-build cycles in 4 days
- A newly optimized application-level copy program, bbcp, that matches the performance of iperf under some conditions
- Extensions of XrootD, an optimized low-latency file access application for clusters, across the wide area
- Understanding of the limits of 10 Gbps-capable systems under stress
- How to effectively utilize 10GE- and 1GE-connected systems to drive 10-gigabit wavelengths in both directions
- Use of production and test clusters at FNAL reaching more than 20 Gbps of network throughput
- Significant efforts remain from the perspective of high-energy physics: management, integration and optimization of network resources, and end-to-end capabilities able to utilize them, including applications and IO devices (disk and storage systems)

Press and PR
- 11/8/05 - "Brit Boffins aim to Beat LAN speed record", vnunet.com
- "SC|05 Bandwidth Challenge", SLAC Interaction Point
- "Top Researchers, Projects in High Performance Computing Honored at SC|05 ...", Business Wire (press release), San Francisco, CA, USA
- 11/18/05 - Official winner announcement
- 11/18/05 - SC|05 Bandwidth Challenge slide presentation
- 11/23/05 - "Bandwidth Challenge Results", Slashdot
- 12/6/05 - Caltech press release
- 12/6/05 - "Neterion Enables High Energy Physics Team to Beat World Record Speed at SC05 Conference", CCN Matthews News Distribution Experts
- "High energy physics team captures network prize at SC|05", from SLAC and EurekAlert!
- 12/7/05 - "High Energy Physics Team Smashes Network Record", Science Grid this Week
- "Congratulations to our Research Partners for a New Bandwidth Record at SuperComputing 2005", Neterion
