usits2001 talk 1

Published on October 29, 2007

Author: Danielle

Source: authorstream.com

Content

Neptune: Scalable Replication Management and Programming Support for Cluster-based Network Services
Kai Shen, Tao Yang, Lingkun Chu, JoAnne L. Holliday, Douglas K. Kuschner, and Huican Zhu
Department of Computer Science, University of California, Santa Barbara
http://www.cs.ucsb.edu/research/Neptune

Motivations
Availability, incremental scalability, and manageability are key requirements for building large-scale network services, and they are especially challenging for services with frequent persistent data updates. Existing solutions for managing persistent data:
- Pure data partitioning: no availability guarantee; poor at dealing with runtime hot spots.
- Disk sharing: inherently unscalable; a single point of failure.
- Replication provided by database vendors: tied to specific database systems; inflexible consistency.

Neptune Project Goal
- Design a scalable clustering architecture for aggregating and replicating network services with persistent data.
- Provide a simple and flexible programming model that shields the complexity of data replication, service discovery, load balancing, and failover management.
- Provide flexible replica consistency support to address availability and performance tradeoffs for different services.

Related Work
- TACC, MultiSpace: infrastructure support for cluster-based network services.
- DDS: distributed persistent data structures for network services.
- Porcupine: cluster-based email service (with commutative updates).
- Bayou: weak consistency for wide-area applications.
- BEA Tuxedo: platform middleware supporting transactional RPC.

Outline
- Motivations & Related Work
- System Architecture and Assumptions
- Replica Consistency and Failure Recovery
- System Implementation and Service Deployments
- Experimental Studies

Partitionable Network Services
Characteristics of network services:
- Information independence: service data can be divided into independent categories (e.g. a discussion group).
- User independence: data accessed by different users tends to be independent (e.g. an email service).
Neptune targets partitionable network services:
- Service data can be divided into independent partitions.
- Each service access can be delivered independently on a single partition, or aggregated from sub-services each of which can be delivered independently on a single partition.

Conceptual Architecture for a Neptune Service Cluster
(architecture diagram)

Neptune Components
Neptune components on the client and server side:
- Neptune server module: starts, regulates, and terminates registered service instances, and maintains replica data consistency.
- Neptune client module: provides location-transparent access to application service clients.

Programming Interfaces
Request/response communication (a usage sketch follows this slide):
- Client-side API (called by service clients): NeptuneCall(CltHandle, Service, Partition, SvcMethod, Request, Response);
- Service interface (abstract interface that application services implement): SvcMethod(SvcHandle, Partition, Request, Response);
Stream-based communication: Neptune sets up a bidirectional stream between the service client and the service instance.
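The two signatures above can be illustrated with a minimal C sketch. Only the parameter lists come from the slide; the handle type, the buffer size, the hypothetical AddMessage method, and the stub dispatch inside NeptuneCall are assumptions for illustration, since the real client module performs service discovery, load balancing, and failover.

    #include <stdio.h>
    #include <string.h>

    typedef void *NEPTUNE_HANDLE;   /* assumed opaque handle type */

    /* Service side: an application-implemented method following the slide's
       abstract interface SvcMethod(SvcHandle, Partition, Request, Response).
       "AddMessage" is a hypothetical method of a discussion-group service. */
    int AddMessage(NEPTUNE_HANDLE svc, int partition,
                   const char *request, char *response)
    {
        /* A real implementation would apply the write to this partition's
           local data store; here we only fill in the response buffer
           (assumed to be 256 bytes). */
        snprintf(response, 256, "OK: appended to partition %d", partition);
        return 0;
    }

    /* Client side: stand-in for the slide's
       NeptuneCall(CltHandle, Service, Partition, SvcMethod, Request, Response).
       This stub simply dispatches to the local method. */
    int NeptuneCall(NEPTUNE_HANDLE clt, const char *service, int partition,
                    const char *svc_method, const char *request, char *response)
    {
        if (strcmp(svc_method, "AddMessage") == 0)
            return AddMessage(clt, partition, request, response);
        return -1;   /* unknown method */
    }

    int main(void)
    {
        char response[256];
        NeptuneCall(NULL, "DiscussionGroup", 7, "AddMessage",
                    "subject=hello&body=first post", response);
        printf("%s\n", response);
        return 0;
    }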
Assumptions
- All system modules follow the fail-stop failure model.
- Network partitions do not occur inside the service cluster.
- Neptune does allow persistent data to survive all-node failures.
- Atomic execution is supported if each underlying service module ensures atomicity in a stand-alone configuration.

Neptune Replica Consistency Model
A service access is called a write if it changes the state of persistent data; it is called a read otherwise.
- Level 1: Write-anywhere replication for commutative writes. Writes are accepted at any replica and propagated to peers. E.g. a message board (append-only).
- Level 2: Primary-secondary replication for ordered writes. Writes are only accepted at the primary node, then ordered and propagated to the secondaries.
- Level 3: Primary-secondary replication with staleness control, using a soft time-based staleness bound and progressive version delivery. This is not strong consistency, because writes complete independently at each replica.

Soft Time-based Staleness Bound
Semantics: each read serviced at a replica is at most x seconds stale compared to the primary. Important for services such as on-line auctions.
Implementation: each replica periodically announces its data version, and the Neptune client module directs requests only to replicas with a fresh enough version. The bound is soft, depending on network latency, announcement frequency, and intermittent packet losses.

Progressive Version Delivery
From each client's point of view:
- Writes are always seen by subsequent reads.
- Versions delivered for reads are progressive.
Important for services like on-line auctions.
Implementation: each replica periodically announces its data version; each service invocation returns a version number that the service client keeps as a session variable; the Neptune client module directs a read to a replica with an announced version >= all previously returned versions. (A sketch of this read routing follows below.)
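A minimal sketch of the level-3 read routing just described, under assumed data structures: the client module keeps each replica's last announced version and the time of that announcement, and picks a replica that is ahead of the session's highest returned version and fresh enough. The field names and the freshness test (announcement age used as a proxy for staleness relative to the primary) are simplifications for illustration, not the actual Neptune implementation.

    #include <time.h>

    struct replica_state {
        int    id;
        long   announced_version;  /* from the replica's periodic announcement */
        time_t announce_time;      /* when that announcement was received */
        int    is_primary;
    };

    /* Returns the index of an eligible replica for a read, or -1 if none
       qualifies (a real client module could then fall back to the primary). */
    int pick_read_replica(const struct replica_state *r, int n,
                          long session_version,    /* highest version returned to this client */
                          double staleness_bound,  /* the "x seconds" bound */
                          time_t now)
    {
        for (int i = 0; i < n; i++) {
            /* Progressive version delivery: never hand this client an older
               version than one it has already seen. */
            if (r[i].announced_version < session_version)
                continue;
            /* Soft staleness bound: skip secondaries whose announcement is too
               old; "soft" because it depends on announcement frequency and
               network delays, as the slide notes. */
            if (!r[i].is_primary &&
                difftime(now, r[i].announce_time) > staleness_bound)
                continue;
            return i;   /* a real module would also factor in replica load */
        }
        return -1;
    }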
Failure Recovery
A REDO log is maintained for each data partition at each replica. It has two portions:
- Committed portion: completed writes.
- Uncommitted portion: writes received but not yet completed.
Three-phase recovery for primary-secondary replication (levels 2 and 3):
1. Synchronize with the underlying service module;
2. Recover missed writes from the current primary;
3. Resume normal operations.
Only phase one is necessary for write-anywhere replication (level 1).

Outline
- Motivations & Related Work
- System Architecture and Assumptions
- Replica Consistency and Failure Recovery
- System Implementation and Service Deployments
- Experimental Studies

Prototype System Implementation on a Linux Cluster
- Service availability and node runtime workload are announced through IP multicast: multicast once a second, kept as soft state that expires in five seconds.
- Service instances can run either as processes or as threads in the Neptune server runtime environment.
- Each Neptune server module maintains a process/thread pool and a waiting queue.

Experience with Service Deployments
- On-line discussion group: view message headers, view message, and add message. All three consistency levels can be applied.
- Auction: level 3 consistency with staleness control is used.
- Persistent cache: stores key-value pairs (e.g. caching query results). Level 2 (primary-secondary) consistency is used.
These deployments allowed fast prototyping and implementation without worrying about replication/clustering complexities.

Experimental Settings for Performance Evaluation
Synthetic workloads: 10% and 50% write percentages; a balanced workload to assess best-case scalability; a skewed workload to evaluate the impact of runtime hot spots.
Metric: maximum throughput at which at least 98% of client requests complete within 2 seconds.
Evaluation environment: a Linux cluster of dual 400MHz Pentium II nodes with 512MB/1GB memory and dual 100Mb/s Ethernet interfaces, connected by a Lucent P550 Ethernet switch with 22Gb/s backplane bandwidth.

Scalability under Balanced Workload
NoRep is about twice as fast as Rep=4 under 50% writes. There is an insignificant performance difference across the three consistency levels under a balanced workload.

Skewed Workload
Each skewed workload consists of requests chosen from a set of partitions according to a Zipf distribution. The workload imbalance factor is defined as the proportion of requests directed to the most popular partition. For a 16-partition service, an imbalance factor of 1/16 indicates a completely balanced workload, while an imbalance factor of 1 means all requests are directed to one partition. (A small numeric sketch of this definition follows the Conclusions below.)

Impact of Workload Imbalance on Replication Degrees
Replication provides dynamic load sharing for runtime hot spots (Rep=4 can be up to 3 times as fast as NoRep). Setting: 10% writes; level-2 consistency; 8 nodes.

Impact of Workload Imbalance on Consistency Levels
Setting: 10% writes; replication degree 4; 8 nodes. Modest performance differences: up to 12% between level 2 and level 3, and up to 9% between level 1 and level 2.

Failure Recovery for Primary-secondary Replication
Graceful performance degradation; a performance drop after the three-node failure; errors and timeouts trail each recovery (write recovery and synchronization overhead).

Conclusions
Contributions:
- Scalable replication for cluster-based network services, with multi-level consistency and staleness control.
- A simple programming model that shields replication and clustering complexities from application service authors.
Evaluation results:
- Replication improves performance for runtime hot spots.
- The performance of level 3 consistency is competitive.
- Levels 2 and 3 carry extra overhead during failure recovery.

http://www.cs.ucsb.edu/research/Neptune
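The imbalance factor defined in the Skewed Workload slide above can be illustrated numerically: for a Zipf distribution over P partitions, the factor is simply the normalized weight of the rank-1 (most popular) partition. The exponent values below are illustrative assumptions, not the parameters used in the paper's experiments.

    #include <stdio.h>
    #include <math.h>

    /* Imbalance factor = fraction of requests hitting the most popular
       partition, assuming request popularity follows a Zipf distribution
       with exponent alpha over `partitions` partitions (rank 1 is the
       most popular). */
    double imbalance_factor(int partitions, double alpha)
    {
        double total = 0.0;
        for (int rank = 1; rank <= partitions; rank++)
            total += 1.0 / pow((double)rank, alpha);
        return 1.0 / total;   /* rank-1 weight is 1 / 1^alpha = 1 */
    }

    int main(void)
    {
        /* alpha = 0 gives a uniform workload: factor = 1/16 for 16 partitions. */
        printf("alpha=0.0 -> %.4f\n", imbalance_factor(16, 0.0));
        /* Larger alpha concentrates requests on one partition: factor -> 1. */
        printf("alpha=1.0 -> %.4f\n", imbalance_factor(16, 1.0));
        printf("alpha=3.0 -> %.4f\n", imbalance_factor(16, 3.0));
        return 0;
    }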
