 # DYNAMIC PROGRAMMING


Published on March 6, 2009

Author: ankush85

Source: authorstream.com

Problems such as the knapsack problem and shortest paths can be solved by the greedy method, in which optimal decisions are made one at a time. For many problems, however, it is not possible to make stepwise decisions in such a way that the resulting sequence of decisions is optimal.

## DYNAMIC PROGRAMMING (Contd.)

Example: Suppose a shortest path from vertex i to vertex j is to be found. Let Ai be the set of vertices adjacent from vertex i. Which vertex of Ai should be the second vertex on the path? One way to solve the problem is to enumerate all decision sequences and pick out the best. In dynamic programming, the principle of optimality is used to reduce the number of decision sequences that must be considered.

## DYNAMIC PROGRAMMING (Contd.)

Principle of optimality: An optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision. In the greedy method only one decision sequence is generated; in dynamic programming many decision sequences may be generated.

## DYNAMIC PROGRAMMING (Contd.)

Example [0/1 knapsack problem]: the xi's in the 0/1 knapsack problem are restricted to either 0 or 1. Using KNAP(l, j, y) to represent the problem:

    maximize   Σ p_i x_i      (l ≤ i ≤ j)
    subject to Σ w_i x_i ≤ y  (l ≤ i ≤ j)        ........(1)
               x_i = 0 or 1   (l ≤ i ≤ j)

The 0/1 knapsack problem is KNAP(1, n, M).

## DYNAMIC PROGRAMMING (Contd.)

Let y1, y2, ..., yn be an optimal sequence of 0/1 values for x1, x2, ..., xn respectively. If y1 = 0, then y2, y3, ..., yn must constitute an optimal sequence for KNAP(2, n, M); if it does not, then y1, y2, ..., yn is not an optimal sequence for KNAP(1, n, M). If y1 = 1, then y2, ..., yn must be an optimal sequence for KNAP(2, n, M − w1); if it is not, there is another 0/1 sequence z2, ..., zn such that Σ p_i z_i has a greater value.
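The "enumerate all decision sequences" baseline mentioned above can be sketched in Python (illustrative code, not from the slides; the function name and sample data are assumptions). It tries every 0/1 sequence, which is exactly the exponential work the principle of optimality lets us avoid:

```python
from itertools import product

def knap_enumerate(p, w, M):
    """Solve KNAP(1, n, M) by checking all 2^n decision sequences
    (hypothetical helper; p and w are the profit and weight lists)."""
    n = len(p)
    best_value, best_x = 0, [0] * n
    for x in product((0, 1), repeat=n):                  # every 0/1 sequence
        if sum(wi * xi for wi, xi in zip(w, x)) <= M:    # feasibility check
            value = sum(pi * xi for pi, xi in zip(p, x))
            if value > best_value:
                best_value, best_x = value, list(x)
    return best_value, best_x

# e.g. knap_enumerate([1, 2, 5], [2, 3, 4], 6) returns (6, [1, 0, 1])
```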
Thus the principle of optimality holds.

## DYNAMIC PROGRAMMING (Contd.)

Let gj(y) be the value of an optimal solution to KNAP(j+1, n, y). Then g0(M) is the value of an optimal solution to KNAP(1, n, M). From the principle of optimality,

    g0(M) = max{ g1(M), g1(M − w1) + p1 }

g0(M) is the maximum profit, i.e. the value of the optimal solution.

## DYNAMIC PROGRAMMING (Contd.)

The principle of optimality can be applied to intermediate states and decisions just as it has been applied to initial states and decisions. Let y1, y2, ..., yn be an optimal solution to KNAP(1, n, M). Then for each j, 1 ≤ j ≤ n, the sequences y1, ..., yj and yj+1, ..., yn must be optimal solutions to the problems KNAP(1, j, Σ_{1≤i≤j} w_i y_i) and KNAP(j+1, n, M − Σ_{1≤i≤j} w_i y_i) respectively.

## DYNAMIC PROGRAMMING (Contd.)

Then

    gi(y) = max{ gi+1(y), gi+1(y − wi+1) + pi+1 }        ........(1)

where the first term is the xi+1 = 0 case and the second term is the xi+1 = 1 case. Equation (1) is known as a recurrence relation. Dynamic programming algorithms solve such relations to obtain a solution to the given problem. To solve the 0/1 knapsack problem, we use the knowledge that gn(y) = 0 for all y, because gn(y) is the optimal solution (profit) to the problem KNAP(n+1, n, y), which is obviously zero for any y.

## DYNAMIC PROGRAMMING (Contd.)

Substituting i = n − 1 and using gn(y) = 0 in relation (1), we obtain gn−1(y). Then using gn−1(y) we obtain gn−2(y), and so on, until we get g0(M) (with i = 0), which is the solution of the knapsack problem. There are two approaches to solving recurrence relation (1): (1) the forward approach and (2) the backward approach.

## DYNAMIC PROGRAMMING (Contd.)

In the forward approach, decision xi is made in terms of optimal decision sequences for xi+1, ..., xn (i.e. we look ahead). In the backward approach, decision xi is made in terms of optimal decision sequences for x1, ..., xi−1 (i.e. we look back).
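Recurrence (1) with the boundary condition gn(y) = 0 translates directly into a memoized function; a minimal sketch (the function name and 0-based indexing, where p[i] plays the role of p_{i+1}, are assumptions):

```python
from functools import lru_cache

def g0(p, w, M):
    """Evaluate recurrence (1) top-down:
    g_i(y) = max(g_{i+1}(y), g_{i+1}(y - w_{i+1}) + p_{i+1}),
    with g_n(y) = 0 for all y; returns g_0(M)."""
    n = len(p)

    @lru_cache(maxsize=None)
    def g(i, y):
        if y < 0:
            return float('-inf')   # infeasible: weight budget exceeded
        if i == n:
            return 0               # g_n(y) = 0: KNAP(n+1, n, y) has no objects
        return max(g(i + 1, y),                  # x_{i+1} = 0 case
                   g(i + 1, y - w[i]) + p[i])    # x_{i+1} = 1 case

    return g(0, M)

# e.g. g0([1, 2, 5], [2, 3, 4], 6) returns 6
```

Memoization caches each gi(y), so every subproblem is solved once, mirroring how the hand computation reuses gn−1, gn−2, ... on the way down to g0(M).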
## DYNAMIC PROGRAMMING (Contd.)

For the 0/1 knapsack problem,

    gi(y) = max{ gi+1(y), gi+1(y − wi+1) + pi+1 }        ........(1)

is the forward approach, as gn−1(y) is obtained using gn(y).

    fj(y) = max{ fj−1(y), fj−1(y − wj) + pj }            ........(2)

is the backward approach; fj(y) is the value of an optimal solution to KNAP(1, j, y). (2) may be solved by beginning with f0(y) = 0 for all y ≥ 0 and f0(y) = −∞ for y < 0.

## FEATURES OF DYNAMIC PROGRAMMING SOLUTIONS

- It is easier to obtain the recurrence relation using the backward approach.
- Dynamic programming algorithms often have polynomial complexity.
- Optimal solutions to subproblems are retained so as to avoid recomputing their values.

## OPTIMAL BINARY SEARCH TREES

Definition: binary search tree (BST). A binary search tree T is a binary tree; either it is empty, or each node contains an identifier and
(i) all identifiers in the left subtree of T are less than the identifier in the root node of T;
(ii) all identifiers in the right subtree of T are greater than the identifier in the root node of T;
(iii) the left and right subtrees of T are also BSTs.

## ALGORITHM TO SEARCH FOR AN IDENTIFIER IN THE TREE T

    Procedure SEARCH(T, X, i)
    // Search T for X; each node has fields LCHILD, IDENT, RCHILD //
    // Return address i pointing to the identifier X              //
    // Initially T points to the tree; on exit IDENT(i) = X or i = 0 //
      i ← T
      while i ≠ 0 do
        case
          : X < IDENT(i) : i ← LCHILD(i)
          : X = IDENT(i) : return i
          : X > IDENT(i) : i ← RCHILD(i)
        endcase
      repeat
    end SEARCH

## Optimal Binary Search Trees - Example

(Figure: a BST holding the identifiers for, if, loop, repeat, while.) If each identifier is searched for with equal probability, the average number of comparisons for the above tree is (1 + 2 + 2 + 3 + 4)/5 = 12/5.
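The SEARCH procedure above translates almost line for line into Python. A sketch, assuming a dict-per-node representation (with None playing the role of i = 0) and the example tree's shape inferred from the comparison counts 1 + 2 + 2 + 3 + 4:

```python
def search(tree, x):
    """Iterative BST search mirroring procedure SEARCH above.
    A node is a dict with keys 'lchild', 'ident', 'rchild'."""
    i = tree
    while i is not None:
        if x < i['ident']:
            i = i['lchild']        # X < IDENT(i)
        elif x == i['ident']:
            return i               # X = IDENT(i): found
        else:
            i = i['rchild']        # X > IDENT(i)
    return None                    # unsuccessful search

def node(ident, l=None, r=None):
    """Convenience constructor (hypothetical helper)."""
    return {'ident': ident, 'lchild': l, 'rchild': r}

# The example tree: 'if' at level 1; 'for' and 'while' at level 2;
# 'repeat' at level 3; 'loop' at level 4.
tree = node('if', node('for'),
            node('while', node('repeat', node('loop'))))
```

For instance, search(tree, 'loop') walks if → while → repeat → loop, taking the four comparisons counted in the 12/5 average.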
## OPTIMAL BINARY SEARCH TREES (Contd.)

Let us assume the given set of identifiers is {a1, a2, ..., an} with a1 < a2 < ... < an. Let P(i) be the probability with which we search for ai. Let Q(i) be the probability that the identifier x being searched for satisfies ai < x < ai+1, 0 ≤ i ≤ n, where a0 = −∞ and an+1 = +∞.

## OPTIMAL BINARY SEARCH TREES (Contd.)

Then Σ_{0≤i≤n} Q(i) is the probability of an unsuccessful search, and

    Σ P(i) (1 ≤ i ≤ n)  +  Σ Q(i) (0 ≤ i ≤ n)  =  1.

Given this data, let us construct an optimal binary search tree for (a1, ..., an). In place of each empty subtree we add an external node, drawn as a square; internal nodes are drawn as circles.

## OPTIMAL BINARY SEARCH TREES (Contd.)

(Figure: the BST over the identifiers for, if, loop, repeat, while, with external nodes shown as squares.)

## Construction of optimal binary search trees

A BST with n identifiers has n internal nodes and n + 1 external nodes. A successful search terminates at an internal node; an unsuccessful search terminates at an external node. If a successful search terminates at an internal node at level L, then L iterations of the loop in the search algorithm are needed. Hence the expected cost contribution of the internal node for ai is P(i) * level(ai).

## OPTIMAL BINARY SEARCH TREES (Contd.)

An unsuccessful search terminates at an external node, i.e. with i = 0. The identifiers not in the binary search tree may be partitioned into n + 1 equivalence classes Ei, 0 ≤ i ≤ n:
- E0 contains all x such that x < a1;
- Ei contains all x such that ai < x < ai+1, 1 ≤ i < n;
- En contains all x such that x > an.

## OPTIMAL BINARY SEARCH TREES (Contd.)

For all identifiers in the same class Ei, the search terminates at the same external node. If the failure node for Ei is at level L, then only L − 1 iterations of the while loop are made. Hence the cost contribution of the failure node for Ei is Q(i) * (level(Ei) − 1).
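Combining the two contributions, the expected cost of any given tree can be evaluated mechanically. A sketch, with a hypothetical tuple encoding: ('E', i) stands for the external node of class Ei, and (left, i, right) for the internal node holding ai:

```python
def expected_cost(node, P, Q, level=1):
    """Sum P(i)*level(a_i) over internal nodes plus
    Q(i)*(level(E_i) - 1) over external (failure) nodes."""
    if node[0] == 'E':                     # external node of class E_i
        return Q[node[1]] * (level - 1)
    left, i, right = node                  # internal node holding a_i
    return (P[i] * level
            + expected_cost(left, P, Q, level + 1)
            + expected_cost(right, P, Q, level + 1))

# With equal probabilities 1/7, the balanced tree on three identifiers
# (a_2 at the root) costs 13/7, the value computed for tree (b) below.
P = {1: 1/7, 2: 1/7, 3: 1/7}
Q = {i: 1/7 for i in range(4)}
tree_b = ((('E', 0), 1, ('E', 1)), 2, (('E', 2), 3, ('E', 3)))
```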
## OPTIMAL BINARY SEARCH TREES (Contd.)

Thus the expected cost of a binary search tree is:

    Σ P(i) * level(ai)  (1 ≤ i ≤ n)  +  Σ Q(i) * (level(Ei) − 1)  (0 ≤ i ≤ n)   ........(2)

An optimal binary search tree (OBST) for {a1, ..., an} is a BST for which (2) is minimum.

Example: let {a1, a2, a3} = {do, if, stop}.

## OPTIMAL BINARY SEARCH TREES (Contd.)

(Figure: the five possible BSTs (a)-(e) for {do, if, stop}, with external nodes Q(0)-Q(3) shown as squares.)

## OPTIMAL BINARY SEARCH TREES (Contd.)

With equal probabilities P(i) = Q(i) = 1/7, let us find an OBST among these.

    Cost(tree a) = Σ P(i)*level(ai) + Σ Q(i)*(level(Ei) − 1)
                 = 1/7 [1 + 2 + 3 + (2−1) + (3−1) + (4−1) + (4−1)]
                 = 1/7 [1 + 2 + 3 + 1 + 2 + 3 + 3] = 15/7
    Cost(tree b) = 1/7 [1 + 2 + 2 + 2 + 2 + 2 + 2] = 13/7
    Cost(tree c) = Cost(tree d) = Cost(tree e) = 15/7

Therefore tree b is optimal.

## OPTIMAL BINARY SEARCH TREES (Contd.)

Now let P(1) = 0.5, P(2) = 0.1, P(3) = 0.05, Q(0) = 0.15, Q(1) = 0.1, Q(2) = 0.05 and Q(3) = 0.05, and find the OBST.

    Cost(tree a) = 0.5×3 + 0.1×2 + 0.05×1 + 0.15×3 + 0.1×3 + 0.05×2 + 0.05×1 = 2.65
    Cost(tree b) = 1.9,  Cost(tree c) = 1.5,  Cost(tree d) = 2.05,  Cost(tree e) = 1.6

Hence tree c is optimal.

## OPTIMAL BINARY SEARCH TREES (Contd.)

To obtain an OBST using dynamic programming, we need to make a sequence of decisions regarding the construction of the tree. The first decision is which ai to place at the root. Let us choose ak as the root. Then the internal nodes for a1, ..., ak−1 and the external nodes for the classes E0, E1, ..., Ek−1 will lie in the left subtree L of the root. The remaining nodes will be in the right subtree R.
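Before developing the recurrence, the weighted example costs above can be checked by plain arithmetic. The node levels used here are read off the figures (an assumption, since only the comparison sums survive in this transcript): tree (a) is the left chain with stop at the root, tree (c) the right chain with do at the root.

```python
# Probabilities from the weighted OBST example.
P = {1: 0.5, 2: 0.1, 3: 0.05}
Q = {0: 0.15, 1: 0.1, 2: 0.05, 3: 0.05}

# Tree (a): levels a1=3, a2=2, a3=1; E0=4, E1=4, E2=3, E3=2.
cost_a = (P[1]*3 + P[2]*2 + P[3]*1
          + Q[0]*(4-1) + Q[1]*(4-1) + Q[2]*(3-1) + Q[3]*(2-1))

# Tree (c): levels a1=1, a2=2, a3=3; E0=2, E1=3, E2=4, E3=4.
cost_c = (P[1]*1 + P[2]*2 + P[3]*3
          + Q[0]*(2-1) + Q[1]*(3-1) + Q[2]*(4-1) + Q[3]*(4-1))

# cost_a evaluates to 2.65 and cost_c to 1.5, matching the slide.
```

Note that tree (c) wins despite being a degenerate chain, because the heavily weighted a1 = do sits at level 1.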
## OPTIMAL BINARY SEARCH TREES (Contd.)

Define

    Cost(L) = Σ P(i)*level(ai)  (1 ≤ i < k)  +  Σ Q(i)*(level(Ei) − 1)  (0 ≤ i < k)
    Cost(R) = Σ P(i)*level(ai)  (k < i ≤ n)  +  Σ Q(i)*(level(Ei) − 1)  (k ≤ i ≤ n)

Let Tij be the tree with internal nodes ai+1, ..., aj and external nodes corresponding to the classes Ei, Ei+1, ..., Ej, and let W(i, j) be the weight of Tij:

    W(i, j) = P(i+1) + ... + P(j) + Q(i) + Q(i+1) + ... + Q(j)
            = Q(i) + Σ [Q(l) + P(l)]  (i+1 ≤ l ≤ j)

The expected cost of the search tree T in figure (a), with root ak, is

    P(k) + Cost(L) + Cost(R) + W(0, k−1) + W(k, n)

W(0, k−1) is the sum of the probabilities corresponding to the nodes and equivalence classes to the left of ak, and W(k, n) is the sum of the probabilities corresponding to those on the right of ak.

(Figure (a): an OBST with root ak, left subtree L and right subtree R.)
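Applying this decomposition recursively over every subrange gives the usual OBST recurrence c(i, j) = W(i, j) + min over i < k ≤ j of { c(i, k−1) + c(k, j) }, with c(i, i) = 0. That recurrence is not stated in the excerpt above, so treat this as a sketch of where the decomposition leads rather than the presentation's own algorithm:

```python
def obst_cost(P, Q):
    """Cost of an optimal BST via W(i, j) and the assumed recurrence
    c(i, j) = W(i, j) + min_{i < k <= j} (c(i, k-1) + c(k, j)).
    P[1..n] are success probabilities (P[0] is a dummy entry),
    Q[0..n] are failure probabilities."""
    n = len(P) - 1

    # W(i, j) = Q(i) + sum over l = i+1..j of (Q(l) + P(l))
    W = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = Q[i]
        for j in range(i + 1, n + 1):
            W[i][j] = W[i][j - 1] + P[j] + Q[j]

    # c(i, i) = 0; fill by increasing subrange length j - i.
    c = [[0.0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            c[i][j] = W[i][j] + min(c[i][k - 1] + c[k][j]
                                    for k in range(i + 1, j + 1))
    return c[0][n]

# With the weighted example, obst_cost([0, .5, .1, .05], [.15, .1, .05, .05])
# returns 1.5, the cost of the optimal tree (c) found above.
```

Each c(i, j) tries every candidate root ak, charging W(i, j) for the one extra level that placing both subtrees under ak adds, exactly as in the expected-cost expression above.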
