Big-Oh representation

Published on March 9, 2014

Author: lakshmitharun

Source: authorstream.com

Content

Enough Mathematical Appetizers!

Fall 2002, CMSC 203 - Discrete Structures

Let us look at something more interesting: algorithms.

Algorithms

What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. This is a rather vague definition. You will get to know a more precise and mathematically useful definition when you attend CS420, but this one is good enough for now.

Properties of algorithms:
- Input from a specified set,
- Output from a specified set (the solution),
- Definiteness of every step in the computation,
- Correctness of the output for every possible input,
- Finiteness of the number of calculation steps,
- Effectiveness of each calculation step, and
- Generality for a class of problems.

Algorithm Examples

We will use pseudocode to specify algorithms; it is slightly reminiscent of Basic and Pascal. Example: an algorithm that finds the maximum element in a finite sequence.

procedure max(a_1, a_2, ..., a_n: integers)
  max := a_1
  for i := 2 to n
    if max < a_i then max := a_i
{max is the largest element}

Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.

procedure linear_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1
  while (i ≤ n and x ≠ a_i)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}

If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search.
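As a cross-check, the two pseudocode procedures above can be sketched in Python (a hypothetical translation, not part of the original slides). Like the pseudocode, linear_search returns a 1-based location, or 0 when x is absent:

```python
def find_max(a):
    """Return the largest element of a non-empty sequence a."""
    maximum = a[0]              # max := a_1
    for i in range(1, len(a)):  # for i := 2 to n (0-based here)
        if maximum < a[i]:
            maximum = a[i]
    return maximum

def linear_search(x, a):
    """Return the 1-based position of x in a, or 0 if x is not found."""
    i = 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    return i if i <= len(a) else 0
```

For example, linear_search(4, [3, 1, 4, 1, 5]) yields 3, and searching for an absent value yields 0, matching the {location ...} comment in the pseudocode.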
The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.

[Slides 7-11 illustrate a binary search for the letter 'j' in the sequence a c d f g h j l m o p r s u v x z: at each step the search interval is halved around its center element, until 'j' is found.]

procedure binary_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1   {i is the left endpoint of the search interval}
  j := n   {j is the right endpoint of the search interval}
  while (i < j)
  begin
    m := ⌊(i + j)/2⌋
    if x > a_m then i := m + 1
    else j := m
  end
  if x = a_i then location := i
  else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}

Complexity

In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.

For example, let us assume two algorithms A and B that solve the same class of problems.
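As a brief aside, the binary_search pseudocode above can also be sketched in Python (an illustrative translation, not from the slides), keeping the 1-based location convention:

```python
def binary_search(x, a):
    """Return the 1-based position of x in the sorted sequence a, or 0 if absent."""
    i, j = 1, len(a)          # left and right endpoints of the search interval
    while i < j:
        m = (i + j) // 2      # center element: floor of the midpoint
        if x > a[m - 1]:
            i = m + 1         # discard the left half, including a_m
        else:
            j = m             # keep the left half, including a_m
    return i if a and x == a[i - 1] else 0
```

Running it on the letter sequence from the slides, binary_search('j', list("acdfghjlmoprsuvxz")) returns 7, the position of 'j'.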
The time complexity of A is 5,000n, the one for B is 1.1^n, for an input with n elements. For n = 10, A requires 50,000 steps, but B only 3, so B seems to be superior to A. For n = 1,000, however, A requires 5,000,000 steps, while B requires 2.5×10^41 steps.

This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.

Comparison: time complexity of algorithms A and B

Input size n    10        100       1,000        1,000,000
5,000n          50,000    500,000   5,000,000    5×10^9
1.1^n           3         13,781    2.5×10^41    4.8×10^41392

The Growth of Functions

The growth of functions is usually described using big-O notation.

Definition: Let f and g be functions from the integers or the real numbers to the real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k.

When we analyze the growth of complexity functions, f(x) and g(x) are always positive. Therefore, we can simplify the big-O requirement to f(x) ≤ Cg(x) whenever x > k. If we want to show that f(x) is O(g(x)), we only need to find one pair (C, k) (which is never unique).
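The first three columns of the comparison can be reproduced numerically (a small illustrative script, not from the slides; the n = 1,000,000 column is omitted because 1.1**1_000_000 overflows a Python float):

```python
# Compare the growth of 5,000n (algorithm A) and 1.1^n (algorithm B).
for n in (10, 100, 1_000):
    print(f"n = {n:>5}:  5,000n = {5_000 * n:>9,}   1.1^n = {1.1 ** n:.4g}")
```

For n = 100 this already shows 1.1^n ≈ 13,780, and at n = 1,000 it reaches roughly 2.5×10^41, matching the table.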
The idea behind the big-O notation is to establish an upper boundary for the growth of a function f(x) for large x. This boundary is specified by a function g(x) that is usually much simpler than f(x). We accept the constant C in the requirement f(x) ≤ Cg(x) whenever x > k, because C does not grow with x. We are only interested in large x, so it is OK if f(x) > Cg(x) for x ≤ k.

Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).

For x > 1 we have x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2, so x^2 + 2x + 1 ≤ 4x^2. Therefore, for C = 4 and k = 1, f(x) ≤ Cx^2 whenever x > k, and so f(x) is O(x^2).

Question: If f(x) is O(x^2), is it also O(x^3)? Yes: x^3 grows faster than x^2, so x^3 also grows faster than f(x). Therefore, we always try to find the smallest simple function g(x) for which f(x) is O(g(x)).

"Popular" functions g(n), listed from slowest to fastest growth, are:

1, log n, n, n log n, n^2, n^3, 2^n, n!

A problem that can be solved with polynomial worst-case complexity is called tractable. Problems of higher complexity are called intractable. Problems that no algorithm can solve are called unsolvable. You will find out more about this in CS420.

Useful Rules for Big-O

For any polynomial f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_0, where a_0, a_1, ..., a_n are real numbers, f(x) is O(x^n).
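The witness pair (C, k) = (4, 1) from the example above can be checked numerically (an illustrative sketch; a finite spot-check is of course not a proof):

```python
def holds_for_witness(f, g, C, k, xs):
    """Check that f(x) <= C * g(x) for every sample point x > k."""
    return all(f(x) <= C * g(x) for x in xs if x > k)

f = lambda x: x**2 + 2*x + 1
g = lambda x: x**2

print(holds_for_witness(f, g, C=4, k=1, xs=range(2, 10_000)))  # holds
print(holds_for_witness(f, g, C=1, k=1, xs=range(2, 10_000)))  # fails: C = 1 is too small
```

The second call illustrates that the constant C matters: with C = 1 the inequality already fails at x = 2, while (C, k) = (4, 1) is one of infinitely many valid witness pairs.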
If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 + f_2)(x) is O(max(g_1(x), g_2(x))).

If f_1(x) is O(g(x)) and f_2(x) is O(g(x)), then (f_1 + f_2)(x) is O(g(x)).

If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then (f_1 f_2)(x) is O(g_1(x) g_2(x)).

Complexity Examples

What does the following algorithm compute?

procedure who_knows(a_1, a_2, ..., a_n: integers)
  m := 0
  for i := 1 to n-1
    for j := i + 1 to n
      if |a_i - a_j| > m then m := |a_i - a_j|
{m is the maximum difference between any two numbers in the input sequence}

Comparisons: (n-1) + (n-2) + (n-3) + ... + 1 = (n-1)n/2 = 0.5n^2 - 0.5n, so the time complexity is O(n^2).

Another algorithm solving the same problem:

procedure max_diff(a_1, a_2, ..., a_n: integers)
  min := a_1
  max := a_1
  for i := 2 to n
    if a_i < min then min := a_i
    else if a_i > max then max := a_i
  m := max - min
{m is the maximum difference between any two numbers in the input sequence}

Comparisons: 2n - 2, so the time complexity is O(n).
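The two procedures above translate to Python roughly as follows (an illustrative rendering, not part of the slides); both compute the same value, but the first compares all pairs while the second makes a single pass:

```python
def who_knows(a):
    """O(n^2): maximum difference between any two numbers in a, by comparing all pairs."""
    m = 0
    for i in range(len(a) - 1):
        for j in range(i + 1, len(a)):
            if abs(a[i] - a[j]) > m:
                m = abs(a[i] - a[j])
    return m

def max_diff(a):
    """O(n): same result, tracking the running minimum and maximum in one pass."""
    lo = hi = a[0]
    for x in a[1:]:
        if x < lo:
            lo = x
        elif x > hi:
            hi = x
    return hi - lo
```

On any input the two agree (the maximum pairwise difference is max - min), which is exactly why the second algorithm gets away with only 2n - 2 comparisons.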
