Monday 3 November 2014

Big-Oh and Big-Theta

Optimising the efficiency of algorithms is an important part of computer science, because how smoothly a program runs and how much processing power it needs depend on its run time. Run-time analysis is a useful way of describing the resource requirements of a particular algorithm. In class, we've been analysing the efficiency of functions on lists by determining their worst-case run time for input of size n.

Big-Oh notation expresses the upper bound of a function's run time, by overestimating the number of steps needed to produce the output. For example, the worst-case scenario when sequentially searching for an item in a list of n items is that the desired item is not in the list, but you only know this after looking at every item. This requires n steps for comparing the input to each item in the list, plus 1 more step for returning the output (False). Therefore, the run time of this search is linear, or O(n).
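Here's a quick sketch of that sequential search (my own version, not the exact code from class), with a step counter so the worst case is visible:

```python
def linear_search(lst, target):
    """Sequential search that also reports how many comparisons it made."""
    steps = 0
    for item in lst:
        steps += 1          # one comparison per item
        if item == target:
            return True, steps
    return False, steps     # worst case: n comparisons, then return False

# Worst case: target is absent, so every item is examined.
found, steps = linear_search([2, 4, 6, 8], 5)
print(found, steps)  # False 4 -- all n = 4 items were checked
```

Since the loop can run at most n times, the step count grows linearly with the list's length, matching the O(n) bound above.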

Big-Omega notation expresses the lower bound of a function's run time, by underestimating the number of steps needed to produce the output. Big-Theta notation combines the two: a function is Big-Theta of g(n) when its run time is bounded both above and below by constant multiples of g(n), so Theta gives a tight bound rather than just an over- or underestimate.

When the upper and lower bounds have the same type of run time (constant, linear, quadratic, etc.), the function is Big-Theta of that rate. The two bounds differ only in their constant multipliers: because Big-Oh is the asymptotic upper bound, its constant c is at least as large as the constant for Big-Omega, the asymptotic lower bound, and together the two constants sandwich the function's actual run time.
