The raw performance of an algorithm, program, or programmatic operation depends on a number of factors, not least the computer it’s running on. Big O is not concerned with this; Big O describes how the time taken by a program (or the memory or storage space it uses) depends on the amount of data it has been given to work on. In other words, Big O tells us how well a program scales. We say that Big O describes the ‘complexity’ of a program.
This is the third in a series of videos about Big O. These videos show how various Big O time complexities can be derived for some well-known algorithms, including the linear search, stack push and pop operations, the bubble sort, the binary search and the merge sort. This particular video focuses on the bubble sort and quadratic time complexity. It introduces the concept of a dominant term in an expression that describes the number of steps taken by an algorithm, which in turn provides a means of deriving its time complexity. It also introduces the concept of best and worst case scenarios when describing the complexity of a program.
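For readers who want to see where the quadratic term comes from, here is a minimal Python sketch of the bubble sort (an illustration under my own naming, not the code used in the video). In the worst case the nested loops perform (n−1) + (n−2) + … + 1 = n(n−1)/2 comparisons; the dominant term is n²/2, so the time complexity is O(n²). The early-exit flag shows the best case: an already sorted list needs only one pass, giving O(n).

```python
def bubble_sort(items):
    """Sort a list in place, returning the number of comparisons made."""
    n = len(items)
    comparisons = 0
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining value to the end,
        # so the inner loop can stop one element earlier each time.
        for j in range(n - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # No swaps in a pass means the list is sorted: best case O(n)
    return comparisons

# Worst case: a reversed list of n = 10 items needs 10 * 9 / 2 = 45 comparisons.
print(bubble_sort(list(range(10, 0, -1))))  # prints 45
```

Counting the comparisons directly, as above, is exactly how the dominant term is found: write down the expression for the total number of steps, then keep only the term that grows fastest as n increases.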