Introduction
Big-O notation (pronounced "Big Oh") is used in computer science to describe the worst-case performance of an algorithm. It's one of the things you really should learn if you're at all interested in designing efficient algorithms. The formal definition is as follows:
f(n) = O(g(n)) means c*g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c*g(n) for large enough n (i.e., n ≥ n0 for some constant n0).

The Algorithm Design Manual, Steven S. Skiena

This definition is a very formal way of saying that Big-O notation describes the upper bound, or worst case, of a function.
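To make the definition concrete, take f(n) = 3n + 5 and g(n) = n (values of my own choosing, not from the text): with c = 4 and n0 = 5, f(n) ≤ c*g(n) holds for every n ≥ n0, so f(n) = O(n). A small sketch that checks the inequality numerically:

```java
// Numerically checks the Big-O definition for the hypothetical
// f(n) = 3n + 5 against g(n) = n, using c = 4 and n0 = 5.
public class BigODefinitionCheck {
    static long f(long n) { return 3 * n + 5; }
    static long g(long n) { return n; }

    public static void main(String[] args) {
        final long c = 4, n0 = 5;
        boolean holds = true;
        for (long n = n0; n <= 1_000_000; n++) {
            if (f(n) > c * g(n)) {
                holds = false;
                break;
            }
        }
        // 3n + 5 <= 4n is equivalent to n >= 5, so the bound holds.
        System.out.println(holds); // prints "true"
    }
}
```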
Worst-case
Big-O notation has a counterpart, Ω-notation (Big Omega notation), which describes the best case instead of the worst case. While it may seem unintuitive, the best case is actually much less useful when analysing performance issues. In general we care far more about worst-case performance, because we want everything to run as smoothly as possible, not randomly fast and randomly slow. Consider the following function:

```java
public boolean DoesTextContainChar(String text, char character) {
    for (int i = 0; i < text.length(); i++) {
        if (text.charAt(i) == character) {
            return true;
        }
    }
    return false;
}
```

If we run this function when text is very small, or when character is near the beginning of text, then it will run very quickly. But what about when text is extremely long and character
doesn't exist? This is the worst case of the algorithm.

Simplicity
Big-O notation ignores the fine details of an algorithm and focuses only on the dominant term, the one with the largest power. A precise analysis of an algorithm could look something like this:
f(n) = 3n³ + 60n² + 8n + 103
The beauty of Big-O notation is that we can ignore most of this and focus only on the largest term. We can even ignore the constant factor: the same algorithm written in a low-level language might run 3 times faster than when written in a high-level language, but that's a detail that isn't really important when looking at the algorithm itself. So the function above in Big-O notation is
f(n) = O(n³)
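To see concretely why only the largest term matters, the polynomial above can be evaluated term by term; the sketch below does this at n = 1000, a value chosen arbitrarily here:

```java
// Evaluates each term of f(n) = 3n^3 + 60n^2 + 8n + 103 at n = 1000
// to show how thoroughly the cubic term dominates the total.
public class DominantTerm {
    public static void main(String[] args) {
        long n = 1000;
        long cubic = 3 * n * n * n;   // 3,000,000,000
        long quadratic = 60 * n * n;  // 60,000,000
        long linear = 8 * n;          // 8,000
        long constant = 103;
        long total = cubic + quadratic + linear + constant;
        // The cubic term alone accounts for about 98% of the total,
        // and its share only grows as n increases.
        System.out.println((double) cubic / total);
    }
}
```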
O(1)
An O(1) function always runs in the same time regardless of the input size.

```java
public boolean IsCharCaps(char character) {
    int asciiCode = (int) character;
    return asciiCode >= 65 && asciiCode <= 90;
}
```
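As a quick sanity check, the function above can be exercised with a couple of inputs (the test characters are my own; the snippet repeats the function so it runs on its own):

```java
public class IsCharCapsDemo {
    // Same logic as the IsCharCaps function above: an ASCII value
    // between 65 ('A') and 90 ('Z') means an upper-case letter.
    static boolean IsCharCaps(char character) {
        int asciiCode = (int) character;
        return asciiCode >= 65 && asciiCode <= 90;
    }

    public static void main(String[] args) {
        System.out.println(IsCharCaps('G')); // prints "true"
        System.out.println(IsCharCaps('g')); // prints "false"
    }
}
```

For the general (non-ASCII) case, the standard library's Character.isUpperCase is the usual choice; the ASCII version above keeps the O(1) behaviour obvious.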
O(log n)
An O(log n) function grows at the rate of the logarithm (base 2) of the input size. Many search algorithms on sorted arrays, like the binary search algorithm, are O(log n). This is only possible when the algorithm can discard whole sections of the input without ever examining them.

```java
public boolean BinarySearch(int[] sortedArray, int value, int min, int max) {
    if (max < min) {
        return false;
    }
    // Written as min + (max - min) / 2 rather than (min + max) / 2
    // to avoid integer overflow on very large arrays.
    int mid = min + (max - min) / 2;
    if (sortedArray[mid] > value) {
        return BinarySearch(sortedArray, value, min, mid - 1);
    } else if (sortedArray[mid] < value) {
        return BinarySearch(sortedArray, value, mid + 1, max);
    } else {
        return true;
    }
}
```
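One way to convince yourself the search is logarithmic: each call halves the remaining range, so even a million elements take only about 20 comparisons. A small self-contained driver, repeating the function above so it runs on its own (the sample array is my own):

```java
public class BinarySearchDemo {
    // Same algorithm as above; the midpoint is computed as
    // min + (max - min) / 2 to avoid integer overflow.
    static boolean BinarySearch(int[] sortedArray, int value, int min, int max) {
        if (max < min) {
            return false;
        }
        int mid = min + (max - min) / 2;
        if (sortedArray[mid] > value) {
            return BinarySearch(sortedArray, value, min, mid - 1);
        } else if (sortedArray[mid] < value) {
            return BinarySearch(sortedArray, value, mid + 1, max);
        } else {
            return true;
        }
    }

    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 7, 11, 13, 17};
        System.out.println(BinarySearch(sorted, 11, 0, sorted.length - 1)); // prints "true"
        System.out.println(BinarySearch(sorted, 4, 0, sorted.length - 1));  // prints "false"
    }
}
```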
O(n)
An O(n) function grows at the same rate as the input size.

```java
public boolean DoesTextContainChar(String text, char character) {
    for (int i = 0; i < text.length(); i++) {
        if (text.charAt(i) == character) {
            return true;
        }
    }
    return false;
}
```
O(n²)
An O(n²) function grows at the rate of the input size to the power of 2.

```java
public static int[] InsertionSort(int[] array) {
    for (int i = 1; i < array.length; i++) {
        int item = array[i];
        int indexHole = i;
        while (indexHole > 0 && array[indexHole - 1] > item) {
            // Shift the larger element one slot to the right.
            array[indexHole] = array[indexHole - 1];
            indexHole--;
        }
        array[indexHole] = item;
    }
    return array;
}
```

More information on insertion sort can be found here.
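A reverse-sorted input triggers the worst case, since every element must be shifted past all of the elements before it, giving roughly n²/2 comparisons. A short self-contained driver (the input values are my own; the snippet repeats the sort so it runs on its own):

```java
import java.util.Arrays;

public class InsertionSortDemo {
    // Same insertion sort as above, with the shift written as two
    // explicit statements for clarity.
    static int[] InsertionSort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            int item = array[i];
            int indexHole = i;
            while (indexHole > 0 && array[indexHole - 1] > item) {
                array[indexHole] = array[indexHole - 1];
                indexHole--;
            }
            array[indexHole] = item;
        }
        return array;
    }

    public static void main(String[] args) {
        int[] worstCase = {5, 4, 3, 2, 1}; // reverse-sorted: maximum shifts
        System.out.println(Arrays.toString(InsertionSort(worstCase)));
        // prints "[1, 2, 3, 4, 5]"
    }
}
```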