The complexity of an algorithm is a function f(n) which measures the time and space used by an algorithm in terms of the input size n.
- In computer science, the complexity of an algorithm is a way to classify how efficient an algorithm is, compared to alternative ones.
- The focus is on how execution time increases with the data set to be processed.
- The computational complexity and efficient implementation of the algorithm are important in computing, and this depends on suitable data structures.
- The complexity of any algorithm tells you how many resources its execution will use for some input size n.
What affects the run time of an algorithm?
- (a) computer used, the hardware platform
- (b) representation of abstract data types (ADT’s)
- (c) efficiency of compiler
- (d) competence of implementer (programming skills)
- (e) complexity of underlying algorithm
- (f) size of the input
We will show that, of those above, (e) and (f) are generally the most significant.
What do we measure?
In analysing an algorithm, rather than a piece of code, we will try to predict the number of times “the principal activity” of that algorithm is performed. For example, if we are analysing a sorting algorithm we might count the number of comparisons performed, and if it is an algorithm to find some optimal solution, the number of times it evaluates a solution. If it is a graph colouring algorithm we might count the number of times we check that a coloured node is compatible with its neighbours.
A function of input size. We will attempt to characterize the run time by the size of the input. We will try to estimate the WORST CASE, and sometimes the BEST CASE, and very rarely the AVERAGE CASE.
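As a sketch of this idea (selection sort and the counter are my choice of illustration, not from the text), we can instrument a simple sort to report the number of comparisons it performs, rather than its wall-clock time:

```python
# Counting the "principal activity" (here, comparisons) of a simple
# sorting algorithm instead of timing it on a particular machine.

def selection_sort_count(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0
    n = len(a)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1              # the principal activity
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return comparisons

data = [5, 2, 4, 1, 3]
print(selection_sort_count(data))   # n(n-1)/2 = 10 comparisons for n = 5
```

Note that the comparison count depends only on n, not on the machine, compiler, or programmer, which is exactly why it is a useful measure.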
What is Worst Case?
The maximum run time, over all inputs of size n, ignoring effects (a) through (d) above. That is, we only consider the “number of times the principal activity of that algorithm is performed”.
What is Best Case?
In this case we look at specific instances of input of size n. For example, we might get best behaviour from a sorting algorithm if the input to it is already sorted.
What is Average Case?
Arguably, average case is the most useful measure. It might be the case that worst case behaviour is pathological and extremely rare, and that we are more concerned about how the algorithm runs in the general case.
Unfortunately this is typically a very difficult thing to measure. Firstly, we must in some way be able to define what we mean by the “average input of size n”. We would need to know a great deal about the distribution of cases throughout all data sets of size n. Alternatively, we might make a possibly dangerous assumption that all data sets of size n are equally likely.
Time for an algorithm to run t(n)
Time complexity can be seen as a measure of how fast or slow an algorithm will perform for a given input size. Time complexity is always given with respect to some input size (say n). Here is an example to make this clear:
11 people are standing in a row in ascending order of height. Now you are asked to find the median of their heights. You have two methods to do it:
- Starting from the first (smallest) person, you record the height of every person in the line and then calculate the median of the data. Notice that in this method we asked for (computed) the heights of all 11 people. That means we did work proportional to the total number of people in the line. What if there were n people? Then our computation would take n steps. So we can say that the time complexity is directly proportional to n, i.e. O(n).
- You act smartly this time. You know that everyone is standing in ascending order, so it is not necessary to go to every person. You go to the person standing in the 6th position and ask their height. They tell you, and you declare the answer. In this approach it would not matter whether there were 11 or 101 people; you would just go to the middle person and get the answer. So here you had to do the work only once, and we can say that the time complexity is independent of the input size, written O(1).
So which method would you prefer? Clearly the second one.
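The two methods above can be sketched in code, assuming the heights are stored in a list already sorted in ascending order (the function names and sample heights are illustrative):

```python
# Two ways to find the median of an ascending sorted list of heights.

def median_scan(heights):
    """O(n): visit every element, then pick the middle of what we saw."""
    seen = []
    for h in heights:              # n steps of work, one per person
        seen.append(h)
    return seen[len(seen) // 2]

def median_direct(heights):
    """O(1): jump straight to the middle position."""
    return heights[len(heights) // 2]

heights = [150, 152, 155, 158, 160, 162, 165, 168, 170, 172, 175]
print(median_scan(heights), median_direct(heights))   # 162 162
```

Both return the same answer; only the amount of work done differs.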
Space complexity can be seen as the amount of extra memory you require to execute your algorithm. Like time complexity, it is also given with respect to some input size (n). Another example:
10 bags are to be transported from one site to another. Again you have two options.
- You choose a small vehicle that has only one seat, and you cannot keep more than one bag on the seat. So you take a bag and transport it. You make 10 trips (time complexity) but use only one seat (space complexity). Similarly, if you had n bags and transported them the same way, you would still need only one seat. So you can say that the space complexity of this solution is constant, i.e. independent of the input size (the number of bags), or O(1).
- Now you are fed up with making so many trips between the two sites. You choose a bus with exactly as many seats as there are bags, and keep one bag per seat. Here you make only one trip (time complexity) but use 10 seats (space complexity). In this case the number of seats (extra memory) is directly proportional to the input size (the number of bags), so the space complexity is O(n).
You can observe the time-space trade off in this example.
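A rough code analogue of this contrast (the task here, summing squares, is purely illustrative): the first version keeps a single running value, O(1) extra space, while the second stores one intermediate value per input element, O(n) extra space:

```python
# O(1) extra space: one "seat", a single running total, reused each step.
def sum_of_squares_constant_space(nums):
    total = 0
    for x in nums:
        total += x * x
    return total

# O(n) extra space: one "seat" per bag, a stored square per element.
def sum_of_squares_linear_space(nums):
    squares = [x * x for x in nums]   # extra list grows with the input
    return sum(squares)

nums = [1, 2, 3, 4]
print(sum_of_squares_constant_space(nums))   # 30
print(sum_of_squares_linear_space(nums))     # 30
```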
Complexity of algorithm O(1) : Constant
The complexity O(1) states that no matter what the input size, the effort to get the output is always constant.
Example: finding the median of a sorted list always means taking the middle element, i.e. the one at position n/2.
Complexity of algorithm O(n) : Linear
The complexity O(n) states that the effort to get the output is directly proportional to the input size.
Complexity of algorithm O(log n) : Logarithmic
The log here is the logarithm with base 2, not base 10 as in calculus; in computer science, log is by default the logarithm with base 2.
log n can be represented as asking 2^? = n; that is, log n = x means 2^x = n.
Example: log 8 = 3, because 2^3 = 8.
In short, log n here means that we divide our input n in half at every step while searching for the result. Binary Search has a complexity of O(log n).
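A standard iterative binary search sketch; the halving of the search range at each step is exactly where the O(log n) comes from:

```python
# Binary search over a sorted list: each iteration discards half of
# the remaining range, so at most about log2(n) iterations are needed.

def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
```

With 6 elements the loop runs at most 3 times, and doubling the list length adds only one more iteration.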
Complexity of algorithm O(n^2) : Quadratic
Bubble sort has a worst-case complexity of O(n^2). Bubble sort works as follows: the highest value in the array is bubbled to the end of the array, and on the next iteration the last element is not compared, since it is already the highest element in the array.
Bubble sort requires two loops (a loop inside a loop, i.e. a nested loop) and a swap. Since the loops are nested, the comparisons are performed on the order of n * n times, which gives O(n^2); the swap has time complexity O(1), since it is constant, i.e. independent of the input size n. Hence the total complexity is O(n^2).
Insertion sort has the same worst-case time complexity.
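The nested-loop structure described above can be sketched as a minimal bubble sort (variable names are illustrative):

```python
# Bubble sort: nested loops give O(n^2) comparisons in the worst case.
# Each pass bubbles the largest remaining value to the end, so the
# inner loop can skip the already-sorted tail on later passes.

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):                    # outer loop: n-1 passes
        for j in range(n - 1 - i):            # inner loop shrinks each pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # O(1) swap
    return a

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```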