TIME! It is an autonomous force that manifests itself everywhere in the world. No one has ever conquered it; the big shots of the world have merely learned to use it in ways that make them successful in their fields. Today's topic is time complexity, a term that seems hell-bent on making a computer science engineer's life miserable. In computer science the concept of time complexity is crucial: a programmer must know which of the possible ways of solving a problem is the most efficient.

TIME COMPLEXITY

Time complexity is a major topic in the study of algorithms. It tells us how the running time of a solution grows with the size of its input, which is what lets us compare candidate solutions and pick the best one out there. (Memory usage is measured separately, as space complexity.) It takes a lot of effort to find the algorithm for a problem with the least time complexity. Put simply, time complexity is how much time an algorithm takes to complete; once we know it, our job is to cut the extra time the algorithm takes and make it as close to optimal as we can.

UNDERSTANDING TIME COMPLEXITY WITH REAL-WORLD EXAMPLE

Let's say you are studying at a university and living in the hostel. One fine day you lose your room key, and you have to find who has it, no matter what. Suppose there are 200 students in the hostel; clearly you are outnumbered. The straightforward approach is to go and ask all 200 students one by one, which takes O(n) time. If you were more prudent, you could line the students up in some order (say, by room number) and repeatedly split the group in half, each time discarding the half that cannot have the key; that takes only O(log n) steps. I know these terms look confusing at first. Don't worry, I will walk you through the basic terms of time complexity.

ORDER OF GROWTH

Let's start with the most basic and foremost topic of time complexity, i.e. order of growth. It tells us how the execution time depends on the length of the input, and it lets us estimate the running time of a piece of code with ease. We ignore the small terms, as they are quite insignificant compared to the dominant term once the input is large. Order of growth is the core of time-complexity analysis, since it captures exactly how a particular algorithm scales with the size of its input.

TIME COMPLEXITY vs INPUT SIZE TABLE (FOR REFERENCE)

https://images.app.goo.gl/ynMY64X68qpW1wYd7

NOTATIONS AND LIMITING BEHAVIOR OF FUNCTIONS

https://www.hackerearth.com/practice/basic-programming/complexity-analysis/time-and-space-complexity/tutorial/

O-notation:

To denote asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced “big-oh of g of n”) the set of functions:
O(g(n))= { f(n) : there exist positive constants c and n0 such that 0≤f(n)≤c∗g(n) for all n≥n0 }

Ω-notation:

To denote asymptotic lower bound, we use Ω-notation. For a given function g(n), we denote by Ω(g(n)) (pronounced “big-omega of g of n”) the set of functions:
Ω(g(n))= { f(n) : there exist positive constants c and n0 such that 0≤c∗g(n)≤f(n) for all n≥n0 }

Θ-notation:

To denote asymptotic tight bound, we use Θ-notation. For a given function g(n), we denote by Θ(g(n)) (pronounced “big-theta of g of n”) the set of functions:
Θ(g(n))= { f(n) : there exist positive constants c1, c2 and n0 such that 0≤c1∗g(n)≤f(n)≤c2∗g(n) for all n≥n0 }

=> While calculating time complexity we usually quote the big-O notation, since it gives us the upper bound of the function, which corresponds to the worst-case scenario.
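As a quick worked example of these definitions (my own illustration, not from the original text), take f(n) = 3n + 2 and g(n) = n:

3n + 2 ≤ 5n for all n ≥ 1, so with c = 5 and n0 = 1 we have f(n) = O(n).
3n + 2 ≥ 3n for all n ≥ 1, so with c = 3 and n0 = 1 we have f(n) = Ω(n).
Since both bounds hold, with c1 = 3, c2 = 5 and n0 = 1 we have f(n) = Θ(n).

Notice how the constant 2 and the coefficient 3 disappear into c and n0; this is the formal reason we ignore small terms in the order of growth.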

CALCULATING TIME COMPLEXITY

Calculating time complexity is not everyone's cup of tea; it requires a fair amount of brainstorming, and the basics of coding must be crystal clear to the individual. Let's take some examples and work out their time complexity.

1. statement;

In this case, the time complexity is constant, O(1), because the work does not depend on N, where N is the size of the input. The statement executes once, no matter how large the input is.

2. for(int i = 0; i < N; i++)
{
    statement;
}

In this case, the time complexity is linear, O(N), because the statement executes once per iteration, N times in total.

3. for(int i = 0; i < N; i++)
{
    for(int j = 0; j < N; j++)
    {
        statement;
    }
}

In this case, the time complexity is O(N*N), i.e. O(N²): the loops are nested, so the inner statement runs N times for every one of the N iterations of the outer loop, giving N*N executions in total.

4. while(low <= high)
{
mid = (low + high) / 2;
if (target < list[mid])
high = mid - 1;
else if (target > list[mid])
low = mid + 1;
else break;
}

This case is a bit confusing at first; the key observation is that every pass through the loop discards half of the remaining range. The time complexity of this algorithm is O(logN): the running time is proportional not to N but to the number of times N can be halved before one element remains, which is about log₂N. Note that this only works because the list is sorted.

UNDERSTANDING THE NOTATIONS IN AN EASY WAY

  1. O(N): the set of functions that grow no faster than the given expression; it is an upper bound, which is why it is commonly used to describe the worst case of an algorithm's running time.
  2. Ω(N): the set of functions that grow at least as fast as the given expression; it is a lower bound, often loosely associated with the best case.
  3. Θ(N): the set of functions bounded both above and below by the expression, i.e. lying between the big-O and omega bounds; it describes a tight bound on the running time.

I HOPE THIS BLOG WAS INFORMATIVE AND GAVE YOU A FAIR IDEA OF TIME COMPLEXITY AND HOW TO CALCULATE IT.

THANK YOU!
