Big O notation is used to describe the complexity of an algorithm in terms of the time or space it consumes during execution. It is an essential tool for expressing the performance of an algorithm and determining the worst-case complexity of a program.
To understand it in detail, let's go through some code examples and use Big O notation to calculate their performance.
If we calculate the complexity of the following method, its Big O notation will be O(1):
static int SumNumbers(int a, int b)
{
    return a + b;
}
This is because, no matter what values are passed as parameters, the method performs a single addition and returns the result; the amount of work does not grow with the input.
Let's consider another program that loops through a list. Its Big O notation will be O(N):
static bool FindItem(List<string> items, string value)
{
    foreach (var item in items)
    {
        if (item == value)
        {
            return true;
        }
    }
    return false;
}
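To see both methods in action, here is a minimal, runnable sketch; the Program class, the Main method, and the sample data are assumptions added purely for demonstration:

using System;
using System.Collections.Generic;

class Program
{
    // O(1): a single addition, regardless of the operand values.
    static int SumNumbers(int a, int b)
    {
        return a + b;
    }

    // O(N): in the worst case, every item in the list is compared once.
    static bool FindItem(List<string> items, string value)
    {
        foreach (var item in items)
        {
            if (item == value)
            {
                return true;
            }
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(SumNumbers(2, 3));          // 5

        var items = new List<string> { "red", "green", "blue" };
        Console.WriteLine(FindItem(items, "green"));  // True: found after two comparisons
        Console.WriteLine(FindItem(items, "purple")); // False: all three items compared (worst case)
    }
}

Note that FindItem hits its worst case when the value is absent: every element is compared before the method returns false, which is why its running time grows linearly with the size of the list.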