Day 11 Asymptotic Notations

Instead of timing a program, asymptotic notation lets us describe its runtime by counting how many instructions the computer has to perform as a function of the size of the program’s input, N.

Big Theta (Θ):

The first form of asymptotic notation we will explore is big Theta (denoted by Θ). We use big Theta when a program has only one case in terms of runtime. But what exactly does that mean? Take a look at the pseudocode below for a function that prints the values in a list:

Function with input that is a list of size N:
    For each value in the list:
        Print the value
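
No matter what values the list holds, this loop always performs exactly N print operations, so the runtime is the same in every case: Θ(N). A minimal Python version of the same idea (the name print_values is just illustrative) might look like this:

def print_values(values):
  # One print per element: always exactly N iterations, so the runtime is Θ(N).
  for value in values:
    print(value)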
 
But what happens when a single program has multiple runtime cases? We will explore that next.
 
Big Omega (Ω) and Big O (O):

Sometimes, a program may have a different runtime for the best case and the worst case. For instance, a program could have a best case runtime of Θ(1) and a worst case of Θ(N). When this happens we use a different notation: big Omega (Ω) describes the best case, and big O (O) describes the worst case. Take a look at the following pseudocode that returns True if 12 is in the list and False otherwise:
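
Function with input that is a list of size N:
    For each value in the list:
        If value is equal to 12:
            Return True
    Return False

In the best case, 12 is the first value in the list, the function returns after a single comparison, and the best case runtime is Ω(1). In the worst case, 12 is not in the list at all, every one of the N values must be checked, and the worst case runtime is O(N).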
 
Adding Runtimes:
 
Sometimes, a program has so much going on that it’s hard to find its runtime all at once. Take a look at the pseudocode program below that first prints the positive values up to N and then returns the number of times N can be divided by 2 until it reaches 1.
 
Function that takes a positive integer N:
    Set a variable i equal to 1
    Loop until i is equal to N:
        Print i
        Increment i

    Set a count variable to 0
    Loop while N is not equal to 1:
        Increment count
        N = N/2
    Return count

Rather than look at this program all at once, let’s divide it into two chunks: the first loop and the second loop.

  • In the first loop, we iterate until we reach N. Thus the runtime of the first loop is Θ(N). 
  • However, the second loop, as demonstrated in a previous exercise, runs in Θ(log N). 

Now, we can add the runtimes together, so the runtime is Θ(N) + Θ(log N).

However, when analyzing the runtime of a program, we only care about the slowest part, and because Θ(N) grows faster than Θ(log N), we would actually just say the runtime of this program is Θ(N). It is also appropriate to say the runtime is O(N): if the program runs in Θ(N) for every case, then it certainly runs in Θ(N) (and therefore O(N)) in the worst case. Most of the time, people will just use big O notation.
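
A runnable Python version of this pseudocode (the function name count_halvings is just illustrative) makes the two chunks easy to see:

def count_halvings(n):
  # First chunk: print 1, 2, ... up to n - 1, which takes Θ(N) steps.
  i = 1
  while i != n:
    print(i)
    i += 1

  # Second chunk: halve n until it reaches 1, which takes Θ(log N) steps.
  count = 0
  while n != 1:
    count += 1
    n = n // 2  # integer division stands in for "N = N/2"
  return count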

Space Complexity:

Asymptotic notation is often used to describe the runtime of a program or algorithm, but it can also be used to describe the space, or memory, that a program or algorithm will need.

Think about a simple function that takes in two numbers and returns their sum:

def add_numbers(a, b):
  return a + b

This function has a space complexity of O(1), because the amount of space it needs will not change based on the input. While this function also has a constant runtime of O(1), most functions do not have matching space and time complexities. 

Let’s take a look at another function:

def simple_loop(input_array):
  for i in input_array:
    print(i)

As we know, a simple for loop that goes through every element in an array of size n has a linear runtime of O(n). However, this function takes O(1) space: it only needs a single loop variable, so the amount of space it uses does not grow with the size of the input.
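
For contrast, here is a minimal sketch (the name double_values is just illustrative) of a function whose space needs do grow with the input:

def double_values(input_array):
  # Builds a new list with one entry per input element, so both the
  # runtime and the extra space grow linearly with the input: O(n).
  doubled = []
  for i in input_array:
    doubled.append(i * 2)
  return doubled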

A recursive function that is passed the same array or object in each call doesn’t add that array or object to the space complexity, because it is passed by reference (which is how Python passes objects) rather than copied.
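
As a minimal sketch (binary_search here is an illustrative recursive function, assuming a sorted list), passing the same list plus index bounds reuses one list object, whereas slicing the list on each call would copy elements and add O(n) extra space:

def binary_search(values, target, lo=0, hi=None):
  # The same list object is reused on every recursive call; only the
  # lo/hi indices change, so the list adds no extra space.
  # (Slicing, e.g. values[lo:hi], would copy elements on every call.)
  if hi is None:
    hi = len(values)
  if lo >= hi:
    return False
  mid = (lo + hi) // 2
  if values[mid] == target:
    return True
  if values[mid] < target:
    return binary_search(values, target, mid + 1, hi)
  return binary_search(values, target, lo, mid)

Note that each recursive call still uses a stack frame, so the call stack itself grows with the recursion depth; the point above is only that the list is not duplicated.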

 
