Analyzing Nonrecursive Algorithms

Time Efficiency of Nonrecursive Algorithms

General Plan for Analysis

  1. Decide on parameter n (input size).
  2. Identify algorithm’s basic operation.
  3. Determine worst, average, and best cases for input of size n.
  4. Set up a sum of the number of times the basic operation is executed.
  5. Simplify sum using standard formulas and rules.

And now, a quick-fire round of problems.


Example: Maximum Element

Q: Analyze the time complexity

ALGORITHM MaxElement(A[0..n-1])
// Determines value of largest element in a given array
// Input: Array A[0..n-1] of real numbers
// Output: Value of largest element in A
    maxval \leftarrow A[0]
    for i \leftarrow 1 to n-1 do
        if A[i] \gt maxval
            maxval \leftarrow A[i]
    return maxval

A:

Our basic operation is the comparison A[i] > maxval.

The best, worst, and average cases are all the same here: the number of comparisons depends only on n, because the loop always runs over the entire array no matter what values it holds.

Looking at the for loop, we do i \leftarrow 1 to n - 1.

C(n) = \sum_{i=1}^{n-1} 1 = ?

Note how each iteration does a single basic operation.

Thus,

C(n) = \sum_{i=1}^{n-1} 1 = n - 1

Therefore, the time complexity is C(n) = n - 1 \in \Theta(n).

Note: See Essential Summation Formulas if you’re lost on the simplification. In this case, we used the counting formula.
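
To make the count concrete, here is a minimal Python sketch of MaxElement (the function name and the comparison counter are my additions, not part of the pseudocode above):

def max_element(A):
    """Return the largest value in A and the number of comparisons made."""
    maxval = A[0]
    comparisons = 0                 # counts executions of the basic operation
    for i in range(1, len(A)):      # i <- 1 to n-1
        comparisons += 1
        if A[i] > maxval:           # basic operation: A[i] > maxval
            maxval = A[i]
    return maxval, comparisons

print(max_element([3, 1, 4, 1, 5, 9, 2, 6]))   # (9, 7): n = 8, so n - 1 = 7 comparisons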

Example: Element Uniqueness

Q: Find the worst-case time complexity.

ALGORITHM UniqueElements(A[0..n-1])
// Determines whether all elements in given array are distinct
// Input: An array A[0..n-1]
// Output: Returns true if all elements in A are distinct, false otherwise.
    for i \leftarrow 0 to n - 2 do
        for j \leftarrow i + 1 to n - 1 do
            if A[i] = A[j] return false
    return true

A:

The basic operation is the A[i] = A[j] check.

We can see that in the worst case, every pair of elements gets compared, because no match is found until (at the earliest) the very last comparison. In the best case, the first comparison finds a match and we return immediately.

Let’s define this mathematically.

C_\text{worst} (n) = \sum_{i=0}^{n-2} \left( \sum_{j=i+1}^{n-1} 1 \right)

Now, simplify:

\begin{aligned} C_\text{worst} (n) &= \sum_{i=0}^{n-2} \left( \sum_{j=i+1}^{n-1} 1 \right) \\ &= \sum_{i=0}^{n-2} \left[ (n-1) - (i + 1) + 1 \right] \\ &= \sum_{i=0}^{n-2} (n-1-i) \end{aligned}

Note: See Essential Summation Formulas if you’re lost on the simplification. In this case, we used the counting formula and the arithmetic series.

\begin{aligned} \sum_{i=0}^{n-2} (n-1-i) &= (n-1) + (n-2) + ... + 1 \\ &= \frac{(n-1)n}{2} \\ &\approx \frac{1}{2}n^2 \in \Theta(n^2) \end{aligned}

We skip the average case for this problem because solving it is beyond our current abilities.
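
As a sanity check on the worst-case count above, here is a Python sketch of UniqueElements with a comparison counter added (the counter and function name are mine); an all-distinct input forces every pair to be compared:

def unique_elements(A):
    """Return (all_distinct, comparisons) for the array A."""
    n = len(A)
    comparisons = 0
    for i in range(n - 1):              # i <- 0 to n-2
        for j in range(i + 1, n):       # j <- i+1 to n-1
            comparisons += 1
            if A[i] == A[j]:            # basic operation: A[i] = A[j]
                return False, comparisons
    return True, comparisons

print(unique_elements(list(range(10))))   # (True, 45), and 45 = 10*9/2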

Example: Matrix Multiplication

Q: Find the time efficiency

ALGORITHM MatrixMultiplication(A[0..n-1,0..n-1], B[0..n-1,0..n-1])
// Multiplies two n-by-n matrices by the definition-based algorithm
// Input: Two n-by-n matrices A and B
// Output: Matrix C = AB
    for i \leftarrow 0 to n - 1 do
        for j \leftarrow 0 to n - 1 do
            C[i,j] \leftarrow 0.0
            for k \leftarrow 0 to n - 1 do
                C[i,j] \leftarrow C[i,j] + A[i,k] * B[k,j]
    return C

A:

The basic operation is C[i,j] \leftarrow C[i,j] + A[i,k] * B[k, j]

To find the time efficiency, recall that T(n) \approx c_\text{op} C(n).

This is from Theoretical Analysis (CS3310)

We’ll start by finding the number of times the basic operation runs, C(n).

\begin{aligned} C(n) &= \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} 1 \\ &= \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} n \\ &= \sum_{i=0}^{n-1} n^2 \\ &= n^3 \end{aligned}

We simplified the above with, once again, the Essential Summation Formulas.

Now, we can plug C(n) back into the time-efficiency formula

\begin{aligned} T(n) &\approx c_\text{op} C(n) \\ &= c_\text{op} n^3 \end{aligned}

Since c_\text{op} is a constant, we conclude T(n) \in \Theta(n^3)
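
Here is a small Python sketch of the definition-based algorithm (the function name and the multiplication counter are my additions) that makes the n^3 count concrete:

def matrix_multiply(A, B):
    """Multiply two n-by-n matrices (lists of lists); also count multiplications."""
    n = len(A)
    C = [[None] * n for _ in range(n)]
    multiplications = 0
    for i in range(n):
        for j in range(n):
            C[i][j] = 0.0
            for k in range(n):
                multiplications += 1
                C[i][j] += A[i][k] * B[k][j]   # basic operation
    return C, multiplications

product, count = matrix_multiply([[1, 0], [0, 1]], [[1, 2], [3, 4]])
print(product)   # [[1.0, 2.0], [3.0, 4.0]]
print(count)     # 8 = 2^3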

Example: Counting Binary Digits

ALGORITHM Binary(n)
// Input: A positive decimal integer n
// Output: Number of binary digits in n's binary representation
    count \leftarrow 1
    while n \gt 1 do
        count \leftarrow count \text{$+$} 1
        n \leftarrow \lfloor n / 2 \rfloor
    return count

The basic operation is n \leftarrow \lfloor n / 2 \rfloor.

This can’t be analyzed by setting up a sum the way the previous examples were: the while loop’s iteration count isn’t spelled out by explicit loop bounds, so we have to reason about how quickly n shrinks.

Instead, we ask: “How many times can we divide n by 2 before reaching 1?”

The answer is approximately \log_2 n times; more precisely, the loop body runs \lfloor \log_2 n \rfloor times, so count finishes at \lfloor \log_2 n \rfloor + 1, which is exactly the number of binary digits of n.

We’ll go into this more in later chapters.

Therefore: T(n) \in \Theta(\log n)
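
A runnable Python version of Binary (a sketch; the function name is my own, and bit_length is used only to cross-check the answer):

def binary_digit_count(n):
    """Return the number of binary digits of a positive integer n."""
    count = 1
    while n > 1:
        count += 1
        n = n // 2          # basic operation: floor(n / 2)
    return count

for n in (1, 2, 13, 1024):
    print(n, binary_digit_count(n), n.bit_length())   # the two counts always agree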