This note is part of my learning notes on Data Structures & Algorithms.


Amortized Analysis

Amortized analysis is a method used in computer science to bound the average cost per operation of an algorithm over a worst-case sequence of operations, rather than evaluating each operation in isolation; unlike average-case analysis, no probability is involved. When an algorithm has an occasional operation that is very slow but most other operations are cheap, this approach spreads the cost of the expensive operations over the many cheap ones.

Common Techniques Used In Amortized Analysis

  • Aggregate Analysis
  • Accounting / Banker’s / Charge Method
  • Potential Method

Aggregate Analysis

Aggregate analysis is a technique that involves calculating the total cost of a sequence of operations and then dividing this total by the number of operations to obtain the average cost per operation. This method works well when an upper bound on the total cost of the whole sequence is easy to establish; its limitation is that it assigns the same amortized cost to every type of operation.

Steps

  1. Determine the sequence of operations that we need to analyze.
  2. Determine an upper bound $T(n)$ on the total cost of a sequence of $n$ operations.
  3. Divide the total cost by the number of operations to get the average cost per operation, $\frac{T(n)}{n}$.
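
As a quick illustration of these steps (with made-up costs, not tied to any particular data structure): suppose every operation costs 1 unit except every 10th operation, which costs 10 units. Then

$$T(n) \leq \frac{9n}{10} \cdot 1 + \frac{n}{10} \cdot 10 = 1.9n, \qquad \frac{T(n)}{n} \leq 1.9 = O(1)$$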

Accounting / Banker’s / Charge Method

The accounting method is an amortized analysis technique where we assign a “charge” to each operation, and this charge might be more than the actual cost of the operation. The excess charge is stored as “credit” and can be used to pay for future operations that are more expensive. This method ensures that the total assigned charges cover the actual total cost of all operations.

Steps

  1. Determine a suitable charge for each operation.
  2. Track the credits accumulated and used for each operation.
  3. Make sure that the total charge is always greater than or equal to the actual cost, ensuring credits cover future expensive operations.
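
The bookkeeping in these steps can be sketched in code. The following is a minimal, illustrative ledger (the class name, charges, and costs are all made up): it records an assumed charge and the actual cost of each operation, and checks that the accumulated credit never goes negative, which is exactly the condition the method requires.

```python
# Sketch: a minimal credit ledger for the accounting method.
# Each operation is recorded as (assigned charge, actual cost); the running
# credit is the total charge minus the total actual cost, and it must never
# go negative.

class CreditLedger:
    def __init__(self) -> None:
        self.credit = 0

    def record(self, charge: int, actual_cost: int) -> None:
        self.credit += charge - actual_cost
        # Credits can only be spent if they were deposited earlier.
        assert self.credit >= 0, "charges do not cover the actual costs"

# Hypothetical usage: charge every operation 3 units while the actual costs
# are 1, 1, 1, 4 (a cheap run followed by one expensive operation).
ledger = CreditLedger()
for cost in [1, 1, 1, 4]:
    ledger.record(charge=3, actual_cost=cost)
```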

Potential Method

The potential method is a technique in amortized analysis that uses a potential function to measure the “stored energy”, or potential, of a data structure. The potential function $\Phi$ helps account for the cost of operations by considering both the actual cost and the change in potential.

Steps

  1. Choose an appropriate potential function $\Phi$ that reflects the state of the data structure.
  2. Calculate the change in potential $\Delta \Phi$ caused by each operation.
  3. Compute the amortized cost:

$$\text{Amortized Cost} = \text{Actual Cost} + (\Phi_{\text{new}} - \Phi_{\text{old}})$$
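
Writing $c_i$ for the actual cost of the $i$-th operation, $\hat{c}_i$ for its amortized cost, and $\Phi_i$ for the potential after it, the intermediate potentials telescope when we sum over a sequence of $n$ operations. This is why the method works: as long as the potential never drops below its initial value, the total amortized cost is an upper bound on the total actual cost.

$$\sum^{n}_{i=1} \hat{c}_i = \sum^{n}_{i=1} \left( c_i + \Phi_i - \Phi_{i-1} \right) = \sum^{n}_{i=1} c_i + \Phi_n - \Phi_0 \geq \sum^{n}_{i=1} c_i \quad \text{when } \Phi_n \geq \Phi_0$$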


Example 1: Multi-pop Operations In A Stack

Using aggregate analysis

  1. There is a sequence of $n$ operations on an initially empty stack: PUSH with $O(1)$ cost, POP with $O(1)$ cost, and MULTIPOP with $O(n)$ worst-case cost.
  2. Although a single MULTIPOP can cost $O(n)$, each object can be popped at most once for each time it is pushed, so the total number of pops (including those performed inside MULTIPOP) is at most the number of pushes, which is at most $n$. The worst-case time of any sequence of $n$ stack operations is therefore $O(n)$.
  3. The average cost of an operation is then $\frac{O(n)}{n} = O(1)$.
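
A small simulation makes the aggregate argument concrete. The sketch below (the operation mix and the helper name are illustrative assumptions) runs a random sequence of $n$ stack operations, counts the total actual cost, and checks that it never exceeds $2n$, since every element is pushed at most once and popped at most once.

```python
# Sketch: total actual cost of a random sequence of n stack operations
# (PUSH / POP / MULTIPOP), to see the aggregate O(n) bound in action.

import random

def total_cost(n: int) -> int:
    stack, cost = [], 0
    for _ in range(n):
        op = random.choice(["push", "push", "pop", "multipop"])
        if op == "push":
            stack.append(0)
            cost += 1                      # one push costs 1
        elif op == "pop" and stack:
            stack.pop()
            cost += 1                      # one pop costs 1
        elif op == "multipop" and stack:
            k = random.randint(1, len(stack))
            del stack[-k:]                 # MULTIPOP(k): pop the top k elements
            cost += k                      # ...at a cost of k
    return cost

n = 10_000
assert total_cost(n) <= 2 * n              # total is O(n), so the average is O(1)
```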

Example 2: Incrementing A Binary Counter

Using aggregate analysis

  1. There is a sequence of $n$ INCREMENT operations on a $k$-bit binary counter; each INCREMENT adds 1 to the value of the counter, and its cost is the number of bits flipped.
  2. A single execution of INCREMENT takes $\Theta(k)$ time in the worst case, which occurs when array $A$ contains all 1s. Thus, a sequence of $n$ INCREMENT operations on an initially zero counter takes $O(nk)$ time in the worst case.
  3. Tighten the bound: bit $A[i]$ flips $\lfloor n/2^i \rfloor$ times during the sequence, and for $i \geq k$, bit $A[i]$ does not exist. The total number of flips in the sequence is thus

$$\sum^{k-1}_{i=0} \left\lfloor \frac{n}{2^i} \right\rfloor < n \sum^{\infty}_{i=0} \frac{1}{2^i} = 2n$$

  4. The worst-case time for a sequence of $n$ INCREMENT operations on an initially zero counter is therefore $O(n)$.
  5. The average, or amortized, cost of each operation is $\frac{O(n)}{n} = O(1)$.
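
The same bound can be checked empirically. The sketch below (an illustrative model: the counter is a list of bits that grows instead of overflowing) performs $n$ INCREMENT operations starting from zero and verifies that the total number of bit flips stays below $2n$.

```python
# Sketch: counting bit flips over n INCREMENT operations on a binary counter
# that starts at zero. bits[0] is the least significant bit.

def increment(bits: list[int]) -> int:
    """Add 1 to the counter and return the number of bits flipped."""
    flips, i = 0, 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                        # flip each trailing 1 to 0
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)                     # grow the counter (no overflow here)
    else:
        bits[i] = 1                        # flip the first 0 to 1
    return flips + 1

n = 10_000
counter = [0]
total_flips = sum(increment(counter) for _ in range(n))
assert total_flips < 2 * n                 # matches the aggregate bound above
```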

Using potential method

  1. There is a sequence of $n$ INCREMENT operations, each with cost equal to the number of bits flipped, to add 1 to the value of the binary counter.
  2. Let $\Phi$ be the number of 1s in the binary counter.
  3. Suppose an INCREMENT flips $k$ bits in total: $k-1$ bits are flipped from 1 to 0, and (assuming the counter does not overflow) one bit is flipped from 0 to 1.
    • actual cost: $k$ (flipping $k$ bits)
    • potential change: each 1 flipped to 0 decreases the potential by 1, and the single 0 flipped to 1 increases it by 1, so $\Delta \Phi = 1 - (k - 1) = 2 - k$.
  4. Amortized cost: $k + (2 - k) = 2$, which is $O(1)$.
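
The arithmetic above can also be verified numerically. In the sketch below (an illustrative check: the counter is modelled as an unbounded Python integer, so it never overflows), the actual cost of an INCREMENT is the number of bits that change and the potential is the number of 1 bits; the amortized cost comes out as exactly 2 for every operation.

```python
# Sketch: checking that actual cost + change in potential = 2 for each INCREMENT.

def popcount(x: int) -> int:
    """Number of 1 bits, i.e. the potential of the counter."""
    return bin(x).count("1")

value = 0
for _ in range(1_000):
    new_value = value + 1
    actual_cost = popcount(value ^ new_value)          # bits flipped by the increment
    delta_phi = popcount(new_value) - popcount(value)  # change in potential
    assert actual_cost + delta_phi == 2                # amortized cost is exactly 2
    value = new_value
```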

Example 3: Dynamic Arrays

We have a dynamic array that doubles in size when full. We want to analyze the cost of inserting $n$ elements into this array.

Using banker’s method

  1. There is a sequence of $n$ INSERTION operations, each with $O(1)$ actual cost, plus occasional resize operations whose cost is proportional to the number of elements copied.
  2. Assign 3 units of charge to each INSERTION: 1 pays for the insertion itself, and the other 2 are stored as credit to pay for copying at future resizes.
  3. Insert an element (no resize): charge 3 units = actual cost 1 unit + credit 2 units.
  4. Insert an element that triggers a resize: the copying cost is covered by stored credits, because every element inserted since the last resize carries 2 units of credit, enough to pay for copying both itself and one element that was already in the array.
  5. Total charges
    • INSERTION: $n \times 3 = 3n$
    • RESIZING: paid entirely out of the saved credits.
    • The total actual cost: $O(n)$
  6. Amortized cost = $\frac{3n}{n} = 3$, which is $O(1)$.
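
The credit argument can be checked by simulation. The sketch below (the initial capacity, the charge of 3, and the cost model are assumptions matching the analysis above) deposits 3 units per insertion, pays 1 unit per element copied at each doubling, and asserts that the bank of credits never goes negative.

```python
# Sketch: the banker's argument for a doubling dynamic array, checked by
# simulation. Only sizes and credits are tracked, not the actual elements.

def simulate_inserts(n: int) -> None:
    capacity, size, bank = 1, 0, 0
    for _ in range(n):
        bank += 3                  # charge 3 units for this INSERTION
        if size == capacity:       # array is full: double it
            bank -= size           # pay 1 unit per element copied
            capacity *= 2
        bank -= 1                  # pay for the insertion itself
        size += 1
        assert bank >= 0           # stored credits always cover the copying

simulate_inserts(100_000)
```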