The Shortest Path Problem (SPP) is one of the most significant problems in combinatorial optimization. Beyond the vast number of direct applications, the SPP frequently serves as a subroutine in solving other optimization problems. This paper presents a comprehensive review of monotone priority queues, a class of data structures that play a crucial role in solving the SPP efficiently. Monotone priority queues are characterized by the property that their minimum key does not decrease over time, making them particularly effective for label-setting algorithms like Dijkstra’s. Some key data structures within this category are explored, emphasizing those derived directly from Dial’s algorithm, including variations of multi-level bucket structures and radix heaps. Theoretical complexities and practical considerations of these structures are discussed, with insights into their development and refinement provided through a historical timeline.
This paper presents a comprehensive overview of monotone priority queues and their pivotal role in optimizing Dijkstra’s shortest path algorithm. Its significance lies in detailing how exploiting specific properties of the shortest path problem, particularly the non-decreasing nature of extracted keys and the assumption of non-negative integer edge weights, leads to highly efficient priority queue implementations. These specialized data structures often outperform general-purpose priority queues in this domain.
The central innovation explored is the concept of a monotone priority queue, defined as one where the minimum key extracted never decreases over time. This property is inherently met by label-setting algorithms like Dijkstra’s, making these tailored priority queues highly effective.
Key innovations discussed include:
Dial’s 1-Level Bucket Structure: This foundational innovation implicitly uses a monotone priority queue. It organizes elements into an array of “buckets”, where each bucket B[i] holds the vertices whose current shortest path distance from the source is i. The algorithm simply extracts from the lowest-indexed non-empty bucket. This simple design, exploiting the integer nature and non-decreasing property of distances, provides a base complexity of O(m + nC), where m is the number of edges, n the number of vertices, and C the maximum arc weight.
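To make the mechanics concrete, here is a minimal sketch of a Dial-style 1-level bucket queue in Python. It is an illustration rather than the paper’s implementation: the class name BucketQueue and the max_key parameter are assumptions, and decrease-key is left to the caller (typically re-insertion with lazy deletion of stale copies).

```python
from collections import deque


class BucketQueue:
    """Illustrative Dial-style 1-level bucket queue for integer keys."""

    def __init__(self, max_key):
        # Bucket i holds the items whose current key (tentative distance) is i.
        self.buckets = [deque() for _ in range(max_key + 1)]
        self.cursor = 0  # lowest index that may still contain an item

    def insert(self, item, key):
        self.buckets[key].append(item)

    def extract_min(self):
        # Monotonicity: extracted keys never decrease, so the cursor only moves
        # forward and each bucket is scanned at most once over the whole run,
        # giving the O(nC) scanning term. Assumes the queue is non-empty.
        while not self.buckets[self.cursor]:
            self.cursor += 1
        return self.buckets[self.cursor].popleft(), self.cursor
```

In Dijkstra’s algorithm, max_key can be taken as (n - 1) * C, the largest possible finite distance, and a decrease-key is typically realized by re-inserting the vertex at its smaller key and skipping stale copies on extraction.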
Multi-Level Bucket Structures: To improve upon Dial’s algorithm, specifically to reduce the cost of scanning many empty buckets during extract-min operations, multi-level structures were developed.
A two-level structure uses √(C + 1) top-level buckets, each containing √(C + 1) bottom-level buckets. This reduces the search for the next minimum element from O(C) to O(√C), yielding a Dijkstra complexity of O(m + n√C).
Generalizing to k levels reduces the range scanned per extract-min from C to O(C^(1/k)), leading to complexities like O(km + n(k + C^(1/k))); the bucket-addressing arithmetic behind this is sketched in code after the next item. Modern implementations often use bit manipulation and the RAM model to achieve even better theoretical bounds, sometimes reaching O(m + nC) for practical purposes by cleverly packing information.
Hot Queues (Cherkassky et al.): This innovation combines the benefits of bucket structures with a general-purpose heap (such as a Fibonacci heap or Thorup’s heap). The idea is to direct elements to the heap if the active bucket in the multi-level structure contains few elements, thereby avoiding the expensive “expansions” or reorganizations that occur when a small bucket must be redistributed into a lower level. This hybrid approach aims for strong amortized theoretical bounds while maintaining practical efficiency.
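Returning to the multi-level bucket idea, the following is a small, hedged sketch of the addressing arithmetic behind a k-level structure. The function names and the base choice b = ceil(C^(1/k)) are illustrative assumptions, not taken from a specific implementation in the paper: a key is written in base b, and an element is kept at the coarsest level where its digit differs from that of the current minimum.

```python
import math


def digits(key, base, k):
    """Base-`base` digits of `key`, most significant first, padded to k digits."""
    return [(key // base ** i) % base for i in reversed(range(k))]


def placement(key, current_min, k, C):
    """Illustrative k-level bucket addressing.

    Each of the k levels holds b = ceil(C**(1/k)) buckets, so an extract-min
    scans at most k*b = O(k * C**(1/k)) buckets instead of O(C) as in the
    1-level structure. Simplification: assumes keys fit in k base-b digits;
    real structures handle wider ranges by addressing relative to the current
    minimum or by adding levels.
    """
    b = math.ceil(C ** (1.0 / k))
    kd, md = digits(key, b, k), digits(current_min, b, k)
    for level in range(k):            # level 0 is the coarsest (top) level
        if kd[level] != md[level]:
            return level, kd[level]
    return k - 1, kd[k - 1]           # keys equal to the minimum sit at the bottom level
```

With k = 2 this reduces to the two-level scheme described above; larger k shrinks the per-level fan-out at the cost of traversing more levels.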
Radix Heaps (Ahuja et al.): Distinct from simple bucket structures, radix heaps exploit the property that successive extracted minimum keys are non-decreasing. They employ buckets with exponentially increasing widths (e.g., powers of 2). This allows for efficient insertion and decrease-key operations by mapping keys to appropriate variable-width buckets. Two-level radix heaps further improved performance, bringing Dijkstra’s complexity to O(m + n log(C)/log log(C)). Integrating these with Fibonacci heaps further refined the search for non-empty inner buckets.
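As a concrete illustration of the exponentially growing bucket widths, here is one common way to pick a radix-heap bucket, namely by the most significant bit in which a key differs from the last extracted minimum. This bit-twiddling formulation and the function name are mine, not the paper’s; descriptions that maintain explicit bucket ranges are equivalent in spirit.

```python
def radix_bucket(key, last_min):
    """Illustrative radix-heap bucket choice for integer keys with key >= last_min.

    Bucket 0 holds keys equal to the last extracted minimum; bucket i >= 1 holds
    keys whose binary representation first differs from last_min at bit i - 1,
    so bucket widths grow exponentially and one bucket per key bit (plus bucket 0)
    suffices. A decrease-key can only move an element to an equal or lower-indexed
    bucket, which is what the amortized analysis relies on.
    """
    if key == last_min:
        return 0
    return (key ^ last_min).bit_length()  # 1-based index of the highest differing bit
```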
Theoretical Limits and RAM Model Optimizations (Thorup): Landmark work by Thorup demonstrated that with the assumption of the Random Access Machine (RAM) model and integer keys, Dijkstra’s algorithm can achieve nearly linear time. This involves sophisticated bit manipulation and insights into the equivalence between sorting and monotone priority queues, pushing the theoretical boundaries for these specialized data structures to O(m log log(n)) or even O(m + n log log(C)) under certain conditions.
The main prior ingredients that these innovations build upon are the standard priority queue operations (insert, delete, extract-min, and, crucially, decrease-key), which are fundamental to Dijkstra’s label-setting process, together with the assumption of non-negative integer arc weights bounded by the maximum arc weight C.
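As a point of reference, here is a minimal label-setting Dijkstra sketch in Python that marks where each of these operations is exercised. It uses the standard-library heapq with lazy re-insertion in place of an explicit decrease-key, purely for illustration; the monotone structures surveyed above support decrease-key directly, and the adjacency-list representation adj is an assumption of this sketch.

```python
import heapq


def dijkstra(adj, source):
    """Label-setting Dijkstra over non-negative integer arc weights.

    adj[u] is a list of (v, w) pairs. A true decrease-key is emulated by lazy
    re-insertion; the monotone priority queues discussed above support it directly.
    """
    dist = {source: 0}
    heap = [(0, source)]                       # insert(source, key=0)
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)             # extract-min; extracted keys never decrease
        if u in settled:
            continue                           # skip stale copy left by a lazy decrease-key
        settled.add(u)
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # insert / emulated decrease-key
    return dist


if __name__ == "__main__":
    # Tiny example; expected output: {0: 0, 1: 2, 2: 3}
    print(dijkstra({0: [(1, 2), (2, 5)], 1: [(2, 1)]}, 0))
```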