CHAPTER 4
Fundamentals of algorithms

Chung-Yang (Ric) Huang, National Taiwan University, Taipei, Taiwan
Chao-Yue Lai, National Taiwan University, Taipei, Taiwan
Kwang-Ting (Tim) Cheng, University of California, Santa Barbara, California
ABOUT THIS CHAPTER
In this chapter, we go through the fundamentals of algorithms that are essential for readers to appreciate the various EDA technologies covered in the rest of the book. For example, many EDA problems can be either represented with graph data structures or transformed into graph problems. We will go through the most representative ones, for which efficient algorithms have been well studied.
Readers should then be able to use these graph algorithms in solving many of their research problems. Nevertheless, many EDA problems are inherently difficult to solve: it is computationally infeasible to seek optimal solutions for them. Therefore, heuristic algorithms that yield suboptimal, yet reasonably good, results are usually adopted as practical approaches. We will also cover several selected heuristic algorithms in this chapter. At the end, we discuss mathematical programming algorithms, which provide the theoretical analysis of problem optimality. We especially focus on the mathematical programming problems that are most common in EDA applications.
4.1 INTRODUCTION
An algorithm is a sequence of well-defined instructions for completing a task or solving a problem. It can be described in a natural language, pseudocode, a flowchart, or even a programming language. For example, suppose we are interested in knowing whether a specific number is contained in a given sequence of numbers. By traversing the entire number sequence from a certain beginning number
FIGURE 4.1 Flowchart of the "Linear Search" algorithm. (Inputs: a sequence of numbers S and a number n. Starting from x = S.begin(), x is compared with n: a match reports FOUND; if x reaches S.end() without a match, the result is NOT FOUND; otherwise the search advances with x = x.next() and repeats.)
to a certain ending number, we use a search algorithm to find this specific number. Figure 4.1 illustrates this intuitive algorithm known as linear search.
Such kinds of algorithms can be implemented in a computer program and then used in real-life applications [Knuth 1968; Horowitz 1978]. However, the questions that must be asked before implementation are: “Is the algorithm efficient?” “Can the algorithm complete the task within an acceptable amount of time for a specific set of data derived from a practical application?” As we will see in the next section, there are methods for quantifying the efficiency of an algorithm. For a given problem, different algorithms can be applied, and each of them has a different degree of efficiency. Such metrics for measuring an algorithm’s efficiency can help answer the preceding questions and aid in the selection of the best possible algorithm for the task.
Devising an efficient algorithm for a given EDA problem could be challenging. Because a rich collection of efficient algorithms already exists for a set of standard problems where data are represented in the form of graphs, one possible approach is to model the given problem as a graph problem and then apply a known, efficient algorithm to solve the modeled graph problem. In Section 4.3, we introduce several graph algorithms that are commonly used for a wide range of EDA problems.
Many EDA problems are intrinsically difficult, because finding an optimal solution within a reasonable runtime is not always possible. For such problems, certain heuristic algorithms can be applied to find an acceptable solution first. If time or computer resources permit, such algorithms can further improve the result incrementally.
In addition to modeling EDA problems in graphs, it is sometimes possible to transform them into certain mathematical models, such as linear inequalities or nonlinear equations. The primary advantage of modeling an EDA problem with
a mathematical formula is that there are many powerful tools that can automatically handle these sorts of mathematical problems. They may yield better results than the customized heuristic algorithms. We will briefly introduce some of these useful mathematical programming techniques near the end of this chapter.
4.2 COMPUTATIONAL COMPLEXITY
A major criterion for a good algorithm is its efficiency, that is, how much time and memory are required to solve a particular problem. Intuitively, time and memory can be measured in real units such as seconds and megabytes. However, such measurements are not objective bases for comparing algorithms, because they depend on the computing power of the specific machine and on the specific data set. To standardize the measurement of algorithm efficiency, computational complexity theory was developed [Ullman 1984; Papadimitriou 1993, 1998; Wilf 2002]. It allows an algorithm's efficiency to be estimated and expressed conceptually as a mathematical function of its input size.
Generally speaking, the input size of an algorithm refers to the number of items in the input data set; the conventional symbol for input size is n. For example, when sorting n words, the input size is n. It is also possible for an algorithm to have an input size with multiple parameters. Graph algorithms, which will be introduced in Section 4.3, often have input sizes with two parameters: the number of vertices |V| and the number of edges |E| in the graph.
Computational complexity can be further divided into time complexity and space complexity, which estimate the time and memory requirements of an algorithm, respectively. In general, time complexity is considered much more important than space complexity, in part because the memory requirement of most algorithms is lower than the capacity of current machines. In the rest of the section, all calculations and comparisons of algorithm efficiency refer to time complexity, which we call simply complexity unless otherwise specified. Also, the terms time complexity and running time are used interchangeably in most cases.
The time complexity of an algorithm is calculated as the number of elementary computational steps required, expressed as a function of the input size. Most of the time, because of the presence of conditional constructs (e.g., if-else statements) in an algorithm, the number of necessary steps differs from input to input. Thus, average-case complexity should be a more meaningful characterization of an algorithm. However, its calculation is often difficult and complicated, which necessitates the use of a worst-case complexity metric. An algorithm's worst-case complexity is its complexity with respect to the worst possible inputs, which gives an upper bound on the average-case complexity. As we shall see, the worst-case complexity may sometimes provide a decent approximation of the average-case complexity.
The calculation of computational complexity is illustrated with two simple examples in Algorithms 4.1 and 4.2. Each of them entails the process of looking
up a word in a dictionary. The input size n refers to the total number of words in the dictionary, because every word is a possible target. The first algorithm, linear search, is presented in Algorithm 4.1. It starts looking for the target word t from the first word in the dictionary (Dic[0]) to the last word (Dic[n-1]). The conclusion "not found" is reached only after every word is checked. The second algorithm, binary search, takes advantage of the alphabetical ordering of the words in a dictionary. It first compares the word in the middle of the dictionary (Dic[mid]) with the target t. If t is alphabetically "smaller" than Dic[mid], t must lie in the front part of the dictionary, and the algorithm will then focus on the front part of the word list in the next iteration (line 5 of Binary_Search), and vice versa. In every iteration, the middle of the search region is compared with the target, and one half of the current region is discarded in the next iteration. Binary search continues until the target word t is matched or the search region becomes empty.
Algorithm 4.1 Linear Search Algorithm
Linear_Search(Array_of_words Dic[n], Target t)
1. for counter ctr from 0 to n-1
2. if (Dic[ctr] is t) return Dic[ctr];
3. return NOT_FOUND;
Algorithm 4.2 Binary Search Algorithm
Binary_Search(Array_of_words Dic[n], Target t)
1. Position low = 0, high = n-1;
2. while (low <= high) do
3.   Position mid = (low + high)/2;
4.   if (Dic[mid] < t) low = mid + 1;
5.   else if (Dic[mid] > t) high = mid - 1;
6.   else // Dic[mid] is t
7.     return Dic[mid];
8.   end if
9. end while
10. return NOT_FOUND;
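For concreteness, the two algorithms might be implemented in C++ as follows. This is a minimal sketch; the function names and the use of -1 for NOT_FOUND are our own conventions, and the dictionary is assumed to be stored as a std::vector of std::string, sorted alphabetically for the binary search.

#include <string>
#include <vector>

// Returns the index of t in dic, or -1 for NOT_FOUND (lines 1-2 of
// Linear_Search check one word per iteration).
int linear_search_dic(const std::vector<std::string>& dic,
                      const std::string& t) {
    for (std::size_t ctr = 0; ctr < dic.size(); ++ctr)
        if (dic[ctr] == t) return static_cast<int>(ctr);
    return -1;                                      // NOT_FOUND
}

// Requires dic to be sorted alphabetically; returns the index of t or -1.
int binary_search_dic(const std::vector<std::string>& dic,
                      const std::string& t) {
    int low = 0, high = static_cast<int>(dic.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // same as (low + high)/2, overflow-safe
        if (dic[mid] < t)      low = mid + 1;   // t can only be in the back half
        else if (dic[mid] > t) high = mid - 1;  // t can only be in the front half
        else                   return mid;      // dic[mid] is t
    }
    return -1;                                  // NOT_FOUND
}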
In linear search, the worst-case complexity is obviously n, because every word must be checked if the dictionary does not contain the target word at all. Different target words require different numbers of executions of lines 1-2 in Linear_Search, but on average n/2 checks are required. Thus, the average-case complexity is roughly n/2. Binary search is apparently quicker than linear search. Because every iteration of the while loop in Binary_Search discards one-half of the current search area, at most log₂ n lookups (simplified as lg n in the computer science community) are required, which is the worst-case complexity. Clearly n is larger than lg n, which shows that binary search is the more efficient algorithm. Its average-case complexity can be calculated as in Equation (4.1) by adding up the numbers of lookups over all possible targets (roughly 2^(k-1) of the n words are found on exactly the k-th lookup) and dividing the sum by n:

average-case complexity = (1·1 + 2·2 + 4·3 + 8·4 + … + (n/2)·lg n) / n
                        = lg n − 1 + 3/n                                  (4.1)
4.2.1 Asymptotic notations
In computational complexity theory, not all parts of an algorithm's running time are essential. In fact, only the rate of growth, or order of growth, of the running time matters when comparing the complexities of different algorithms. For example, consider two algorithms A and B, where A has a longer running time for smaller input sizes and B has a higher rate of growth of running time as the input size increases. Obviously, the running time of B will exceed that of A for input sizes greater than some threshold. Because in real applications the input size of a problem is typically very large, algorithm B will run more slowly, and thus we consider it the algorithm with the higher computational complexity.
Similarly, it is sufficient to describe the complexity of an algorithm by considering only the term with the highest rate of growth of running time. That is, if the computational complexity of an algorithm is formulated as an equation, we can focus only on its dominating term, because the lower-order terms are relatively insignificant for a large n. For example, the average-case complexity of Binary_Search, shown in Equation (4.1), can be simplified to lg n, leaving out the terms −1 and 3/n. Furthermore, we can also ignore the dominating term's constant coefficient, because it contributes little information when evaluating an algorithm's efficiency. In the example of Linear_Search in Algorithm 4.1, the worst-case and average-case complexities, n and n/2, respectively, are virtually equal under this criterion. In other words, they have asymptotically equal complexity for larger n and are usually represented with the asymptotic notations that follow.
Asymptotic notations are symbols used in computational complexity theory to express the efficiency of algorithms, with a focus on their orders of growth. The three most used notations are O-notation, Ω-notation, and Θ-notation.
FIGURE 4.2
Frequently used orders of functions and their aliases, along with their actual running times on a million-instructions-per-second machine with three input sizes: n = 100; 10,000; and 1,000,000.

O-notation   Also called         n = 100            n = 10,000          n = 1,000,000
O(1)         Constant time       0.000001 sec.      0.000001 sec.       0.000001 sec.
O(lg n)      Logarithmic time    0.000007 sec.      0.000013 sec.       0.00002 sec.
O(n)         Linear time         0.0001 sec.        0.01 sec.           1 sec.
O(n lg n)                        0.00066 sec.       0.13 sec.           20 sec.
O(n²)        Quadratic time      0.01 sec.          100 sec.            278 hours
O(n³)        Cubic time          1 sec.             278 hours           317 centuries
O(2ⁿ)        Exponential time    10^14 centuries    10^2995 centuries   10^301015 centuries
O(n!)        Factorial time      10^143 centuries   10^35645 centuries  N/A
4.2.1.1 O-notation
O-notation is the dominant method used to express the complexity of algorithms. It denotes the asymptotic upper bounds of the complexity functions. For a given function g(n), the expression O(g(n)) (read as “big-oh of g of n”) represents the set of functions
O(g(n)) = { f(n) : positive constants c and n₀ exist such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
A non-negative function f(n) belongs to the set O(g(n)) if there is a positive constant c such that f(n) ≤ c·g(n) for all sufficiently large n. We could write f(n) ∈ O(g(n)) because O(g(n)) is a set, but it is conventionally written as f(n) = O(g(n)). Readers should note that the equality sign denotes set membership in all kinds of asymptotic notations.
The definition of O-notation explains why lower-order terms and constant coefficients of leading terms can be ignored in complexity theory. The following are examples of legal expressions in computational theory:
n² = O(n²)        n³ + 1000n² + n = O(n³)
1000n = O(n)      20n³ = O(0.5n³ + n²)
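To see where the constants c and n₀ in the definition come from, consider the last expression above; the particular constants below are our own illustrative choices, and any pair satisfying the definition would do:

20n³ ≤ 40·(0.5n³ + n²) = 20n³ + 40n²   for all n ≥ 1

Taking c = 40 and n₀ = 1 therefore shows that 20n³ = O(0.5n³ + n²).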
Figure 4.2 shows the most frequently used O-notations, their names, and comparisons of actual running times with different values of n. The first order of functions, O(1), or constant time complexity, signifies that the algorithm's running time is independent of the input size and is the most efficient. The other O-notations are listed in their rank order of efficiency. An algorithm with quadratic time complexity O(n²) can be considered feasible for a relatively small n, but when n = 1,000,000, a quadratic-time algorithm takes dozens of
days to complete the task. An algorithm with a cubic time complexity may handle a problem with small-sized inputs, whereas an algorithm with exponential or factorial time complexity is virtually infeasible. If an algorithm’s time complexity can be expressed with or is asymptotically bounded by a polynomial function, it has polynomial time complexity. Otherwise, it has exponential time complexity. These will be further discussed in Subsection 4.2.2.
4.2.1.2 Ω-notation and Θ-notation
Ω-notation is the inverse of O-notation. It is used to express the asymptotic lower bounds of complexity functions. For a given function g(n), the expression Ω(g(n)) (read as "big-omega of g of n") denotes the set of functions:
Ω(g(n)) = { f(n) : positive constants c and n₀ exist such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
From the definitions of O-notation and Ω-notation, the following mutual relationship holds:

f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
Ω-notation receives much less attention than O-notation, because we are usually concerned with the greatest amount of time an algorithm may need rather than the least.
Θ-notation expresses the asymptotically tight bounds of complexity functions. Given a function g(n), the expression Θ(g(n)) (read as "big-theta of g of n") denotes the set of functions

Θ(g(n)) = { f(n) : positive constants c₁, c₂, and n₀ exist such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }
A function f(n) can be written as f(n) = Θ(g(n)) if there are positive constants c₁ and c₂ such that f(n) can be squeezed between c₁·g(n) and c₂·g(n) for a sufficiently large n. Comparing the definitions of the three asymptotic notations, the following relationship holds:

f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n))
In effect, this powerful relationship is often exploited for verifying the asymptotically tight bounds of functions [Knuth 1976].
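As a short worked illustration of this relationship (the polynomial below is our own example), take f(n) = 3n² + 10n and g(n) = n²:

3n² ≤ 3n² + 10n        for all n ≥ 1, so f(n) = Ω(n²)
3n² + 10n ≤ 4n²        for all n ≥ 10, so f(n) = O(n²)

Hence f(n) = Θ(n²), with c₁ = 3, c₂ = 4, and n₀ = 10 in the definition above.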
Although Θ-notation is more precise when characterizing algorithm complexity, O-notation is favored over Θ-notation for the following two reasons: (1) upper bounds are considered sufficient for characterizing algorithm complexity, and (2) it is often much more difficult to prove a tight bound than to prove an upper bound. In the remainder of the text, we will stick with the convention and use O-notation to express algorithm complexity.
4.2.2 Complexity classes
In the previous subsection, complexity was shown to characterize the efficiency of algorithms. In fact, complexity can also be used to characterize the problems themselves. A problem’s complexity is equivalent to the time complexity of the most efficient possible algorithm. For instance, the dictionary lookup problem mentioned in the introduction of Section 4.2 has a complexity of O(lg n), the complexity of Binary_Search in Algorithm 4.2.
To facilitate the exploration and discussion of the complexities of various problems, problems that share the same degree of complexity are grouped together, forming complexity classes. Many complexity classes have been established in the history of computer science [Baase 1978], but in this subsection we discuss only those that pertain to problems in EDA applications. We make the distinction between optimization and decision problems first, because these are key concepts within the area of complexity classes. Then, four fundamental and important complexity classes will be presented to help readers better understand the difficult problems encountered in EDA applications.
4.2.2.1 Decision problems versus optimization problems
Problems can be categorized into two groups according to the forms of their answers: decision problems and optimization problems. Decision problems ask for a "yes" or "no" answer. The dictionary lookup problem, for example, is a decision problem, because the answer can only be whether the target is found or not. An optimization problem, on the other hand, seeks an optimized value of a target variable. For example, in a combinational circuit, a critical path is a path from an input to an output along which the sum of the gate and wire delays is the largest. Finding a critical path in a circuit is an optimization problem. In this example, optimization means maximization of the target variable; in other optimization problems, it can mean minimization.
An example of a simple decision problem is the HAMILTONIAN CYCLE problem. The names of decision problems are conventionally given in all capital letters [Cormen 2001]. Given a set of nodes and a set of lines such that each line connects two nodes, a HAMILTONIAN CYCLE is a loop that goes through all the nodes without visiting any node twice. The HAMILTONIAN CYCLE problem asks whether such a cycle exists for a given graph that consists of a set of nodes and lines. Figure 4.3 gives an example in which a Hamiltonian cycle exists.
A famous optimization problem is the traveling salesman problem (TSP). As its name suggests, TSP aims at finding the shortest route for a salesman who needs to visit a certain number of cities in a round tour. Figure 4.4 gives a simple example of a TSP. There is also a version of the TSP as a decision problem: TRAVELING SALESMAN asks whether a route with a length under a constant k exists. The optimization version of TSP is more difficult to solve than its decision version, because if the former is solved, the latter can be answered immediately for any constant k. In fact, an optimization problem usually can be decomposed into a series of decision problems by using a different constant as the target for each decision subproblem to search for the optimal solution; a sketch of this decomposition appears after the figure captions below. Consequently, the optimization version of a problem always has a complexity equal to or greater than that of its decision version.

FIGURE 4.3 A graph with one HAMILTONIAN CYCLE marked with thickened lines.

FIGURE 4.4 (a) An example of the traveling salesman problem, with dots representing cities. (b) A non-optimal solution. (c) An optimal solution.
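A concrete C++ sketch of this decomposition follows. The function names, the brute-force decision oracle, and the binary search over the constant k are our own illustrative choices, not algorithms prescribed by the text; the oracle is written naively (exponential time) only to make the sketch self-contained.

#include <algorithm>
#include <vector>

// Hypothetical decision oracle for TRAVELING SALESMAN: returns true if a
// round tour of total length <= k exists. Brute force: try every visiting
// order with city 0 fixed as the starting point.
bool traveling_salesman_decision(const std::vector<std::vector<int>>& dist,
                                 int k) {
    std::vector<int> tour;
    for (std::size_t i = 0; i < dist.size(); ++i)
        tour.push_back(static_cast<int>(i));
    do {
        int length = 0;
        for (std::size_t i = 0; i < tour.size(); ++i)
            length += dist[tour[i]][tour[(i + 1) % tour.size()]];
        if (length <= k) return true;
    } while (std::next_permutation(tour.begin() + 1, tour.end()));
    return false;
}

// Solves the optimization version by decomposing it into a series of
// decision subproblems, one per candidate constant k (binary search on k).
int tsp_optimal_length(const std::vector<std::vector<int>>& dist,
                       int upper_bound) {
    int low = 0, high = upper_bound;   // the optimal length lies in [0, upper_bound]
    while (low < high) {
        int mid = low + (high - low) / 2;
        if (traveling_salesman_decision(dist, mid))
            high = mid;                // some tour of length <= mid exists
        else
            low = mid + 1;             // every tour is longer than mid
    }
    return low;                        // length of the shortest round tour
}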
4.2.2.2 The complexity classes P versus NP
The complexity class P, which stands for polynomial, consists of problems that can be solved with known polynomial-time algorithms. In other words, for any problem in the class P, an algorithm of time complexity O(n^k) exists, where k is a constant. The dictionary lookup problem mentioned in Section 4.2 lies in P, because Linear_Search in Algorithm 4.1 has a complexity of O(n).
The nondeterministic polynomial or NP complexity class involves the concept of a nondeterministic computer, so we will explain this idea first. A nondeterministic computer is not a device that can be created from physical components but is a conceptual tool that only exists in complexity theory. A deterministic computer, or an ordinary computer, solves problems with deterministic algorithms. The characterization of determinism as applied to an algorithm means that at any point in the process of computation the next step is always determined or uniquely defined by the algorithm and the inputs. In other words, given certain inputs and a deterministic computer, the result is always the same no matter how many times the computer executes the algorithm. By contrast, in a nondeterministic computer multiple
possibilities for the next step are available at each point in the computation, and the computer will make a nondeterministic choice from these possibilities, which will somehow magically lead to the desired answer. Another way to understand the idea of a nondeterministic computer is that it can execute all possible options in parallel at a certain point in the process of computation, compare them, and then choose the optimal one before continuing.
Problems in the NP complexity class have three properties:

1. They are decision problems.
2. They can be solved in polynomial time on a nondeterministic computer.
3. Their solutions can be verified for correctness in polynomial time on a deterministic computer.
The TRAVELING SALESMAN decision problem satisfies the first two of these properties. It also satisfies the third property, because the length of the solution route can be calculated to verify whether it is under the target constant k in linear time with respect to the number of cities. TRAVELING SALESMAN is, therefore, an NP class problem. Following the same reasoning process, HAMILTONIAN CYCLE is also in this class.
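As an illustration of the third property, such a verifier takes only a few lines of C++. The function below is our own sketch: it checks a claimed solution route against the target constant k in time linear in the number of cities (checking that each city appears exactly once is also linear-time and is omitted here for brevity).

#include <vector>

// Verifies a claimed TRAVELING SALESMAN solution in O(n) time: 'route'
// lists the cities in visiting order, 'dist' is the inter-city distance
// matrix, and k is the target constant of the decision problem.
bool verify_tsp_route(const std::vector<int>& route,
                      const std::vector<std::vector<int>>& dist, int k) {
    int length = 0;
    for (std::size_t i = 0; i < route.size(); ++i) {
        int from = route[i];
        int to   = route[(i + 1) % route.size()];  // wrap around: a round tour
        length  += dist[from][to];
    }
    return length <= k;   // "yes" answer exactly when the route meets the bound
}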
A problem that can be solved in polynomial time on a deterministic computer can certainly also be solved in polynomial time on a nondeterministic computer. Thus, P ⊆ NP. However, the question of whether P = NP remains unresolved: no one has yet been able to prove or disprove it. To facilitate this proof (or disproof), the most difficult problems in the class NP are grouped together as another complexity class, NP-complete; proving P = NP is equivalent to proving P = NP-complete.
4.2.2.3 The complexity class NP-complete
Informally speaking, the complexity class NP-complete (or NPC) consists of the most difficult problems in the NP class. Formally speaking, for an arbitrary problem Pa in NP and any problem Pb in the class NPC, a polynomial transformation that transforms an instance of Pa into an instance of Pb exists.
A polynomial transformation can be defined as follows: given two problems Pa and Pb, a transformation (or reduction) from Pa to Pb expresses any instance of Pa as an instance of Pb. The transformed instance of Pb can then be solved by an algorithm for Pb, and its answer can be mapped back to an answer to the instance of Pa. A polynomial transformation is a transformation with polynomial time complexity. If a polynomial transformation from Pa to Pb exists, we say that Pa is polynomially reducible to Pb. We now illustrate this idea by showing that the decision problem HAMILTONIAN CYCLE is polynomially reducible to another decision problem, TRAVELING SALESMAN.
Given a graph consisting of n nodes and m lines, with each line connecting two nodes among the n nodes, a HAMILTONIAN CYCLE consists of n lines that traverse all n nodes, as in the example of Figure 4.3. This HAMILTONIAN CYCLE problem can be transformed into a TRAVELING SALESMAN problem by assigning a distance of 1 to every pair of nodes connected by a line and a distance of 2 to every pair that is not, and then asking whether a round tour of total length at most n exists. Such a tour must consist of n distance-1 connections, so it exists exactly when the graph contains a HAMILTONIAN CYCLE.
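The transformation itself is easily seen to be polynomial; a C++ sketch (our own illustration, assuming an adjacency-matrix representation of the graph) runs in O(n²) time:

#include <vector>

// Transforms a HAMILTONIAN CYCLE instance into a TRAVELING SALESMAN
// instance: connected node pairs get distance 1, unconnected pairs
// distance 2. The graph has a HAMILTONIAN CYCLE exactly when the
// resulting TSP instance admits a round tour of length <= n.
std::vector<std::vector<int>> reduce_hc_to_tsp(
        const std::vector<std::vector<bool>>& adj) {
    std::size_t n = adj.size();
    std::vector<std::vector<int>> dist(n, std::vector<int>(n, 2));
    for (std::size_t i = 0; i < n; ++i) {
        dist[i][i] = 0;                     // no self-travel
        for (std::size_t j = 0; j < n; ++j)
            if (adj[i][j]) dist[i][j] = 1;  // a line connects nodes i and j
    }
    return dist;   // decision target: does a tour of length <= n exist?
}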