Chapter 13
Algorithm Analysis and Design
If you have worked your way through to this point in the book, you are well on the way to becoming a programmer. Way back in Chapter 1, I discussed the relationship between programming and the study of computer science. Now that you have some programming skills, you are ready to start considering the broader issues in the field. Here we will take up one of the central issues, namely the design and analysis of algorithms.
13.1 Searching
Let’s begin by considering a very common and well-studied programming problem: search. Search is the process of looking for a particular value in a collection. For example, a program that maintains the membership list for a club might need to look up the information about a particular member. This involves some form of search process.
13.1.1 A Simple Searching Problem
To make the discussion of searching algorithms as simple as possible, let’s boil the problem down to its essence. Here is the specification of a simple searching function:
def search(x, nums):
    # nums is a list of numbers and x is a number
    # RETURNS the position in the list where x occurs or -1 if
    #   x is not in the list.
Here are a couple of interactive examples that illustrate its behavior:
>>> search(4, [3, 1, 4, 2, 5])
2
>>> search(7, [3, 1, 4, 2, 5])
-1
In the first example, the function returns the index where 4 appears in the list. In the second example, the return value -1 indicates that 7 is not in the list.
You may recall from our discussion of list operations that Python actually provides a number of built-in search-related methods. For example, we can test to see if a value appears in a sequence using in.
if x in nums:
    # do something
If we want to know the position of x in a list, the index method fills the bill nicely.
>>> nums = [3, 1, 4, 2, 5]
>>> nums.index(4)
2
In fact, the only difference between our search function and index is that the latter raises an exception if the target value does not appear in the list. We could implement the search function using index by simply catching the exception and returning -1 for that case.
def search(x, nums):
    try:
        return nums.index(x)
    except:
        return -1
This approach begs the question, however. The real issue is how does Python actually search the list? What is the algorithm?

13.1.2 Strategy 1: Linear Search
Let’s try our hand at developing a search algorithm using a simple “be the computer” strategy. Suppose that I gave you a page full of numbers in no particular order and asked whether the number 13 is in the list. How would you solve this problem? If you are like most people, you would simply scan down the list comparing each value to 13. When you see 13 in the list, you quit and tell me that you found it. If you get to the very end of the list without seeing 13, then you tell me it’s not there.
This strategy is called a linear search. You are searching through the list of items one by one until the target value is found. This algorithm translates directly into simple code.

def search(x, nums):
    for i in range(len(nums)):
        if nums[i] == x:
            # item found, return the index value
            return i
    # loop finished, item was not in list
    return -1

This algorithm was not hard to develop, and it will work very nicely for modest-sized lists. For an unordered list, this algorithm is as good as any. The Python in and index operators both implement linear searching algorithms.
If we have a very large collection of data, we might want to organize it in some way so that we don’t have to look at every single item to determine where, or if, a particular value appears in the list. Suppose that the list is stored in sorted order (lowest to highest). As soon as we encounter a value that is greater than the target value, we can quit the linear search without looking at the rest of the list. On average, that saves us about half of the work. But, if the list is sorted, we can do even better than this.
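To make this early-exit idea concrete, here is one possible sketch of a linear search over a sorted list; the name sortedLinearSearch is just a label invented for this variation, and the function assumes nums is already in lowest-to-highest order.

def sortedLinearSearch(x, nums):
    # nums is assumed to be sorted from lowest to highest
    for i in range(len(nums)):
        if nums[i] == x:
            # item found, return the index value
            return i
        elif nums[i] > x:
            # we have passed the spot where x would have to be; quit early
            return -1
    # reached the end of the list without finding x
    return -1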

13.1.3 Strategy 2: Binary Search
When a list is ordered, there is a much better searching strategy, one that you probably already know. Have you ever played the number guessing game? I pick a number between 1 and 100, and you try to guess what it is. Each time you guess, I will tell you if your guess is correct, too high, or too low. What is your strategy?
If you play this game with a very young child, they might well adopt a strategy of simply guessing numbers at random. An older child might employ a systematic approach corresponding to linear search, guessing 1, 2, 3, 4, ... until the mystery value is found.
Of course, virtually any adult will first guess 50. If told that the number is higher, then the range of possible values is 50–100. The next logical guess is 75. Each time we guess the middle of the remaining range to try to narrow down the possible range. This strategy is called a binary search. Binary means two, and at each step, we are dividing the possible range into two parts.
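To see how quickly this halving strategy closes in, here is a small sketch (my own illustration, with the made-up name guessCount) that simulates the guessing game for a secret number between 1 and 100 and counts the guesses; no secret ever takes more than seven.

def guessCount(secret):
    # count the guesses the halving strategy needs for a secret in 1..100
    low, high = 1, 100
    guesses = 0
    while True:
        guess = (low + high) // 2      # always guess the middle of the range
        guesses = guesses + 1
        if guess == secret:
            return guesses
        elif guess < secret:
            low = guess + 1            # secret is higher; discard the lower half
        else:
            high = guess - 1           # secret is lower; discard the upper half

# max(guessCount(n) for n in range(1, 101)) evaluates to 7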
We can employ a binary search strategy to look through a sorted list. The basic idea is that we use two variables to keep track of the endpoints of the range in the list where the item could be. Initially, the target could be anywhere in the list, so we start with variables low and high set to the first and last positions of the list, respectively.
The heart of the algorithm is a loop that looks at the item in the middle of the remaining range to compare it to x. If x is smaller than the middle item, then we move high down, so that the search is narrowed to the lower half. If x is larger, then we move low up, and the search is narrowed to the upper half. The loop terminates when x is found or there are no more places left to look (i.e., low > high). Here is the code:

def search(x, nums):
    low = 0
    high = len(nums) - 1
    while low <= high:             # There is still a range to search
        mid = (low + high) / 2     # position of middle item
        item = nums[mid]
        if x == item:              # Found it! Return the index
            return mid
        elif x < item:             # x is in lower half of range
            high = mid - 1         #   move top marker down
        else:                      # x is in upper half
            low = mid + 1          #   move bottom marker up
    return -1                      # no range left to search, x is not there

This algorithm is quite a bit more sophisticated than the simple linear search. You might want to trace through a couple of example searches to convince yourself that it actually works.

13.1.4 Comparing Algorithms
So far, we have developed two solutions to our simple searching problem. Which one is better? Well, that depends on what exactly we mean by better. The linear search algorithm is much easier to understand and implement. On the other hand, we expect that the binary search is more efficient, because it doesn’t have to look at every value in the list. Intuitively, then, we might expect the linear search to be a better choice for small lists and binary search a better choice for larger lists. How could we actually confirm such intuitions?
One approach would be to do an empirical test. We could simply code up both algorithms and try them out on various sized lists to see how long the search takes. These algorithms are both quite short, so it would not be difficult to run a few experiments. When I tested the algorithms on my particular computer (a somewhat dated laptop), linear search was faster for lists of length 10 or less, and there was no significant difference in the range of length 10–1000. After that, binary search was a clear winner. For a list of a million elements, linear search averaged 2.5 seconds to find a random value, whereas binary search averaged only 0.0003 seconds.
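If you would like to run a similar experiment yourself, a rough timing harness along these lines will do. This sketch is my own (the chapter does not show its test code), the name timeSearch is made up for the illustration, and the numbers you get will depend on your machine; it assumes a search function with the signature used above.

import time
import random

def timeSearch(searchFn, nums, trials=100):
    # average the time needed to look up randomly chosen targets in nums
    total = 0.0
    for trial in range(trials):
        target = random.choice(nums)
        start = time.time()
        searchFn(target, nums)
        total = total + (time.time() - start)
    return total / trials

# Example use, assuming search is one of the versions defined in this chapter:
#   nums = list(range(1000000))        # sorted, as binary search requires
#   print(timeSearch(search, nums))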
The empirical analysis has confirmed our intuition, but these are results from one particular machine under specific circumstances (amount of memory, processor speed, current load, etc.). How can we be sure that the results will always be the same?
Another approach is to analyze our algorithms abstractly to see how efficient they are. Other factors being equal, we expect the algorithm with the fewest number of “steps” to be the more efficient. But how do we count the number of steps? For example, the number of times that either algorithm goes through its main loop will depend on the particular inputs. We have already guessed that the advantage of binary search increases as the size of the list increases.
Computer scientists attack these problems by analyzing the number of steps that an algorithm will take relative to the size or difficulty of the specific problem instance being solved. For searching, the difficulty is determined by the size of the collection. Obviously, it takes more steps to find a number in a collection of a million than it does in a collection of ten. The pertinent question is how many steps are needed to find a value in a list of size n. We are particularly interested in what happens as n gets very large.
Let’s consider the linear search first. If we have a list of ten items, the most work our algorithm might have to do is to look at each item in turn. The loop will iterate at most ten times. Suppose the list is twice as big. Then we might have to look at twice as many items. If the list is three times as large, it will take three times as long, etc. In general, the amount of time required is linearly related to the size of the list n. This is what computer scientists call a linear time algorithm. Now you really know why it’s called a linear search.
What about the binary search? Let’s start by considering a concrete example. Suppose the list contains sixteen items. Each time through the loop, the remaining range is cut in half. After one pass, there are eight items left to consider. The next time through there will be four, then two, and finally one. How many times will the loop execute? It depends on how many times we can halve the range before running out of data. This table might help you to sort things out:

    List size    Halvings
        1           0
        2           1
        4           2
        8           3
       16           4

Can you see the pattern here? Each extra iteration of the loop doubles the size of the list we can search. If the binary search loops i times, it can find a single value in a list of size 2^i. Each time through the loop, it looks at one value (the middle) in the list. To see how many items are examined in a list of size n, we need to solve the relationship n = 2^i for i. In this formula, i is just an exponent with a base of 2. Using the appropriate logarithm gives us the relationship i = log₂ n. If you are not entirely comfortable with logarithms, just remember that this value is the number of times that a collection of size n can be cut in half.
OK, so what does this bit of math tell us? Binary search is an example of a log time algorithm. The amount of time it takes to solve a given problem grows as the log of the problem size. In the case of binary search, each additional iteration doubles the size of the problem that we can solve.
You might not appreciate just how efficient binary search really is. Let me try to put it in perspective. Suppose you have a New York City phone book with, say, twelve million names listed in alphabetical order. You walk up to a typical New Yorker on the street and make the following proposition (assuming their number is listed): I’m going to try guessing your name. Each time I guess a name, you tell me if your name comes alphabetically before or after the name I guess. How many guesses will you need?
Our analysis above shows the answer to this question is log₂ 12,000,000. If you don’t have a calculator handy, here is a quick way to estimate the result: 2^10 = 1024, or roughly 1000, and 1000 × 1000 = 1,000,000. That means that 2^10 × 2^10 = 2^20 ≈ 1,000,000. That is, 2^20 is approximately one million. So, searching a million items requires only 20 guesses. Continuing on, we need 21 guesses for two million, 22 for four million, 23 for eight million, and 24 guesses to search among sixteen million names. We can figure out the name of a total stranger in New York City using only 24 guesses! By comparison, a linear search would require (on average) 6 million guesses. Binary search is a phenomenally good algorithm!
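If you want to double-check that estimate, the math module can compute the logarithm directly; this little verification is my own, not part of the chapter.

import math

# log base 2 of twelve million is just under 24, so 24 guesses always suffice
print(math.log(12000000, 2))     # about 23.52
print(2 ** 24)                   # 16777216, the first power of 2 past twelve million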
I said earlier that Python uses a linear search algorithm to implement its built-in searching methods. If a binary search is so much better, why doesn’t Python use it? The reason is that the binary search is less general; in order to work, the list must be in order. If you want to use binary search on an unordered list, the first thing you have to do is put it in order or sort it. This is another well-studied problem in computer science, and one that we should look at. Before we turn to sorting, however, we need to generalize the algorithm design technique that we used to develop the binary search.

13.2 Recursive Problem-Solving
Remember that the basic idea behind the binary search algorithm was to successively divide the problem in half. This is sometimes referred to as a “divide and conquer” approach to algorithm design, and it often leads to very efficient algorithms.
One interesting aspect of divide and conquer algorithms is that the original problem divides into subproblems that are just smaller versions of the original. To see what I mean, think about the binary search again. Initially, the range to search is the entire list. Our first step is to look at the middle item in the list. Should the middle item turn out to be the target, then we are finished. If it is not the target, we continue by performing binary search on either the top half or the bottom half of the list.
Using this insight, we might express the binary search algorithm in another way.

Algorithm: binarySearch -- search for x in range nums[low] to nums[high]

    mid = (low + high) / 2
    if low > high
        x is not in nums
    elif x < nums[mid]
        perform binary search for x in range nums[low] to nums[mid-1]
    else
        perform binary search for x in range nums[mid+1] to nums[high]
Rather than using a loop, this definition of the binary search seems to refer to itself. What is going on here? Can we actually make sense of such a thing?

13.2.1 Recursive Definitions

A description of something that refers to itself is called a recursive definition. In our last formulation, the binary search algorithm makes use of its own description. A “call” to binary search “recurs” inside of the definition—hence, the label recursive definition.
At first glance, you might think recursive definitions are just nonsense. Surely you have had a teacher who insisted that you can’t use a word inside of its own definition? That’s called a circular definition and is usually not worth much credit on an exam.
In mathematics, however, certain recursive definitions are used all the time. As long as we exercise some care in the formulation and use of recursive definitions, they can be quite handy and surprisingly powerful. Let’s look at a simple example to gain some insight and then apply those ideas to binary search.
The classic recursive example in mathematics is the definition of factorial. Back in Chapter 3, we defined the factorial of a value like this:

n! = n(n-1)(n-2)···(1)

For example, we can compute

5! = 5(4)(3)(2)(1)

Recall that we implemented a program to compute factorials using a simple loop that accumulates the factorial product.
Looking at the calculation of 5!, you will notice something interesting. If we remove the 5 from the front, what remains is a calculation of 4!. In general, n! = n(n-1)!. In fact, this relation gives us another way of expressing what is meant by factorial in general. Here is a recursive definition:

n! = 1           if n = 0
n! = n(n-1)!     otherwise

This definition says that the factorial of 0 is, by definition, 1, while the factorial of any other number is defined to be that number times the factorial of one less than that number.
Even though this definition is recursive, it is not circular. In fact, it provides a very simple method of calculating a factorial. Consider the value of 4!. By definition we have
4! = 4(4-1)! = 4(3!)

But what is 3!? To find out, we apply the definition again.

4! = 4(3!)
   = 4[3(3-1)!]
   = 4[3(2!)]

Now, of course, we have to expand 2!, which requires 1!, which requires 0!. Since 0! is simply 1, that’s the end of it.

4! = 4(3!) = 4(3)(2!) = 4(3)(2)(1!) = 4(3)(2)(1)(0!) = 4(3)(2)(1)(1) = 24

You can see that the recursive definition is not circular because each application causes us to request the factorial of a smaller number. Eventually we get down to 0, which doesn’t require another application of the definition. This is called a base case for the recursion. When the recursion bottoms out, we get a closed expression that can be directly computed. All good recursive definitions have these key characteristics:

1. There are one or more base cases for which no recursion is required.

2. When the definition is recursively applied, it is always applied to a smaller case.

3. All chains of recursion eventually end up at one of the base cases.

13.2.2 Recursive Functions

You already know that the factorial can be computed using a loop with an accumulator. That implementation has a natural correspondence to the original definition of factorial. Can we also implement a version of factorial that follows the recursive definition?
If we write factorial as a separate function, the recursive definition translates directly into code.

def fact(n):
    if n == 0:
        return 1L
    else:
        return n * fact(n-1)

Do you see how the definition that refers to itself turns into a function that calls itself? The function first checks to see if we are at the base case n == 0 and, if so, returns 1 (note the use of a long int constant since factorials grow rapidly). If we are not yet at the base case, the function returns the result of multiplying n by the factorial of n-1. The latter is calculated by a recursive call to fact(n-1).
I think you will agree that this is a reasonable translation of the recursive definition. The really cool part is that it actually works! We can use this recursive function to compute factorial values.

>>> from recfact import fact
>>> fact(4)
24
>>> fact(10)
3628800

Some beginning programmers are surprised by this result, but it follows naturally from the semantics for functions that we discussed way back in Chapter 6. Remember that each call to a function starts that function anew. That means it gets its own copy of any local values, including the values of the parameters. Figure 13.1 shows the sequence of recursive calls that computes 2!. Note especially how each return value is multiplied by a value of n appropriate for each function invocation. The values of n are stored on the way down the chain and then used on the way back up as the function calls return.
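One way to watch this happen is to add print statements to fact; the instrumented version below is my own illustration (under the made-up name factTrace) of the call-and-return sequence that the figure depicts.

def factTrace(n, depth=0):
    # same logic as fact, but report each call and each return, indented by depth
    print("  " * depth + "fact(" + str(n) + ") called")
    if n == 0:
        result = 1
    else:
        result = n * factTrace(n - 1, depth + 1)
    print("  " * depth + "fact(" + str(n) + ") returns " + str(result))
    return result

# factTrace(2) shows fact(2) waiting on fact(1), which waits on fact(0);
# the returns then come back up the chain as 1, 1, and finally 2.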

Figure 13.1: Recursive computation of 2!

13.2.3 Recursive Search
Now that we have a technique for implementing recursive definitions, we are ready to go back and look again at binary search as a recursive process. The basic idea was to look at the middle value and then recursively search either the lower half or the upper half of the array. The base cases for the recursion are the conditions when we can stop, namely when the target value is found, or we run out of places to look. The recursive calls will cut the size of the problem in half each time. In order to do this, we need to specify the range of locations in the list that are still “in play” for each recursive call. We can do this by passing the values of low and high along with the list. Each invocation will search the list between the low and high indexes.
Here is a direct implementation of the recursive algorithm using these ideas:

def recBinSearch(x, nums, low, high):
    if low > high:
        # No place left to look, return -1
        return -1
    mid = (low + high) / 2
    item = nums[mid]
    if item == x:
        # Found it! Return the index
        return mid
    elif x < item:
        # Look in lower half
        return recBinSearch(x, nums, low, mid-1)
    else:
        # Look in upper half
        return recBinSearch(x, nums, mid+1, high)

We can then implement our original search function using a suitable call to the recursive binary search, telling it to start the search between 0 and len(nums)-1:

def search(x, nums):
    return recBinSearch(x, nums, 0, len(nums)-1)

Of course, as in the case of factorial, we already implemented this algorithm using a loop, and there is no compelling reason to use a recursive implementation. In fact, the looping version is probably a bit faster because calling functions is generally slower than iterating a loop. The recursive version, however, makes the divide-and-conquer structure of binary search much more obvious. Below, we will see examples where recursive divide-and-conquer approaches provide a natural solution to some problems where loops are awkward.

13.3 Sorting Algorithms
The sorting problem provides a nice testbed for the algorithm design techniques we have been discussing. Recall, the basic sorting problem is to take a list and rearrange it so that the values are in increasing (actually, nondecreasing) order.
13.3.1 Naive Sorting: Selection Sort
Let’s start with a simple “be the computer” approach to sorting. Suppose you have a stack of index cards, each with a number on it. The stack has been shuffled, and you need to put the cards back in order. How would you accomplish this task?
There are any number of good systematic approaches. One simple method is to look through the deck to find the smallest value and then place that value at the front of the stack (or perhaps in a separate stack). Then you could go through and find the smallest of the remaining cards and put it next in line, etc. Of course, this means that you’ll also need an algorithm for finding the smallest remaining value. You can use the same approach we used for finding the max of a list (see Chapter 6). As you go through, you keep track of the smallest value seen so far, updating that value whenever you find a smaller one.
The algorithm I just described is called selection sort. Basically, the algorithm consists of a loop and each time through the loop, we select the smallest of the remaining elements and move it into its proper position. Applying this idea to a list, we proceed by finding the smallest value in the list and putting it into the 0th position. Then we find the smallest remaining value (from positions 1–(n-1)) and put it in position 1. Next the smallest value from positions 2–(n-1) goes into position 2, etc. When we get to the end of the list, everything will be in its proper place.
There is one subtlety in implementing this algorithm. When we place a value into its proper position, we need to make sure that we do not accidentally lose the value that was originally stored in that position. For example, if the smallest item is in position 10, moving it into position 0 involves an assignment.

nums[0] = nums[10]
But this wipes out the value currently in nums[0]; it really needs to be moved to another location in the list. A simple way to save the value is to swap it with the one that we are moving. Using simultaneous assignment, the statement
nums[0], nums[10] = nums[10], nums[0]
places the value from position 10 at the front of the list, but preserves the original first value by stashing it into location 10.
Using this idea, it is a simple matter to write a selection sort in Python. I will use a variable called bottom to keep track of which position in the list we are currently filling, and the variable mp will be used to track the location of the smallest remaining value. The comments in this code explain this implementation of selection sort:
def selSort(nums):
    # sort nums into ascending order
    n = len(nums)

    # For each position in the list (except the very last)
    for bottom in range(n-1):
        # find the smallest item in nums[bottom]..nums[n-1]
        mp = bottom                      # initially bottom is smallest so far
        for i in range(bottom+1, n):     # look at each position
            if nums[i] < nums[mp]:       # this one is smaller
                mp = i                   #   remember its index

        # swap smallest item to the bottom
        nums[bottom], nums[mp] = nums[mp], nums[bottom]
One thing to notice about this algorithm is the accumulator for finding the minimum value. Rather than actually storing the minimum seen so far, mp just remembers the position of the minimum. A new value is tested by comparing the item in position i to the item in position mp. You should also notice that bottom stops at the second to last item in the list. Once all of the items up to the last have been put in the proper place, the last item has to be the largest, so there is no need to bother looking at it.
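As a quick check (my own interactive session, not from the text), the function sorts a short list in place:

>>> nums = [3, 1, 4, 1, 5, 9, 2, 6]
>>> selSort(nums)
>>> nums
[1, 1, 2, 3, 4, 5, 6, 9]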
The selection sort algorithm is easy to write and works well for moderate-sized lists, but it is not a very efficient sorting algorithm. We’ll come back and analyze it after we’ve developed another algorithm.

13.3.2 Divide and Conquer: Merge Sort
As discussed above, one technique that often works for developing efficient algorithms is the divide-and-conquer approach. Suppose a friend and I were working together trying to put our deck of cards in order. We could divide the problem up by splitting the deck of cards in half with one of us sorting each of the halves. Then we just need to figure out a way of combining the two sorted stacks.
The process of combining two sorted lists into a single sorted result is called merging. If you think about it, merging is pretty simple. Since our two stacks are sorted, each has its smallest value on top. Whichever of the top values is the smallest will be the first item in the merged list. Once the smaller value is removed, we can look at the tops of the stacks again, and whichever top card is smaller will be the next item in the list. We just continue this process of placing the smaller of the two top values into the big list until one of the stacks runs out. At that point, we finish out the list with the cards from the remaining stack.
Here is a Python implementation of the merge process. In this code, lst1 and lst2 are the smaller lists and lst3 is the larger list where the results are placed. In order for the merging process to work, the length of lst3 must be equal to the sum of the lengths of lst1 and lst2. You should be able to follow this code by studying the accompanying comments:

def merge(lst1, lst2, lst3):
    # merge sorted lists lst1 and lst2 into lst3

    # these indexes keep track of current position in each list
    i1, i2, i3 = 0, 0, 0           # all start at the front
    n1, n2 = len(lst1), len(lst2)

    # Loop while both lst1 and lst2 have more items
    while i1 < n1 and i2 < n2:
        if lst1[i1] < lst2[i2]:    # top of lst1 is smaller
            lst3[i3] = lst1[i1]    #   copy it into current spot in lst3
            i1 = i1 + 1
        else:                      # top of lst2 is smaller
            lst3[i3] = lst2[i2]    #   copy it into current spot in lst3
            i2 = i2 + 1
        i3 = i3 + 1                # item added to lst3, update position

    # Here either lst1 or lst2 is done. One of the following loops will
    # execute to finish up the merge.

    # Copy remaining items (if any) from lst1
    while i1 < n1:
        lst3[i3] = lst1[i1]
        i1 = i1 + 1
        i3 = i3 + 1

    # Copy remaining items (if any) from lst2
    while i2 < n2:
        lst3[i3] = lst2[i2]
        i2 = i2 + 1
        i3 = i3 + 1

With this merging algorithm in hand, it’s easy to see the general structure for a divide-and-conquer sorting algorithm.

Algorithm: mergeSort nums

    split nums into two halves
    sort the first half
    sort the second half
    merge the two sorted halves back into nums
Looking at the steps in this algorithm, the first and last parts look easy. We can use slicing to split the list, and we can use the merge function that we just wrote to put the pieces back together. But how do we sort the two halves?
Well, let’s think about it. We are trying to sort a list, and our algorithm requires us to sort two smaller lists. This sounds like a perfect place to use recursion. Maybe we can use mergeSort itself to sort the two lists. Let’s go back to our recursion guidelines and see if we can develop a proper recursive algorithm.
In order for recursion to work, we need to find at least one base case that does not require a recursive call, and we also have to make sure that recursive calls are always made on smaller versions of the original problem. The recursion in our mergeSort will always occur on a list that is half as large as the original, so the latter property is automatically met. Eventually, our lists will be very small, containing only a single item. Fortunately, a list with just one item is already sorted! Voilà, we have a base case. When the length of the list is less than 2, we do nothing, leaving the list unchanged.
Given our analysis, we can update the mergeSort algorithm to make it properly recursive.

if len(nums) > 1:
    split nums into two halves
    mergeSort the first half
    mergeSort the second half
    merge the two sorted halves back into nums
We can translate this algorithm directly into Python code.
def mergeSort(nums):
    # Put items of nums in ascending order
    n = len(nums)
    # Do nothing if nums contains 0 or 1 items
    if n > 1:
        # split into two sublists
        m = n / 2
        nums1, nums2 = nums[:m], nums[m:]
        # recursively sort each piece
        mergeSort(nums1)
        mergeSort(nums2)
        # merge the sorted pieces back into original list
        merge(nums1, nums2, nums)
I know this use of recursion may still seem a bit mysterious to you. You might try tracing this algorithm with a small list (say eight elements), just to convince yourself that it really works. In general, though, tracing through recursive algorithms can be tedious and often not very enlightening.
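For a quick sanity check (again, my own session rather than the book’s), an eight-element list comes back sorted:

>>> nums = [9, 4, 8, 1, 7, 2, 6, 3]
>>> mergeSort(nums)
>>> nums
[1, 2, 3, 4, 6, 7, 8, 9]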
Recursion is closely related to mathematical induction, and it requires something of a leap of faith before it becomes comfortable. As long as you follow the rules and make sure that every recursive chain of calls eventually reaches a base case, your algorithms will work. You just have to trust and let go of the grungy details. Let Python worry about that for you!

13.3.3 Comparing Sorts
Now that we have developed two sorting algorithms, which one should we use? Before we actually try them out, let’s do some analysis. As in the searching problem, the difficulty of sorting a list depends on the size of the list. We need to figure out how many steps each of our sorting algorithms requires as a function of the size of the list to be sorted.
Take a look back at the algorithm for selection sort. Remember, this algorithm works by first finding the smallest item, then finding the smallest of the remaining elements, and so on. Suppose we start with a list of size n. In order to find the smallest value, the algorithm has to inspect each of the n items. The next time around the outer loop, it has to find the smallest of the remaining n-1 items. The third time around, there are n-2 items of interest. This process continues until there is only one item left to place. Thus, the total number of iterations of the inner loop for the selection sort can be computed as the sum of a decreasing sequence.

n + (n-1) + (n-2) + (n-3) + ··· + 1

In other words, the time required by selection sort to sort a list of n items is proportional to the sum of the first n whole numbers. There is a well-known formula for this sum, but even if you do not know the formula, it is easy to derive. If you add the first and last numbers in the series you get n + 1. Adding the second and second to last values gives (n-1) + 2 = n + 1. If you keep pairing up the values working from the outside in, all of the pairs add to n + 1. Since there are n numbers, there must be n/2 pairs. That means the sum of all the pairs is n(n+1)/2.

You can see that the final formula contains an n² term. That means that the number of steps in the algorithm is proportional to the square of the size of the list. If the size of the list doubles, the number of steps quadruples. If the size triples, it will take nine times as long to finish. Computer scientists call this a quadratic or n-squared algorithm.
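The formula makes the quadratic growth easy to check numerically; this small calculation (with the invented name stepCount) is my own illustration, not part of the text.

def stepCount(n):
    # sum of the first n whole numbers, per the n(n+1)/2 formula derived above
    return n * (n + 1) // 2

print(stepCount(1000))     # 500500
print(stepCount(2000))     # 2001000 -- doubling the list roughly quadruples the work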
