DATA STRUCTURES AND ALGORITHMS ALFRED V AHO PDF


Alfred V. Aho, Bell Laboratories, Murray Hill, New Jersey. This book presents the data structures and algorithms that underpin much of today's computer programming. It appears as recommended reading in courses such as Alfred Strohmeier's Algorithms and Data Structures (EPFL), which covers sorting, data structures, trees, graphs, and the analysis of algorithms.




The authors' treatment of data structures in Data Structures and Algorithms is unified by an informal notion of "abstract data types," allowing readers to compare implementations. Recommended reading: Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman, Data Structures and Algorithms, Addison-Wesley.

Let the input be partitioned into the groups G1, G2, ..., Gn/5. Assume without loss of generality that every group has exactly 5 elements. There are n/10 groups whose medians are less than M. In each such group there are at least three elements that are less than M.

Therefore, there are at least (3/10)n input elements that are less than M.

This in turn means that the size of X2 can be at most (7/10)n. Similarly, we can also show that the size of X1 is no more than (7/10)n. Thus we can complete the selection algorithm by performing an appropriate selection in either X1 or X2, recursively, depending on whether the element to be selected is in X1 or X2, respectively.

It then takes T(n/5) time to identify the median of medians M. The total run time T(n) of the selection algorithm thus satisfies the recurrence T(n) <= T(n/5) + T(7n/10) + O(n), which solves to T(n) = O(n); this can be proved by induction (Theorem 5).
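
As an illustration, here is a minimal Python sketch of this selection algorithm; the function name and the base-case cutoff of 25 are choices made here, not part of the original text.

    def select(a, k):
        # Return the k-th smallest element (1-indexed) of the list a.
        if len(a) <= 25:
            return sorted(a)[k - 1]     # small inputs: sort directly
        # Partition into groups of 5 and collect the group medians.
        groups = [a[i:i + 5] for i in range(0, len(a), 5)]
        medians = [sorted(g)[len(g) // 2] for g in groups]
        M = select(medians, (len(medians) + 1) // 2)    # median of medians
        X1 = [x for x in a if x < M]    # elements smaller than M
        X2 = [x for x in a if x > M]    # elements larger than M
        if k <= len(X1):
            return select(X1, k)        # answer lies in X1
        if k > len(a) - len(X2):
            # answer lies in X2; adjust the rank accordingly
            return select(X2, k - (len(a) - len(X2)))
        return M                        # the answer is M itself

    # Example: select([7, 2, 9, 4, 6], 3) returns 6.

The two recursive calls mirror the recurrence T(n) <= T(n/5) + T(7n/10) + O(n) stated above.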

Three different measures of the run time of an algorithm can be conceived of: the best case, the worst case, and the average case. Typically, the average case run time of an algorithm is much smaller than the worst case. While computing the average case run time, one assumes a distribution (e.g., uniform) on the input space; if this distribution assumption does not hold, then the average case analysis may not be valid.

Is it possible to achieve the average case run time without making any assumptions on the input space? Randomized algorithms answer this question in the affirmative. They make no assumptions on the inputs: the analysis of randomized algorithms is valid for all possible inputs.

Randomized algorithms obtain such performance by introducing randomness into the algorithms themselves: coin flips are made to make certain decisions.

A randomized algorithm with one possible sequence of outcomes for the coin flips can be thought of as being different from the same algorithm with a different sequence of outcomes for the coin flips. Thus a randomized algorithm can be viewed as a family of algorithms. It should be ensured that, for any input, the number of algorithms in the family that perform badly on that input is only a small fraction of the total number of algorithms; if so, then on any input the algorithm performs well with high probability. Realize that this probability is independent of the input distribution.

Good performance could mean that the algorithm outputs the correct answer, that its run time is small, and so on; different types of randomized algorithms can be conceived of depending on the interpretation. A Las Vegas algorithm is a randomized algorithm that always outputs the correct answer but whose run time is a random variable (possibly with a small mean). A Monte Carlo algorithm is a randomized algorithm that has a predetermined run time but whose output may be incorrect occasionally.

We can modify asymptotic functions such as O(.) to apply to randomized algorithms: a randomized algorithm is said to use Õ(f(n)) amount of a resource (time, space, etc.) if there exists a constant c such that the amount of resource used is no more than cαf(n) with probability at least 1 − n^(−α) on any input, for any α ≥ 1.

Illustrative Examples

We provide two examples of randomized algorithms: the first is a Las Vegas algorithm and the second is a Monte Carlo algorithm. For the first example, the input is an array a[] of n numbers in which n/2 of the entries are copies of a single element and the remaining entries are distinct; the problem is to identify the repeated element. Any deterministic algorithm must, in the worst case, examine more than half of the array. This fact can be proven as follows: let the input be chosen by an adversary who has perfect knowledge about the algorithm used.

The adversary can answer the algorithm's probes in such a way that the first n/2 elements examined are all distinct. In other words, the algorithm has to examine at least one more element, and hence the claim follows. We can design a simple O(n) time deterministic algorithm for this problem: partition the input into parts of three elements each, and then search the individual parts for the repeated element. Since the repeated element accounts for half of the array, clearly at least one of the parts will have at least two copies of the repeated element.

Now we present a simple and elegant Las Vegas algorithm that takes only O(log n) time with high probability. This algorithm proceeds in stages. In any stage, two random numbers i and j are picked from the range [1, n]. These numbers are picked independently and with replacement; as a result, there is a chance that the two are the same. The algorithm then checks whether i ≠ j and a[i] = a[j].

If so, the repeated element has been found. If not, the next stage is entered. We repeat the stages as many times as it takes to arrive at the correct answer. In any given stage, the probability that i and j index two distinct copies of the repeated element is (n/2)((n/2) − 1)/n², which is close to 1/4; hence the number of stages is O(log n) with high probability (Lemma 6). Since each stage takes O(1) time, the run time of the algorithm is O(log n).
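
Here is a minimal Python sketch of this Las Vegas algorithm; it assumes the promise stated above (n/2 copies of one element, the rest distinct), without which the loop would never terminate.

    import random

    def find_repeated(a):
        # Las Vegas: the answer is always correct, but the run time
        # is a random variable.
        n = len(a)
        while True:                       # one iteration per stage
            i = random.randrange(n)       # i and j are picked independently,
            j = random.randrange(n)       # with replacement
            if i != j and a[i] == a[j]:
                return a[i]               # the repeated element

Each stage succeeds with probability close to 1/4, so the expected number of stages is constant and the tail probability decays geometrically.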

Here also the input is an array a[] of n numbers, and the problem now is to find an element of the array that is greater than the median. We can assume, without loss of generality, that the array numbers are distinct and that n is even. The algorithm picks a random sample S of α log n elements from a[], for some fixed α ≥ 1; this sample is picked with replacement. Find and output the maximum element of S. The claim is that the output of this algorithm is correct with high probability: the output is incorrect only if every element of S falls among the n/2 smallest elements of the array, and this happens with probability (1/2)^(α log n), i.e., n^(−α) when logarithms are taken to the base 2.
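
A minimal Python sketch of this Monte Carlo algorithm follows; the parameter alpha and the sample-size formula are the assumptions just described.

    import math
    import random

    def above_median(a, alpha=2):
        # Monte Carlo: fixed run time, small probability of error.
        n = len(a)
        s = max(1, math.ceil(alpha * math.log2(n)))    # sample size
        sample = [random.choice(a) for _ in range(s)]  # with replacement
        return max(sample)   # wrong only if every pick lands in the
                             # smaller half: probability about n ** -alpha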

The basic idea of parallel computing is to partition the given problem into several subproblems, assign a subproblem to each processor, and put together the partial solutions obtained by the individual processors.

If P processors are used to solve a problem, then there is a potential of reducing the run time by a factor of up to P. If S is the best known sequential run time (i.e., the run time of the best known sequential algorithm) and T is the run time of a parallel algorithm that uses P processors, then PT ≥ S. If not, we could simulate the parallel algorithm using a single processor and get a run time better than S, which would be a contradiction. PT is referred to as the work done by the parallel algorithm. In this section we provide a brief introduction to parallel algorithms. In the RAM model, we assume that each of the basic scalar binary operations, such as addition, multiplication, etc., takes one unit of time.

We have assumed this model in our discussion thus far.

In contrast, there exist many well-accepted parallel models of computing. In any such parallel model an individual processor can still be thought of as a RAM.

Variations among different architectures arise in the ways they implement interprocessor communication. In this article we categorize parallel models into shared memory models and fixed connection machines. A shared memory model, also called the Parallel Random Access Machine (PRAM), is a collection of RAMs working in synchrony, where communication takes place with the help of a common block of global memory.

If processor i has to communicate with processor j, it can do so by writing a message in memory cell j, which can then be read by processor j. Conflicts for global memory access could arise. Depending on how these conflicts are resolved, a PRAM can further be classified into three: the Exclusive Read Exclusive Write (EREW) PRAM, the Concurrent Read Exclusive Write (CREW) PRAM, and the Concurrent Read Concurrent Write (CRCW) PRAM.

In the case of a CRCW PRAM, we need an additional mechanism for handling write conflicts, since the processors trying to write at the same time in the same cell may have different data to write and a decision has to be made as to which data gets written.

Concurrent reads do not pose such problems, since the data read by different processors will be the same. A fixed connection machine can be represented as a directed graph whose nodes represent processors and whose edges represent communication links. If there is an edge connecting two processors, they can communicate in one unit of time.

If two processors not connected by an edge want to communicate they can do so by sending a message along a path that connects the two processors. We can think of each processor in a fixed connection machine as a RAM.

Examples of fixed connection machines are the mesh, the hypercube, the star graph, etc. Our discussion on parallel algorithms is confined to PRAMs owing to their simplicity. As a first example, consider computing the boolean OR of n given bits on a CRCW PRAM with n processors. The input bits are stored in common memory, one bit per cell. Every processor is assigned an input bit. We employ a common memory cell M that is initialized to zero.

All the processors whose bits are ones try to write a one in M in one parallel write step; the write conflict can be resolved arbitrarily, since every writing processor writes the same value. The result is ready in M after this write step, so the boolean OR is computed in O(1) time. Using a similar algorithm, we can also compute the boolean AND of n bits in O(1) time. The three PRAM variants form the sequence EREW, CREW, CRCW; Lemma 7 states that any model in the sequence is strictly less powerful than any to its right, and strictly more powerful than any to its left. As a second example, consider finding the maximum of n keys in O(1) time on a CRCW PRAM with n² processors. Partition the processors into n groups G1, G2, ..., Gn so that there are n processors in each group.

Let the input be k1, k2, ..., kn. Group Gi is assigned the key ki.


Gi is in charge of checking whether ki is the maximum. In one parallel step, the processors of group Gi compare ki with every input key: processor j of Gi computes the bit bij, which is 1 if ki ≥ kj and 0 otherwise. The bits bi1, bi2, ..., bin are then combined using the O(1)-time boolean AND algorithm, so this can be done in O(1) time. If Gi computes a one in this step, then one of the processors in Gi outputs ki as the answer.
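
The following sequential Python sketch simulates this O(1)-time algorithm; each loop iteration stands for work the PRAM performs in parallel, and the function name is illustrative.

    def pram_max(keys):
        # Simulates n groups of n processors each: group i checks key i.
        n = len(keys)
        for i in range(n):
            # Processor j of group i computes b_ij = (keys[i] >= keys[j]);
            # the group then ANDs the n bits together.
            if all(keys[i] >= keys[j] for j in range(n)):
                return keys[i]          # k_i is the maximum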

Prefix computation is as basic to parallel computing as any arithmetic operation is to sequential computing. Given a sequence of n elements k1, k2, ..., kn and an associative binary operation (say addition), the problem is to compute the n partial results k1, k1 + k2, ..., k1 + k2 + ... + kn. The results themselves are called prefix sums. We can use the following algorithm.

If n = 1, the problem is trivial. If not, the input elements are partitioned into two halves. Solve the prefix computation problem on each half recursively, assigning n/2 processors to each half. The prefix sums of the second half must then each be modified by adding to them the last prefix sum of the first half; this modification can be done in O(1) time using n/2 processors.

Let T(n) be the time needed to perform prefix computation on n elements using n processors. The above algorithm gives T(n) = T(n/2) + O(1), which solves to T(n) = O(log n). The number of processors can be reduced to n/log n as follows. Each processor is assigned log n input elements. Let xi1, xi2, ..., xi,log n be the elements assigned to processor i. Each processor first computes the prefix sums of its own elements sequentially; this takes O(log n) time. A prefix computation is then performed over the n/log n block totals using the algorithm above, after which each processor adds the total of all preceding blocks to each of its local prefix sums. This also takes O(log n) time.
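
Here is a minimal sequential Python sketch of the recursive scheme just described; the two recursive calls model the two halves that the PRAM would process in parallel, and op is the assumed associative operation.

    from operator import add

    def prefix(keys, op=add):
        n = len(keys)
        if n == 1:
            return list(keys)                 # base case
        left = prefix(keys[:n // 2], op)      # first half, recursively
        right = prefix(keys[n // 2:], op)     # second half, recursively
        # Modify the second half with the last prefix of the first half
        # (a single O(1) parallel step on the PRAM).
        return left + [op(left[-1], x) for x in right]

    # Example: prefix([1, 2, 3, 4]) returns [1, 3, 6, 10].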

In all the parallel algorithms we have seen so far, we have assumed that the number of processors is a function of the input size. But the machines available in the market may not have that many processors. Fortunately, we can simulate these algorithms on a parallel machine with fewer processors while preserving the asymptotic work done: an algorithm A that solves a given problem in time T using P processors can be simulated on a machine with P' < P processors in time O(PT/P'), so the work O(PT) is unchanged.

A discussion of standard data structures such as red-black trees can be found in algorithms texts as well.

There exist numerous wonderful texts on algorithms as well. The technique of randomization was popularized by Rabin [19]. One of the problems considered in [19] was primality testing. In an independent work, at around the same time, Solovay and Strassen [25] presented a randomized algorithm for primality testing.

The idea of randomization itself had been employed in Monte Carlo simulations a long time before.

The sorting algorithm of Frazer and McKellar [6] is also one of the early works on randomization.

As an example of the role algorithms play in problem solving, consider the design of a traffic light for a complex intersection of several roads, where each legal maneuver is a turn such as AB (from road A to road B). Some pairs of turns, like AB and EC, can be carried out simultaneously, while others, like AD and EB, cause lines of traffic to cross and therefore cannot be carried out simultaneously.

The light at the intersection must permit turns in such an order that AD and EB are never permitted at the same time, while the light might permit AB and EC to be made simultaneously. [Figure: An intersection.] We can model this problem with a mathematical structure known as a graph.

A graph consists of a set of points called vertices, and lines connecting the points, called edges.

For the traffic intersection problem we can draw a graph whose vertices represent turns and whose edges connect pairs of vertices whose turns cannot be performed simultaneously. Such a graph can be drawn for the intersection just described. The graph can aid us in solving the traffic light design problem. A coloring of a graph is an assignment of a color to each vertex of the graph so that no two vertices connected by an edge have the same color. It is not hard to see that our problem is one of coloring the graph of incompatible turns using as few colors as possible.

The problem of coloring graphs has been studied for many decades, and the theory of algorithms tells us a lot about this problem.

Unfortunately, coloring an arbitrary graph with as few colors as possible is one of a large class of problems called "NP-complete problems," for which all known solutions are essentially of the type "try all possibilities": in our case, try all assignments of colors. With care, we can be a little speedier than this, but it is generally believed that no algorithm to solve this problem can be substantially more efficient than this most obvious approach.

We are now confronted with the possibility that finding an optimal solution for the problem at hand is computationally very expensive. We can adopt one of three approaches. [Figure: Graph showing incompatible turns. Table of incompatible turns.] If the graph is small, we might attempt to find an optimal solution exhaustively, trying all possibilities.

This approach, however, becomes prohibitively expensive for large graphs, no matter how efficient we try to make the program. A second approach would be to look for additional information about the problem at hand.

It may turn out that the graph has some special properties which make it unnecessary to try all possibilities in finding an optimal solution. The third approach is to change the problem a little and look for a good but not necessarily optimal solution. We might be happy with a solution that gets close to the minimum number of colors on small graphs, and works quickly, since most intersections are not even as complex as the one considered here.

An algorithm that quickly produces good but not necessarily optimal solutions is called a heuristic. One reasonable heuristic for graph coloring is the following "greedy" algorithm. Initially we try to color as many vertices as possible with the first color, then as many as possible of the uncolored vertices with the second color, and so on.

To color vertices with a new color, we perform the following steps. Select some uncolored vertex and color it with the new color.

Scan the list of uncolored vertices. For each uncolored vertex, determine whether it has an edge to any vertex already colored with the new color. If there is no such edge, color the present vertex with the new color.

This approach is called "greedy" because it colors a vertex whenever it can, without considering the potential drawbacks inherent in making such a move.
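
A minimal Python sketch of this greedy heuristic follows; the adjacency-dictionary representation and the function name are choices made here, not part of the original text.

    def greedy_coloring(graph):
        # graph maps each vertex to the set of its neighbors.
        color = {}
        uncolored = set(graph)
        c = 0
        while uncolored:
            # One pass per color: color every uncolored vertex that has
            # no neighbor already colored with the current color c.
            for v in list(uncolored):
                if all(color.get(u) != c for u in graph[v]):
                    color[v] = c
                    uncolored.remove(v)
            c += 1                      # move on to a new color
        return color

    # Example: a triangle requires three colors.
    # greedy_coloring({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}) uses colors 0, 1, 2.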

Even then, the operations will take only O(1) time. We can also implement a dictionary or a priority queue using an array or a linked list. For example, consider the implementation of a dictionary using an array.

At any given time, if there are n elements in the data structure, these elements will be stored in a[1 : n]. To Search for a given x, we scan through the elements of a[] until we either find a match or realize the absence of x. In the worst case this operation takes O(n) time. To Delete the element x, we first Search for it in a[].

If x is not in a[], we report so and quit. If it is, say at position i, we can, for instance, move a[n] into position i and decrement n. Thus the Delete operation takes O(n) time.

It is also easy to see that a priority queue can be realized using an array such that each of the three operations takes O(n) time. The same can be done using a linked list as well.
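
As a concrete illustration, here is a small Python sketch of the array-based dictionary just described; the O(1) Insert (placing the new element at the end) is an assumption filled in here, as the text only details Search and Delete.

    class ArrayDictionary:
        # Elements are kept, unordered, in a Python list.
        def __init__(self):
            self.a = []

        def insert(self, x):
            self.a.append(x)            # place x at the end: O(1)

        def search(self, x):
            return x in self.a          # linear scan: O(n) worst case

        def delete(self, x):
            if x not in self.a:
                return                  # report absence and quit
            i = self.a.index(x)         # Search first: O(n)
            self.a[i] = self.a[-1]      # move the last element into
            self.a.pop()                # x's slot and shrink the array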

A binary tree is a set of nodes that is either empty or has a node called the root and two disjoint binary trees that are called the left and right children of the root. These children are also called the left and right subtrees, respectively. We store some data at each node of a binary tree. Figure 1 shows examples of binary trees.


Each node has a label associated with it. We might use the data stored at any node itself as its label. For example, in Figure 1(a), 5 is the root. In Figure 1(b), 11 is the root.


The subtree containing the nodes 5, 12, and 8 is the left subtree of 11, etc. We can also define the parent relationship in the usual manner. For example, in the tree of Figure 1(a), 5 is the parent of 8, 8 is the parent of 3, etc.

A tree node is called a leaf if it does not have any children. The nodes 8 and 9 are leaves in the tree of Figure 1(b). The level of the root is defined to be 1. In the tree of Figure 1(b), the level of 3 and 5 is 2; the level of 12 and 1 is 3; and the level of 8 and 9 is 4. The height of a tree is defined to be the maximum level of any node in the tree.

The trees of Figure 1 have a height of 4. A binary search tree is a binary tree such that the data (or key) stored at any node is greater than any key in its left subtree and smaller than any key in its right subtree. The trees in Figure 1 are not binary search trees since, for example, in the tree of Figure 1(a), the right subtree of node 8 has a key 3 that is smaller than 8.

Figure 2 has an example of a binary search tree. [Figure 2: Example of a binary search tree, with root 12, left subtree containing 9 and 7, and right subtree containing 25, 17, 30, and 28.] We can verify that the tree of Figure 2 is a binary search tree by considering each node of the tree and its subtrees. For the node 12, the keys in its left subtree are 9 and 7, which are smaller; the keys in its right subtree are 25, 17, 30, and 28, which are all greater than 12. Node 25 has 17 in its left subtree and 30 and 28 in its right subtree, and so on.

We can implement both a dictionary and a priority queue using binary search trees. To Search for a given element x, we compare x with the key at the root y.

If x equals y, the search terminates successfully; otherwise it continues in the left subtree (if x is smaller than y) or in the right subtree (if x is larger). Thus, after making one comparison, the searching problem reduces to searching either the left or the right subtree, i.e., a tree of smaller height. The total time taken by this search algorithm is O(h), where h is the height of the tree. In order to Insert a given element x into a binary search tree, we first search for x in the tree. If x is already in the tree, we can quit. If not, the search will terminate in a leaf y such that x can be inserted as a child of y.

Look at the binary search tree of Figure 2. Say we want to insert 19. The Search algorithm begins by comparing 19 with 12, realizing that it should proceed to the right subtree. Next, 19 and 25 are compared, to note that the search should proceed to the left subtree. Then 19 is compared with 17, and the search should proceed to the right subtree; but the right subtree is empty. This is where the Search algorithm terminates. The node 17 is y.

We can insert 19 as the right child of 17. Thus we see that we can also process the Insert operation in O(h) time. A Delete operation can also be processed in O(h) time. Let the element to be deleted be x. We first Search for x. If x is not in the tree, we quit.

If x is in the tree, the Search algorithm will return the node in which x is stored. There are three cases to consider: x is a leaf, x has exactly one child, or x has two children.

If x is a leaf, this is an easy case: we just delete x and quit. If x has exactly one child y, let z be the parent of x; we make z the parent of y and delete x. In Figure 2, if we want to delete 9, we can make 12 the parent of 7 and delete 9. If x has two children, there are two ways to handle this case. The first way is to find the largest key y in the left subtree of x, replace the contents of node x with y, and delete node y. Note that the node y can have at most one child. In the tree of Figure 2, say we desire to delete 25. The largest key in the left subtree of 25 is 17 (there is only one node in that left subtree).

We replace 25 with 17 and delete node 17, which happens to be a leaf. The second way to handle this case is to identify the smallest key z in the right subtree of x, replace x with z, and delete node z. In either case, the algorithm takes time O(h). The operation Find-Min can be performed as follows: we start from the root and always go to the left child until we cannot go any further.


The key of the last visited node is the minimum. In the tree of Figure 2, we start from 12, go to 9, and then go to 7; we realize 7 is the minimum. This operation also takes O(h) time.
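
The following Python sketch collects the Search, Insert, and Find-Min procedures described above (Delete, with its three cases, is omitted for brevity); representing nodes as small dictionaries is a choice made here for compactness.

    def bst_search(t, x):
        # Walk down from the root, going left or right after each comparison.
        while t is not None and t['key'] != x:
            t = t['left'] if x < t['key'] else t['right']
        return t                        # node holding x, or None

    def bst_insert(t, x):
        # Insert x where an unsuccessful search for it would end.
        if t is None:
            return {'key': x, 'left': None, 'right': None}
        if x < t['key']:
            t['left'] = bst_insert(t['left'], x)
        elif x > t['key']:
            t['right'] = bst_insert(t['right'], x)
        return t                        # if x is already present, do nothing

    def bst_find_min(t):
        while t['left'] is not None:    # keep going to the left child
            t = t['left']
        return t['key']                 # key of the last visited node

    # The tree of Figure 2: insert 12, 9, 25, 7, 17, 30, 28 in this order.
    root = None
    for k in [12, 9, 25, 7, 17, 30, 28]:
        root = bst_insert(root, k)
    assert bst_find_min(root) == 7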

If we have a binary search tree with n nodes in it, how large can h get? The value of h can be as large as n: consider a tree whose root has the value 1, whose right child has the value 2, whose right child in turn has the value 3, and so on. This tree has a height of n. Thus we realize that in the worst case even the binary search tree may not be better than an array or a linked list. But fortunately, it has been shown that the expected height of a binary search tree with n nodes is only O(log n).

This is based on the assumption that each permutation of the n elements is equally likely to be the order in which the elements get inserted into the tree. Thus we arrive at the following theorem (Theorem 2): in the worst case the operations might take O(n) time each, but their expected time is O(log n). There are a number of other schemes based on binary trees which ensure that the height of the tree does not grow very large. These schemes maintain a tree height of O(log n) at any time and are called balanced tree schemes.

Examples include red-black trees, AVL trees, 2-3 trees, etc. These schemes achieve a worst case run time of O(log n) for each of the operations of our interest; we state the corresponding theorem without proof. We illustrate just one example application. Consider the problem of sorting: given a sequence of n numbers, the problem is to rearrange this sequence in non-decreasing order. This comparison problem has attracted the attention of numerous algorithm designers because of its applicability in many walks of life.

We can use a priority queue to sort. Let the priority queue be empty to begin with. We insert the input keys one at a time into the priority queue. This involves n invocations of the Insert operation and hence takes a total of O(n log n) time.

Following this, we apply Delete-Min n times to read out the keys in sorted order. This takes another O(n log n) time. Thus we have an O(n log n)-time sorting algorithm. This algorithm can be specified as follows.
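
The original specification is not reproduced here; the following Python sketch, using the standard heapq module as the priority queue, is one way to realize the algorithm.

    import heapq

    def pq_sort(keys):
        pq = []
        for k in keys:                  # n Insert operations: O(n log n)
            heapq.heappush(pq, k)
        # n Delete-Min operations, another O(n log n), yield sorted order.
        return [heapq.heappop(pq) for _ in range(len(pq))]

    # Example: pq_sort([5, 1, 4, 2, 3]) returns [1, 2, 3, 4, 5].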

It is natural to describe any algorithm based on divide-and-conquer as a recursive algorithm, i.e., one that calls itself. The run time of the algorithm is then expressed as a recurrence relation which, upon solution, indicates the run time as a function of the input size. A classical example is Strassen's algorithm, which multiplies two n × n matrices using seven multiplications of n/2 × n/2 submatrices; these multiplications are performed recursively. Coppersmith and Winograd have proposed an algorithm that takes only O(n^2.376) time. This is a complex algorithm, details of which can be found in the reference supplied at the end of this article.

Consider also the problem of searching: given an array a[] of n numbers and an element x, the problem is to check whether x is a member of a[]. We have already seen one such algorithm in Section 2. We now turn to sorting, and assume that the elements to be sorted are from a linear order.

If no other assumptions are made about the keys to be sorted, the sorting problem will be called general sorting or comparison sorting. In this section we consider general sorting as well as sorting with additional assumptions.

The first algorithm is called selection sort. Let the input numbers be in the array a[1 : n]. We first find the minimum of these n numbers by scanning through them. Let this minimum be in a[i]. We exchange a[1] and a[i], then repeat the process on a[2 : n], and so on.
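
A short Python sketch of selection sort (0-indexed, unlike the text's a[1 : n]):

    def selection_sort(a):
        n = len(a)
        for j in range(n - 1):
            # Find the index of the minimum of a[j:] by scanning.
            i = min(range(j, n), key=lambda t: a[t])
            a[j], a[i] = a[i], a[j]     # exchange it into position j
        return a

    # Example: selection_sort([7, 2, 9, 4]) returns [2, 4, 7, 9].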


To merge two sorted sequences X and Y, in general, at any given time we compare the current minimum element of X with the current minimum of Y, output the smaller of the two, and delete the output element from its sequence.
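
A minimal Python sketch of this merge step:

    def merge(X, Y):
        out = []
        i = j = 0
        while i < len(X) and j < len(Y):
            if X[i] <= Y[j]:            # compare the two current minima
                out.append(X[i]); i += 1
            else:
                out.append(Y[j]); j += 1
        return out + X[i:] + Y[j:]      # append whatever remains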