 ## Introduction To Algorithms

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein


An extensively revised edition of a mathematically rigorous yet accessible introduction to algorithms.

What is dynamic programming?

How's it different from recursion, memoization, etc?

I've read the wikipedia article on it, but I still don't really understand it.

Here is my answer from a similar topic.

If you want to test yourself, my picks for online judges are

and of course

You can also check out good universities' algorithms courses

After all, if you can't solve a problem, ask on SO; plenty of algorithm addicts hang out here

I can't figure out the principles of dynamic programming, and I really do want to. DP is very powerful; it can solve problems like this:

Getting the lowest possible sum from numbers' difference

So, can you suggest good books or articles (preferably with real code examples) that would explain to me what dynamic programming is? I really want simple examples first of all; then I'll move on.

In short, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems, that is, solving a problem step by step.

I hope these links will help at least a bit.

Dynamic programming is a useful technique for optimizing hard problems by breaking them up into smaller subproblems. By storing and re-using partial solutions, it avoids the pitfalls of a greedy algorithm. There are two kinds of dynamic programming: bottom-up and top-down.

In order for a problem to be solvable using dynamic programming, it must possess the property called optimal substructure. This means that, if the problem is broken up into a series of subproblems and the optimal solution for each subproblem is found, then the overall optimal solution can be assembled from the solutions to these subproblems. A problem that does not have this structure cannot be solved with dynamic programming.

# Top-Down

Top-down is better known as memoization. It is the idea of storing past calculations in order to avoid re-calculating them each time.

Given a recursive function, say:

``````fib(n) = 0                    if n = 0
         1                    if n = 1
         fib(n-1) + fib(n-2)  if n >= 2
``````

We can easily write this recursively, straight from its mathematical form, as:

``````function fib(n)
    if(n == 0 || n == 1)
        return n
    else
        return fib(n-1) + fib(n-2)
``````

Now, anyone who has been programming for a while, or knows a thing or two about algorithmic efficiency, will tell you that this is a terrible idea. The reason is that, at each step, you re-calculate the value of fib(i) for i in 2..n-2 over and over, which makes the running time exponential.

A more efficient approach is to store these values as they are computed, creating a top-down dynamic programming algorithm.

``````m = map(int, int)
m[0] = 0
m[1] = 1

function fib(n)
    if(m[n] does not exist)
        m[n] = fib(n-1) + fib(n-2)
    return m[n]
``````

By doing this, we calculate fib(i) at most once.
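The memoized pseudocode above translates almost directly into Python; here is a sketch (the dict-based table and the names are mine, not from the answer):

```python
# Top-down (memoized) Fibonacci: store each fib(i) the first time it is
# computed, so every value is calculated at most once.
memo = {0: 0, 1: 1}

def fib(n):
    if n not in memo:
        memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]

print(fib(50))  # 12586269025 - instant; the naive version would take ages
```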

# Bottom-Up

Bottom-up stores partial results just as top-down does. The difference, however, is that bottom-up solves the subproblems in a fixed order, using recurrence relations to build up the final result.

In most bottom-up dynamic programming problems, you are trying to either minimize or maximize something. You are given two (or more) options at any given point, and you have to decide which is better for the problem you're trying to solve. These decisions, however, are based on the choices you made earlier.

By making the optimal decision at each point (each subproblem), you ensure that your overall result is optimal.

The most difficult part of these problems is finding the recurrence relationships for solving your problem.

To pay for a bunch of algorithm textbooks, you plan to rob a store that has n items. The problem is that your tiny knapsack can hold at most W kg. Knowing the weight (w[i]) and value (v[i]) of each item, you want to maximize the value of your stolen goods, which together must weigh at most W. For each item, you must make a binary choice: take it or leave it.

Now, you need to find the subproblem. Being a very bright thief, you realize that the maximum value obtainable from the first i items with a weight limit w can be represented as m[i, w]. In addition, m[0, w] (0 items, weight limit w) and m[i, 0] (i items, weight limit 0) will always have value 0.

so,

``````m[i, w] = 0 if i = 0 or w = 0
``````

With your thinking full-face mask on, you notice that item i can only be added if its weight is no more than the remaining capacity, that is, the difference between your weight limit and the current weight of the bag. And even when it fits, it is only worth taking if the value it adds beats the best you could do without it.

``````m[i, w] = 0                                        if i = 0 or w = 0
          m[i-1, w]                                  if w[i] > w
          max(m[i-1, w], m[i-1, w - w[i]] + v[i])    if w[i] <= w
``````

These are the recurrence relations described above. Once you have these relations, writing the algorithm is very easy (and short!).

``````v = values of items 1..n
w = weights of items 1..n
n = number of items
W = maximum weight of knapsack

m = array[0..n, 0..W] of int

function knapsack
    for j = 0..W
        m[0, j] = 0
    for i = 1..n
        m[i, 0] = 0
        for j = 1..W
            if w[i] <= j
                if v[i] + m[i-1, j - w[i]] > m[i-1, j]
                    m[i, j] = v[i] + m[i-1, j - w[i]]
                else
                    m[i, j] = m[i-1, j]
            else
                m[i, j] = m[i-1, j]

    return m[n, W]
``````
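For comparison, that table-filling can be written as a short runnable sketch in Python (the function and variable names are mine; `max` collapses the inner if/else):

```python
# Bottom-up 0/1 knapsack: m[i][j] is the best value using the first i
# items under weight limit j, filled row by row from the recurrence.
def knapsack(values, weights, W):
    n = len(values)
    v = [0] + values   # 1-based item indexing, as in the pseudocode
    w = [0] + weights
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i] <= j:  # item i fits: take the better of skip/take
                m[i][j] = max(m[i - 1][j], m[i - 1][j - w[i]] + v[i])
            else:          # item i is too heavy at this limit
                m[i][j] = m[i - 1][j]
    return m[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```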

# Example Problems

Luckily, dynamic programming has become really popular in competitive programming. Check out Dynamic Programming on UVAJudge for some practice problems that will test your ability to implement and find recurrences for dynamic programming problems.

As part of a correspondence Mathematics MSc, I did a course based on the book http://www.amazon.co.uk/Introduction-Programming-International-mathematics-computer/dp/0080250645/ref=sr_1_4?ie=UTF8&qid=1290713580&sr=8-4 It really takes more of a mathematical angle than a programming angle, but if you can spare the time and effort, it is a very thorough introduction, and it seemed to work for me as a course that was run pretty much out of the book.

I also have an early version of the book "Algorithms" by Sedgewick, and there is a very readable short chapter on dynamic programming in there. He now seems to sell a bewildering variety of expensive books. Looking on amazon, there seems to be a chapter of the same name at http://www.amazon.co.uk/gp/product/toc/0201361205/ref=dp_toc?ie=UTF8&n=266239

Almost all introductory algorithm books have a chapter on dynamic programming. I'd recommend:


I've always been a largely independent learner gleaning what I can from Wikipedia and various books. However, I fear that I may have biased my self-education by inadvertent omission of topics and concepts. My goal is to teach myself the equivalent of an undergraduate degree in Computer Science from a top university (doesn't matter which one).

To that end, I've purchased and started reading a few academic textbooks:

As well as a few textbooks I have left over from classes I've taken at a mediocre-at-best state university:

My questions are:

• What topics aren't covered by this collection?
• Are there any books that are more rigorous or thorough (or even easier to read) than a book listed here?
• Are there any books that are a waste of my time?
• In what order should I read the books?
• What does an MIT or Stanford (or UCB or CMU ...) undergrad learn that I might miss?

Software engineering books are welcome, but in the context of academic study only please. I'm aware of Code Complete and the Pragmatic Programmer, but I'm looking for a more theoretical approach. Thanks!

I think you can use most of the other books for reference and just absorb Programming Pearls in its entirety. Doing so would make you better than 90% of the programmers I've ever met.

The "Gang of Four" Design Patterns book. The Design Patterns course I took in college was probably the most beneficial class I've ever taken.

First, I wouldn't worry about it. But if you'd like a book to learn some of the abstract CS ideas, I'd recommend The Turing Omnibus or Theoretical Introduction to Programming.

If I were deciding between hiring two programmers and neither had much experience, but one had a CS degree and the other didn't, I'd hire the one with the CS degree. But when you get to comparing two programmers with a dozen years of experience, the degree hardly matters.

I'm in the same boat: studying computer science in my free time after work. These are some of the books I have on my shelf right now.

Will update this list further as soon as I finish them... :-)

File Structures: An object oriented approach with C++

A lot of good info about block devices and file structuring which you won't find in any of the books you listed. It got a few critical reviews on Amazon because people didn't like his code examples, but the point of the book is to teach the concepts, not give cut and paste code examples.

Also make sure to get a book on compilers

Biggest two omissions I see:

For operating systems I prefer Tanenbaum to Silberschatz, but both are good:

And as for the order, that would depend on your interests. There aren't many prerequisites; automata for compilers is the most obvious one, so first read the automata book and then the dragon one.

I don't know all the books you have, but the ones I know are good enough so that may mean the others are decent as well.

You are missing some logic and discrete math books as well.

And let's not forget some database theory books!

I've a good idea of what Big-O is, and I also know a few basic sorting algorithms, although, for some reason, I was never comfortable with them, and I keep forgetting them. I've been programming for 4 years in Java, Python, C and C++; I've been a decent programmer. Now, I want to move beyond learning programming languages and start learning algorithms.

I tried 'Introduction to Algorithms' by Cormen et al., but the math is too dense for me (or maybe I'm too dense for the math in that book).

Now, I'm planning to take up Algorithm Design Manual by Steve Skiena. Would you recommend it for my situation? Do you have any other recommendations if you think this is not the one for me?

If you can afford it (or your employer pays for it), and you program in Java, I'd suggest Data Structures and Algorithms in Java. It covers the same topics you find in other books, but it makes them easy to apply and understand if you're used to programming in Java. For example, C++ data structure books don't usually spend a great deal of time on hashes, since hash-based structures aren't as common in C++ programming. In Java, however, hashes are very common, and every object has a hashCode method. The book combines a good mix of theory and practice.

"Introduction to Algorithms" by Cormen, Leiserson & Rivest. See http://www.amazon.com/Introduction-Algorithms-Second-Thomas-Cormen/dp/0262032937

No, I don't think so. Try Data Structures and Algorithms in 24 Hours by Robert Lafore.

I know C and C++ and I have some experience with Java, but I don't know too much about Algorithms and Data Structures.

I did a search on Amazon, but I don't know which book I should choose. I don't want a book that rests only on the theoretical part; I want the practical part too (probably more than the theoretical one :) ).

I don't need the code to be implemented in a particular language, but if it's in Java, I would probably be happier. :)

Introduction to Algorithms by Cormen et al. is a standard introductory algorithms book, and is used by many universities, including my own. It has pretty good coverage and is very approachable.

And anything by Robert Sedgewick is good too.

If you don't need a complete reference to most of the algorithms and data structures in use, and just want to get acquainted with common techniques, I would recommend something more lightweight than Cormen, Sedgewick, or Knuth. I think Algorithms and Data Structures by N. Wirth is not a bad choice, even though it was printed long ago.

I want to learn algorithms using some very basic simple tutorials. Are there any out there? I have heard of recursion and stuff and I would like to get good at it. Any help would be appreciated.

I would start out by taking a look at EternallyConfuzzled, which contains great tutorials for basic data structures and algorithms, including linked lists, binary search trees, and sorting and searching algorithms. If you want to learn more after this, I would recommend the following books in order of increasing complexity, completeness, and required math knowledge:

I suggest that you start from sorting algorithms. Read the related wikipedia page, skip the O(n log n) stuff, and focus on the implementations of, say, insertion sort, merge sort, and quick sort. Familiarize with binary searching. Also, learn about some basic data structures, such as vectors, linked lists, stacks, their implementation, and what they are useful for. (More often than not, an algorithm to solve a problem goes together with a suitable data structure.) Once you are confident with different algorithms and data structures, you can dive in a more complete treatise such as the book by Cormen et al.
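To make the starting point concrete, here are minimal versions of two of the suggestions above, insertion sort and binary search (a sketch for self-study; the names are mine):

```python
# Insertion sort: grow a sorted prefix by shifting larger elements
# right and dropping each new key into its place. O(n^2) worst case.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

# Binary search on a sorted list: halve the search range each step,
# O(log n). Returns the index of target, or -1 if absent.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = insertion_sort([5, 2, 9, 1, 7])
print(data)                    # [1, 2, 5, 7, 9]
print(binary_search(data, 7))  # 3
```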

As for recursion, it is not an algorithm in itself. It is instead a technique that some algorithms employ to solve a problem, when the latter can be naturally split into subproblems. The technique of splitting a problem, solving the subproblems separately and then merging their solutions to obtain a solution for the original problem, is called "divide et impera", or "divide and conquer". (Recursion is also the related feature of most programming languages, where it basically means "functions that call themselves".)

The most cited, the most trivial, and the most useless example of a "recursive algorithm", is the one to compute factorials. Don't mind it. Instead, read about the Tower of Hanoi problem, which admits a simple and elegant recursive solution, and again, study some sorting algorithms, for many of them are indeed recursive.
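Tower of Hanoi really is a nice first recursive exercise; here is a short Python sketch of the classic solution (the names are mine):

```python
# Move n disks from src to dst using aux as a spare peg. The recursion:
# move the n-1 smaller disks out of the way, move the largest disk,
# then restack the smaller disks on top. Takes exactly 2^n - 1 moves.
def hanoi(n, src, dst, aux, moves):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)  # clear the n-1 smaller disks
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, moves)  # restack the smaller disks

moves = []
hanoi(3, 'A', 'C', 'B', moves)
print(len(moves))  # 7
```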

In an attempt to be a better programmer, I am planning to read a lot of books and learn at least one new language (which I think is going to be python) during the 3-month long holiday that I am going to have.

The list of books that I am planning to read --

Ebooks:

The list of things that I want to do --

• Start using Linux (probably starting with Ubuntu).
• Learning to use the bunch of tools mentioned here.
• Setup a blog (hopefully).

I enjoy watching lectures as well, so, along with Introduction to Algorithms I am going to watch a bunch of Stanford Courses.

A little background: I am a 17-year-old guy and I really enjoy programming and every aspect of it. I have been programming in C and C++ for a while now; I'm stronger in the former. I was hoping to make thorough use of the time I have on hand. Any changes needed to the plan, or any additions?

EDIT: Did not mention programming projects.

1. Making a game using Allegro.
2. Using QT4 to create a GUI-based database for my school.

Do not just passively read all that information; instead, practice after every book chapter or lecture. Practice by writing those algorithms or regexes yourself, check your previous code in the light of what Code Complete has taught you, and so on.

If that doesn't give you enough time to do it all, it doesn't matter; you're better off learning properly than swallowing material as fast as you can. All of that is probably too much for three months anyway, so prioritize by your interests. Take one book and a set of lectures and go at them until finished, then pick up the next, never forgetting to put the concepts you've seen into practice.

Along with the excellent books you and others listed here I recommend The Little Schemer which will give you a perspective of functional programming. Do not lock yourself into imperative languages (C, C++, C#, Pascal, Java,.. ) while you can dive into different paradigms easily. 17 is a great age :-) Enjoy your journey!

These probably won't fit within your three month plan, but should you wish to master C and Unix, long have I been glad that I spent the time to follow The Loginataka. After that, I found Lions' Commentary on UNIX 6th Edition to be deeply enlightening.

Programming Pearls by Jon Bentley

Code by Charles Petzold

Even as it stands, you're going to have a very busy, hopefully productive break. Good luck!

Response to question edit: If you're interested in learning about databases, then I recommend Database in Depth by Chris Date. I hope by "create a GUI-based database" you mean implementing a front-end application for an existing database back end. There are plenty of database solutions out there, and it will be well worth it for your future career to learn a few of them.

I know Prim's algorithm and I know its implementation, but there is always a part I skip that I want to ask about now. It was written that Prim's algorithm implemented with a Fibonacci heap is `O(E + V log(V))`, and my questions are:

• what is a Fibonacci heap in brief?
• How is it implemented? And
• How can you implement Prim's algorithm with a Fibonacci heap?

I implemented Dijkstra using Fibonacci heaps a few years ago, and the problem is pretty similar. Basically, the advantage of Fibonacci heaps is that decrease-key becomes a constant (amortized) operation, which is very appropriate for Prim and Dijkstra, where at each step you may have to perform that operation.

Why it's good

The complexity of those algorithms using a binomial heap (which is the more "standard" way) is O(E * log V), because, roughly, you will try every edge (E), and for each of them you will either add the new vertex to your heap (log V) or decrease its key (log V), and then have to extract the minimum of your heap (another log V).

Instead, when you use a Fibonacci heap, the cost of inserting a vertex or decreasing its key is constant (amortized), so you only pay O(E) for that. BUT deleting a vertex is O(log V), and since in the end every vertex is removed, that adds O(V * log V), for a total of O(E + V * log V).

So if your graph is dense enough (e.g. E >> V), using a Fibonacci heap is better than a binomial heap.

How to

The idea is thus to use the Fibonacci heap to store all the vertices reachable from the subtree you have already built, indexed by the weight of the smallest edge leading to each. If you understood the implementation of Prim's algorithm using another data structure, there is no real difficulty in using a Fibonacci heap instead: just use the insert and deletemin operations of the heap as you normally would, and use the decreasekey operation to update a vertex when you relax an edge leading to it.

The only hard part is to implement the actual Fibonacci heap.
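If you just want to experiment with the shape of the algorithm before tackling the heap itself, a binary heap with "lazy deletion" is a common stand-in: instead of decrease-key, push a fresh entry and skip stale ones when popped. A sketch of that workaround (not the Fibonacci-heap version discussed above; names are mine):

```python
import heapq

# Prim's algorithm with a plain binary heap (heapq). There is no
# decreasekey, so a new entry is pushed instead and outdated entries
# are skipped on pop; a Fibonacci heap avoids these duplicates.
# graph: node -> list of (edge weight, neighbor).
def prim_total_weight(graph, start):
    visited = set()
    heap = [(0, start)]
    total = 0
    while heap and len(visited) < len(graph):
        w, u = heapq.heappop(heap)
        if u in visited:
            continue              # stale entry (already connected), skip
        visited.add(u)
        total += w
        for weight, v in graph[u]:
            if v not in visited:
                heapq.heappush(heap, (weight, v))
    return total

g = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c')],
    'c': [(4, 'a'), (2, 'b')],
}
print(prim_total_weight(g, 'a'))  # 3: edges a-b (1) and b-c (2)
```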

I can't give you all the implementation details here (that would take pages), but when I did mine I relied heavily on Introduction to Algorithms (Cormen et al). If you don't have it yet but are interested in algorithms, I highly recommend that you get a copy of it! It's language-agnostic, and it provides detailed explanations of all the standard algorithms, as well as their proofs, and will definitely boost your knowledge and your ability to use them all, and to design and prove new ones. This PDF (from the Wikipedia page you linked) provides some of the implementation details, but it's definitely not as clear as Introduction to Algorithms.

I have a report and a presentation I wrote after doing that, which explain a bit how to proceed (for Dijkstra; see the end of the ppt for the Fibonacci heap functions in pseudo-code), but it's all in French... and my code is in Caml (and French), so I'm not sure how much that helps! And if you can understand some of it, please be indulgent; I was just starting to program, so my coding skills were pretty poor at the time...

I had a painful experience with the "Analysis of Algorithms" classes back in college, but I have recently found a need for the material in the real world. Anyway, I'm looking for a simple-yet-effective crash course. Any ideas?

Related Sidenote: It sure would be nice if there were a "Cartoon Guide to Algorithm Analysis", taught by Dilbert.

UPDATE: A very similar question can be found at: How to get started on ALGORITHMS?

You don't say a lot about the rest of your background. For straight-out analysis of algorithms, that is, the methods by which you evaluate an algorithm to find its order statistics and behavior: if you're comfortable with mathematics in general (say you've had two years of calculus, or a good abstract algebra course), then you can't really do much better than to read Knuth Volume One.

The usual "Analysis of Algorithms" course is also a data structures course, so a data structures text might be better if you also need to learn about lists, trees, etc. My favorite in graduate school was Aho, Hopcroft and Ullman.

I like Cormen as a reference; it also serves as an admirable doorstop, for the execution of large icky bugs, for clamping small glue joints (the slick cover releases most wood glues), and as an end book because it holds a bookend in place. Wouldn't recommend it as an intro text.

I recommend Data Structures and Algorithms by Adam Drozdek available in Java and C++ editions.

The single most helpful tool I've had for algorithms is Introduction to Algorithms.

It's the single best resource I know of for algorithms. It covers so many topics in good depth and has examples throughout. I still refer to it quite frequently.

Without that book my algorithms analysis class would have been a pain.

I like Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. It's a bit heavy, but I found it to be a decent reference text.

There are a lot of good books on the subject. I like An Introduction to the Analysis of Algorithms. Also check out the algorithms course on MIT OpenCourseWare (using CLRS as the course text). It's a little bit deep, but having it online allows you to go at your own pace.

A couple of other books that I've started reading recently are Algorithms in a Nutshell and the Algorithm Design Manual. They both take a lighter approach than most algorithms books. Instead of heavy math and formal proofs these books give you realistic problem statements and show you the steps taken to refine an algorithm. They also show you how to estimate and measure the complexity of a solution. I would highly recommend either book.

Is there a master list of the Big-O notation for everything? Data structures, algorithms, operations performed on each, average-case, worst-case, etc.

Try "Introduction to Algorithms" by Cormen, Leiserson, and Rivest. If it's not in there, it's probably not worth knowing.

The Cormen book is more about teaching you how to prove what Big-O would be for a given algorithm, rather than rote memorization of algorithm to its Big-O performance. The former is far more valuable than the latter, and requires an investment on your part.

My degree is in Electrical and Computer Engineering, but I'm currently employed as a Software Engineer. I took all of the algebra, geometry, and calculus classes that one would expect from someone with my degree; however, I must admit I think I learned just enough to pass the tests but never really saw a use for it, and therefore never retained much of the material.

Now that I've matured some, I see uses for it all of the time. I KNOW there are lots of places where math knowledge would improve my coding, so I'm ready to relearn the old stuff and learn some new stuff.

What are your favorite resources out there? (Resources that can tie math into programming are even better if you have any!) Books? Websites? Blogs?

Or as I like to refer to it: The guy that made me realize I hadn't actually invented or discovered anything that hadn't been known for years.

I really like the book Mastering Technical Mathematics 3rd Edition. It's kind of a bird's-eye view of mathematics with a technical focus. It starts out with such simple concepts as addition and multiplication, but as it explains the concepts it also explains how computers do the calculations. About half-way through you'll find quadratic equations and calculus. Page 442 begins the discussion of "General Time-Space Hypervolume". I didn't see anything about matrix math in there, but for a good "everything about math in a nutshell"-type book it's great.

Math Refresher for Scientists and Engineers (by John R. Fanchi)

Just-In-Time Math for Engineers (by Archibald L. Fripp, Jon B. Fripp and Michael L. Fripp)

When we start getting into algorithm design and more discrete computer science topics, we end up having to prove things all of the time. Every time I've seen somebody ask how to become really good at proofs, the common (and possibly lazy) answer is "practice".

Practicing is all fine if you have the basics down, but how do you get into the mind set for mathematical proofs? When did induction click? What resources are best for teaching these topics? What foundation topics should be researched prior to indulging in proof-writing?

I'll start off my answer by admitting that as a CS student, I had a really tough time grasping a formal way of thinking, and it's never easy, unless you have a talent for it.

I'm afraid there is no better answer than practice and study.

A formal mathematical and algorithmic way of thinking and visioning problems is a skill which first demands a very deep understanding of the subjects you are dealing with. Second, it requires you have good knowledge of existing proofs. Try to envision yourself as some of the great scientists who came up with the algorithms you are studying. Understand how you would have tried to tackle that specific problem. Then see how they proved the correctness of their algorithm.

I can only recommend the greatest textbook in this subject which is Intro to Algorithms by CLRS. If you go through it from start to finish, including every exercise, you will enhance your skills.

I'm afraid that "practice" really is the best answer here.

It's very similar to programming: once you get the hang of it, you find patterns that solve problems particularly well, and you can form a picture of the high-level design of novel systems you've never implemented before. Neophyte programmers, however, aren't aware of patterns: they hack away at code until they accidentally stumble on some solution that appears to "work".

When you're given a problem to prove, you can usually identify its properties ("Do I have a set of distinct objects?", "Am I generating permutations?", "Am I looking to minimize/maximize some value?", etc.). Sooner or later, proofs will clump together into vaguely similar groups, where techniques used to solve one problem easily apply to novel variations.

After reading an introductory book on algorithms and data structures I am now craving for examples on how to combine these for optimal efficiency.

For instance, you can combine hashmaps with specific sorting algorithms to create a simple text search program.
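To make that hashmap-plus-sorting combination concrete, here is a toy sketch (the document set, names, and ranking scheme are all my own invention): a dict acts as the inverted index, and sorting ranks the results.

```python
from collections import defaultdict

# Build an inverted index: word -> set of document ids containing it.
docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick dog tricks",
}
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    hits = defaultdict(int)           # doc id -> matching word count
    for word in query.split():
        for doc_id in index[word]:
            hits[doc_id] += 1
    # sort so the best-matching documents come first
    return sorted(hits, key=lambda d: -hits[d])

print(search("quick dog")[0])  # 3 - the only doc containing both words
```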

Is there any good book or online resource for this?

Good book (worked for me):

Data Structures and Algorithm Analysis in Java (Second Edition)

ISBN: 0-321-37013-9

To answer my own question, it seems I just have to read up on a lot of algorithms and real world use cases.

As for books, I've ordered

Any good algorithms book is going to have a chapter or two on the importance of choosing the right data structures. I recommend the following books:

I also recommend you check out the Stony Brook Algorithm Repository, particularly the lectures.

http://www.amazon.com/Structure-Interpretation-Computer-Programs-Second/dp/0070004846/ref=sr_1_1?ie=UTF8&qid=1301994609&sr=8-1

I can warmly recommend this book. It is rather abstract with examples in Scheme (a Lisp dialect) but it will really change the way you think about programs, data and algorithms.

We have an application that requires assignment of jobs to resources. The resources have a number of attributes that define their suitability to a particular job; some are preferences, some are hard constraints (all of the membership variety, e.g. "resource A is suited to jobs with color X, Y, or Z").

Resources have a cost associated with them (the duration they spend on-line). We have the ability to recruit resources -- this takes a variable amount of time. We can recruit for a fixed interval of time.

To give an idea of scale: There will be about 20 resources at any given time, 100 outstanding jobs. Completion of jobs takes 5-15 seconds. Recruiting a resource takes about 1-2 minutes, and we can recruit from 1-30 minutes of time (rerecruiting is allowed). We don't have much heads-up on jobs being submitted, maybe a few seconds.

The goal is completion of jobs with lowest cost (resource usage) for a given average latency (job completion time).

I'd appreciate pointers to algorithms, software libraries, or approaches to solving this problem.

This problem can be viewed as a linear optimization problem, so that should be a start. I have used this library; however, it includes quite a lot of other things, so it may be overkill. Alternatively, it is not difficult to develop your own; this book has a good chapter on LP.

What are the most common problems that can be solved with both these data structures?

It would be good for me to have also recommendations on books that:

• Implement the structures
• Implement and explain the reasoning of the algorithms that use them

The Algorithm Design Manual contains some interesting case studies with creative use of graphs. Despite its name, the book is very readable and even entertaining at times.

The first thing I think about when I read this question is: what types of things use graphs/trees? Then I think backwards to how I could use them.

For example, take two common uses of a tree:

• The DOM
• File systems

The DOM, and XML for that matter, resemble tree structures. It makes sense, too. It makes sense because of how this data needs to be arranged. A file system, too. On a UNIX system there's a root node, and branching down below. When you mount a new device, you're attaching it onto the tree.

You should also be asking yourself: does the data fall into this type of structure? Create data structures that make sense to the problem and the rest will follow.

As far as which is easier, I think that's relative. Are you good with recursive functions for traversing a tree or graph? What if you need to balance the tree?

Think about a program that solves a word search puzzle. You could map all the letters of the word search into a graph and check surrounding nodes to see whether the string matches any of the words. But couldn't you do the same with a single array? All you really need to do is move an index by one to check the letters to the left and right, and by the grid width to check the letters above and below. Solving this problem with a graph isn't difficult, but it can create a lot of extra work and difficulty if you're not comfortable using them; of course that shouldn't discourage you from doing it, especially if you are learning about them.
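The flat-array version of that grid walk looks something like this (the letter grid and the names are made up for illustration):

```python
# A 4x4 letter grid stored as one flat string, row by row. Neighbor
# cells are reached by index arithmetic instead of explicit graph
# edges: left/right are +-1 (within the row), up/down are +-WIDTH.
WIDTH = 4
grid = "catsxdogmaxwbirz"  # rows: cats / xdog / maxw / birz

def neighbors(i):
    row, col = divmod(i, WIDTH)
    result = []
    if col > 0:
        result.append(i - 1)        # left
    if col < WIDTH - 1:
        result.append(i + 1)        # right
    if row > 0:
        result.append(i - WIDTH)    # up
    if i + WIDTH < len(grid):
        result.append(i + WIDTH)    # down
    return result

print([grid[j] for j in neighbors(5)])  # ['x', 'o', 'a', 'a'] around 'd'
```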

I hope that helps you think about these structures. As for a book recommendation, I'd have to go with Introduction to Algorithms.

@DavidJoiner / all:

FWIW: A new version of the Algorithm Design Manual is due out any day now.

The entire course that Prof. Skiena developed this book for is also available on the web:

http://www.cs.sunysb.edu/~algorith/video-lectures/2007-1.html

Algorithms in Java, Part 5 by Robert Sedgewick is all about graph algorithms and data structures. This would be a good first book to work through if you want to implement some graph algorithms.

I've been noticing answers on stack overflow that use terms like these, but I don't know what they mean. What are they called and is there a good resource that can explain them in simple terms?

Big O notation characterizes functions according to their growth rates

If you don't want to drill into details, you can very often approximate an algorithm's complexity by analyzing its code:

``````void simpleFunction(arg); // O(1) - the number of instructions is constant and doesn't depend on the input size

for (int i = 0; i < n; i++) { simpleFunction(element[i]); } // O(n)

for (int i = 0; i < n; i++) {      // O(n^2)
    for (int j = 0; j < n; j++) {
        simpleFunction(element[i]);
    }
}

for (int i = 1; i < n; i *= 2) {   // O(lg n) - note i must start at 1; starting at 0 would never terminate
    simpleFunction(element[i]);
}
``````

Sometimes it is not so simple to estimate the big O complexity of a function or algorithm; in such cases amortized analysis is used. The code above should serve only as a quick start.

So when someone asks you to give a O(n) or a O(nlogn) algorithm to compute something, how do you know what to answer? It seems the only way of being able to answer this sort of question is knowing the time complexities of various algorithms beforehand, rather than thinking up something on the spot. Am I correct in assuming this?

That's not too far from the truth. There are a couple of systematic methods, but they are hard work, and even then they tend to get "shortcut" at some point.

Big-O gives an upper bound. If someone asks you for any algorithm and you don't know, you can say O(n^n) and probably be correct (though there are even slower algorithms out there). It's not a tight bound, of course.

The "shortcut" is basically the point of inspiration when you spot a pattern that proves some particular upper bound.

99% of the time, most people just use their intuition to find a good way to look at the algorithm, and then do just enough to prove that bound. For example, instead of looking at the actual flow of execution, it is common to say "each item is processed at most x times, each time taking constant time" (for an O(n) algorithm). You may have missed the fact that e.g. at most log n items are ever processed, but if you really grok what the algorithm is doing, that's reasonably unlikely.

Of course this probably won't get you through an algorithms course.

For the systematic methods - well, there's the "MIT 6.046J / 18.410J Introduction to Algorithms" course videos which can be viewed on YouTube. The lecturer is one of the authors of a very well respected algorithms textbook.

You are right, you should know the time complexities for different algorithms to know this. You should know the time complexities for sorting, for finding items in a dictionary, in a hash table, union find, flow graphs, DFS, BFS, minimum spanning trees, etc. These are the basics.

Introduction to Algorithms should have you well covered.

Does anyone know of a good reference for canonical CS problems?

I'm thinking of things like "the sorting problem", "the bin packing problem", "the travelling salesman problem" and what not.

edit: websites preferred

I don't think you'll find the answers to all those problems in only one book. I've never seen any decent, comprehensive website on algorithms, so I'd recommend you stick to the books. That said, you can always get some introductory material from canonical algorithm texts. There are three I usually recommend: CLRS, Manber, and Aho, Hopcroft and Ullman (this last one is a bit out of date in some key topics, but it's so formal and well-written that it's a must-read). All of them contain important combinatorial problems that are, in some sense, canonical problems in computer science.

After learning some fundamentals in graph theory you'll be able to move on to network flows and linear programming. These comprise a set of techniques that will ultimately solve most problems you'll encounter (linear programming with the variables restricted to integer values is NP-hard). Network flows deals with problems defined on graphs (with weighted/capacitated edges), with very interesting applications in fields that seemingly have no relationship to graph theory whatsoever. THE textbook on this is Ahuja, Magnanti and Orlin's. Linear programming is some kind of superset of network flows, and deals with optimizing a linear function of variables subject to restrictions in the form of a linear system of equations. A book that emphasizes the relationship to network flows is Bazaraa's.

Then you can move on to integer programming, a very valuable tool that presents many natural techniques for modelling problems like bin packing, task scheduling, the knapsack problem, and so on. A good reference would be L. Wolsey's book.

You can probably find the best in an algorithms textbook like Introduction to Algorithms. Though I've never read that particular book, it's quite renowned for being thorough and would probably contain most of the problems you're likely to encounter.

I tend to finish my work-related tasks pretty quickly, and I get to have some free time on my hands. What should I write in order to become a better developer? I'm familiar with c++/java/perl/python/ruby.

I wrote the following stuff on my own:

• simple web server
• simple web clients (different languages)
• DSLs, internal and external
• some lexers
• code indenters (source beautifiers)
• simple IDE

I would like some suggestions about some software that would be both challenging and fun to write.

Write a binary search tree and implement insertion, deletion, search, etc. When that's done write a splay tree. If that's not to your liking, pick something out of Introduction to Algorithms and write that instead.

What are the best ways to learn algorithms for programming contests such as USACO. I need to start learning algorithms as I have just gotten into the silver division. Are there any good books or tutorials to learn algorithms and techniques such as Dijkstra's, Dynamic Programming, Flood-fill, etc. in Java and actually be able to know how to implement them for problems? Thanks a lot for the help!

1. PRACTICE! This is the most important point. Solve problems regularly in online judges like SPOJ, UVA, etc. Solving more problems will familiarize you with the type and format of questions that are asked in programming competitions. This way, you will also increase your ability to derive your own algorithms and see through problems.

2. Get Introduction to Algorithms by Cormen et al. It is an excellent book for learning about and analyzing algorithms and data structures.

Where can I find e-books on Data Structures and Algorithms? I am currently reading "Computer Algorithms: Introduction to design and Analysis" by Sara Baase and Allen Van Gelder. I would like to have additional information to supplement what's in this book. Also some references on worst-case analysis would be great.

Introduction to Algorithms

The Art of Computer Programming - by Donald Knuth (hard read, but well worth it, not recommended for a first algorithms book)

Concrete Mathematics - by Graham, Knuth and Patashnik (understanding the math behind algorithms)

I don't know if e-book versions are available for these, but if they are...these books will definitely give you the theory behind worst-case, and asymptotic analysis of algorithms.

As a self-taught computer programmer, I'm often at a loss to estimate the O() value for a particular operation. Yeah, I know off the top of my head most of the important ones, like for the major sorts and searches, but I don't know how to calculate one when something new comes along, unless it's blindingly obvious. Is there a good web site or text that explains how to do that? Heck, I don't even know what computer scientists call it, so I can't google it.

If you really want to learn this topic, then you probably need a standard theory/algorithms textbook. I don't know of any website that can actually teach you complexity analysis ("complexity" or "time complexity" is how you call those O() values; you might also want to google for "analysis of algorithms" or "introduction to algorithms" or such).

But before that -- a free option. There are slides from a course given by Erik Demaine and Charles Leiserson at MIT that are free and look great. I would definitely try to read them and see if that works for you. They are here.

Now, textbooks:

The classical choice for a textbook is Cormen et al's book Introduction to Algorithms (there might be a cheap version available to buy here and I remember seeing a free (possibly illegal) version online, but I don't remember where).

A more recent and modern-style book, which is IMO more fun to read and a better choice, is Kleinberg and Tardos' Algorithm Design.

Here are some websites with information (I got these by googling "algorithm analysis lecture notes" without the quotes):

The above is written by a computer science theorist. So programmers or other practical people might have some different opinions.

I'm studying for an exam in an introductory computer science course, and I have a problem with the topic of complexity, both in "regular" algorithms and in recursive algorithms (usually we get these questions written as C code).
I was wondering if there are online examples somewhere on the internet and/or a book that covers the topic at a basic level (not too basic).
The level of the questions should be at least like this one:

sample exercise

I have found a very good explanation in Introduction to Algorithms, but you need some mathematics knowledge to understand it.

The lecture (video) for the Introduction to Algorithms course from MIT regarding the Asymptotic Notation is here.

I need to find an algorithm to find the best time to meet up for, let's say, a study group. The system has information about a group of students and their class schedules. The system should give a time for the meetup where there is no conflict with anyone's class schedule. What would be the best way to attack this problem? I was looking for a scheduling algorithm, but didn't find one that fits.

This is a matching problem and can be solved by a maximum-flow algorithm.

Each student and each study group is a node in a directed graph. Each student node has one unit of flow as input and is connected to every study-group node; each study-group node has unlimited output capacity. When the flow in the network is maximal, you have your correct combination.
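As a sketch of that flow idea (all names here are hypothetical, and I've simplified to one-student-per-slot capacities), unit-capacity max flow on a bipartite graph is equivalent to augmenting-path matching, e.g. Kuhn's algorithm:

```java
import java.util.Arrays;

public class StudyGroupMatcher {
    static int[] slotOf;  // slotOf[slot] = student currently assigned, or -1

    // Try to place `student` in one of its compatible slots, possibly
    // displacing a previously placed student along an augmenting path.
    static boolean augment(int student, int[][] free, boolean[] visited) {
        for (int slot : free[student]) {
            if (visited[slot]) continue;
            visited[slot] = true;
            if (slotOf[slot] == -1 || augment(slotOf[slot], free, visited)) {
                slotOf[slot] = student;
                return true;
            }
        }
        return false;
    }

    static int maxMatching(int students, int slots, int[][] free) {
        slotOf = new int[slots];
        Arrays.fill(slotOf, -1);
        int matched = 0;
        for (int s = 0; s < students; s++)
            if (augment(s, free, new boolean[slots])) matched++;
        return matched;
    }

    public static void main(String[] args) {
        // 3 students; free[s] lists the slots student s could attend
        int[][] free = { {0}, {0, 1}, {1, 2} };
        System.out.println(maxMatching(3, 3, free)); // 3 - everyone placed
    }
}
```

With unlimited-capacity group nodes, as in the answer above, you would run a general max-flow algorithm instead, but the augmenting-path mechanism is the same.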

In the vein of programming questions: suppose there's a collection of objects that can be compared to each other and sorted. What's the most efficient way to keep track of the smallest element in the collection as objects are added and the current smallest occasionally removed?

@Harpreet
That is not optimal. When an object is removed, erickson will have to scan the entire collection to find the new smallest.

You want to read up on binary search trees. MS has a good site to start down the path. But you may want to get a book like Introduction to Algorithms (Cormen, Leiserson, Rivest, Stein) if you want to take a deep dive.
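If all you need is cheap insert, peek-min, and remove-min rather than full ordering, a binary heap also fits; for instance, Java's built-in PriorityQueue:

```java
import java.util.PriorityQueue;

public class SmallestTracker {
    public static void main(String[] args) {
        // A binary heap gives O(log n) insert and remove-min, O(1) peek-min.
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        heap.add(5);
        heap.add(2);
        heap.add(8);
        System.out.println(heap.peek());  // 2 - the current smallest
        heap.poll();                      // remove the smallest
        System.out.println(heap.peek());  // 5
    }
}
```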

I am learning algorithms and need you guys to help me. I am a beginner, so forgive me if my question is not clear. While I am learning I am seeing things like NlogN, N^2, etc.

I don't really understand it clearly when it comes to checking the efficiency/performance of different algorithms using these notations. I understand logarithms very well, but the way they are used in relation to checking algorithm performance drives me mad.

I am asking if someone can point me to a tutorial where such notations are explained, so that I can get the basics down very well. I really want to understand them and am willing to learn.

Kap.

http://www.amazon.com/Structures-Algorithm-Analysis-Allen-Weiss/dp/0805390529 is one of the best books, and it will explain algorithms in an excellent way.

--Cheers

Buy Introduction to Algorithms. You can get a second hand version at an affordable price.

And/or view these great online video lectures from MIT built around aforementioned book.

By viewing those lectures, you'll understand how some algorithms have logarithmic time complexity, whereas some have exponential, etc.

I have been making websites in PHP and MySQL for almost ten years now but I have never used a framework. This means I have hand coded everything and I know how the code all works and interacts with itself.

Wanting to expand my horizons, I have picked up Ruby on Rails to learn another web language, framework, DB etc. I have been following the Ruby on Rails tutorial and it is going smoothly so far, but what bugs me is how much of a black box it feels. There's too much magic, and stuff happens just because it does. Examples of this "magic" include: if I add "resources :users" to the routes file, all of a sudden I have near-infinite possible links like /new, /user/1, /user/1/edit etc. Or if I want to add a column to my db table I have to do something like "rails generate migration add_value_to_table value:type" in the console and then "rake" the db.

I have been able to make stuff following the tutorial but I don't understand what I am doing. I know part of it will come through experience but I want to know how and why Rails is doing what it does.

What are some good resources, online and books, where I can learn how RoR works?

Yes, it takes a while to know what all the magic is, but you'll get there eventually if you stick with it.

The 'bible' for ruby on rails development is http://pragprog.com/book/rails4/agile-web-development-with-rails

The 'bible' for the ruby language itself is the 'pickaxe' book, with contributions from the ruby language author himself. http://www.amazon.com/Ruby-Programming-Language-David-Flanagan/dp/0596516177

Ryan Bates has done HUNDREDS of free screencasts and he is famous for having a really great approach, using the framework effectively. Every good Rubyist rates these highly. http://railscasts.com/

Many folks find the "zombies" courses really good. http://railsforzombies.org/

Finally I'll offer my own bookmarks site with over 50 sites for rails:

and 20+ sites for ruby at

While you are learning, a good IDE can help a lot. I used Eclipse, then NetBeans, then RubyMine (from our friends at JetBrains, well known for their Java IDE). RubyMine has the most features. It is not free, but for the price (somewhere in the \$24-\$75 range, depending on special offers) it's well worth the cost.

I would say that using the terms "black box" and "magic" is quite inadequate and maybe even a bit deprecating. I believe that the difference you are feeling comes from the fact that Ruby is a very different language than PHP, and that it is easier to code high-level abstractions and conventions in Ruby than in PHP. Rails is full of these abstractions and conventions, and they may be quite confusing, especially if you have no idea of how they internally work.

Maybe it's not Rails that you should be reading about. I'd say that you should try to understand Ruby in the first place. A good understanding of its blocks, its object model, and its mixins is mandatory in order to make that "black box" feeling go away.

Programming in a modern programming language ain't black magic. Debugging Fortran code by printing the code and using crystals over the papersheets to find the bugs was.

I've just now started reading an Algorithms book that defined Graphs as follows:

Graphs – which represent relationships between arbitrary pairs of objects. Figure 1.8(b) models a network of roads as a graph, where the vertices are cities and the edges are roads connecting pairs of cities. Graphs are likely the object in question whenever you seek a “network,” “circuit,” “web,” or “relationship.”

Figure 1.8(b) is this: What confuses me here is the following line:

... where the vertices are cities and the edges are roads connecting pairs of cities ...

Vertices are the dots, edges are the lines. Hence cities and roads.

I'm not sure what confuses you, but in general graphs are indeed used to model connections between objects.

If you have a bunch of objects (vertices) that may be "connected" to one another, a Graph would be the high level data structure to maintain it. I'm saying "high level" because in practice you will probably need supporting data structures to maintain a graph in memory/database/file: matrices, lists of links, many-to-many tables etc.

If the "direction" is not important, like in the case of the plot above (i.e. all roads are bidirectional), you have an "undirected graph". If the connection direction does have an importance (for example if there are unidirectional roads between cities), you'll have a "directed graph", where every edge is actually an "arrow", pointing at a certain direction.
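As a small illustrative sketch of those "supporting data structures" (the class and city names are my own), an undirected road graph kept as adjacency lists might look like:

```java
import java.util.*;

public class RoadMap {
    private final Map<String, List<String>> adjacency = new HashMap<>();

    // Roads are bidirectional, so record the edge in both directions.
    // For a directed graph, you would add it in one direction only.
    void addRoad(String a, String b) {
        adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    List<String> neighborsOf(String city) {
        return adjacency.getOrDefault(city, Collections.emptyList());
    }

    public static void main(String[] args) {
        RoadMap map = new RoadMap();
        map.addRoad("Dallas", "Houston");
        map.addRoad("Dallas", "Austin");
        System.out.println(map.neighborsOf("Dallas"));  // [Houston, Austin]
    }
}
```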

If you're very new to this, I recommend reading the relevant Wikipedia entry. For some "real" studying, I recommend Cormen et al's Introduction to Algorithms, the book I studied from, which is in my opinion one of the best computer science books ever written.

I am a server side Java programmer. In my recent job search, I have come across a few postings where they mention: 'Candidates with experience in algorithm development will be preferred'. What exactly does this refer to? This is a posting for a bank... so not a job for a research laboratory... just to clarify a bit.
When we use Java in applications, we use the APIs that implement algorithms...so technically we are not developing algorithms. Right?

"Algorithm development" sounds vague. Maybe the original technical requirement was knowledge of algorithms, and somewhere along the way, someone thought it didn't sound impressive enough, and rewrote it to "candidates with experience in algorithm development will be preferred".

I don't think it means "the ability to create new algorithms from scratch". Rather, you need to be able to recognize when a program could benefit from the use of some known algorithm or data structure, or a slight modification of one, and the ability to get that done. This is a crucial skill on many projects, and especially those where speed is important.

The generic algorithms provided by the Java class library (like `Arrays.sort`) make up a small fraction of what you might find in even an introductory algorithms textbook. (I'm not a Java hacker by trade, but is there even a heap sort?)

I have been searching for a week maybe for a book or tutorial with extensive data structures material in C, but I could not find one. I want to cover linked lists, binary trees, hash tables, graphs, etc. I want it in C, because I don't want to mess with OOP and I don't want to read about linked lists in a language without pointers. Some links will be appreciated.

As a beginner you can start with Classic Data Structures by D. Samanta. This is a good book on data structures. But keep in mind that it includes no code, only pseudocode, and after reading the pseudocode you will be able to implement it in C easily.

After that I would suggest Introduction to Algorithms, Second Edition by Thomas H. Cormen et al., one of the best books on data structures and algorithms.

For online tutorials: MIT Open Course Ware.

I'm currently learning to program, and I didn't take CS classes so I'm basically starting out at the bottom. I have been putting together code on and off for many years, but haven't really had a good understanding of the essential concepts needed for engaging in bigger projects. Object orientation is an obvious one, and I feel I'm beginning to understand some of the concepts there. Then there is a lot of buzz and methodology, such as MVC, UML, SCRUM, SOLID and so forth and so on. I've looked at many of these but I'm always stumped as most explanations seem to require some understanding of other concepts.

I want to learn this stuff the "right" way, so where do I begin?

What are the overarching constructs I need to understand that enable me to understand all the underpinnings of software architecture/design/development?

What am I missing?

Are there constructs and concepts that can and should wait until I've cleared the foundation?

When you are working with any modern general purpose language, it is probably a good idea to get a handle on patterns (MVC or Model-View-Controller is one). The book by the "gang of four" is a must read for this, or at least research a few and use it as a reference. clicky

Refactoring is another concept that should be in your arsenal. The book by Martin Fowler on this subject is a very nice read and helps you understand the aforementioned patterns better; a little explanation about UML is also included.

Can't post more than one hyperlink so...

search on amazon for: Refactoring, Improving the design of existing code

When you want to communicate your designs, UML (Unified Modelling Language) is the 'tool' of choice for many people. UML is large and unwieldy, but Martin Fowler (again) has managed to boil it down to the essentials.

search on amazon for: UML Distilled (make sure you get the most recent one)

SCRUM is one of many methods used to manage software development groups. I do not think there is much merit in learning it when you are just starting out or on your own, especially not in detail.

Hope it helps...

Stay away from ACRONYMS (including those you've listed) and Methodologies(tm). At least in the beginning.

Read good books. Start with this one: Pragmatic Programmer. Learn algorithms and data structures, possibly from Introduction to Algorithms by Cormen et al.

Write a lot of code. Practice is more important than anything else.

I need to modify a Binary Search Tree that I created to assure that it is balanced. I only need to modify the add and remove methods, according to my instructions. Here's what I currently have:

``````package proj;

public class BinarySearchTree<T extends Comparable<T>>{
public static void main(String[] args) {
BinarySearchTree<Integer> tree = new BinarySearchTree<Integer>();
}

private Node<T> root;
private int size;
String inorder = "";
String preorder = "";

public BinarySearchTree(){
root = null;
size = 0;
}

//adds a new item to the queue
public void add(T obj){
Node<T> n = new Node<T>(obj);
if( root == null ) {
root = n;
} else {
add(root, n);
}
size++;
}

private void add(Node<T> subtree, Node<T> n) {
if( subtree.getValue().compareTo(n.getValue()) > 0 ) {
if( subtree.getLeftChild() == null ) {
subtree.setLeftChild(n);
n.setParent(subtree);
} else {
add(subtree.getLeftChild(), n);
}
} else {
if( subtree.getRightChild() == null ) {
subtree.setRightChild(n);
n.setParent(subtree);
} else {
add(subtree.getRightChild(), n);
}
}
}

//returns the head of the queue
public T peek(){
Node<T> current = root;
while(current.getLeftChild() != null){
current = current.getLeftChild();
}
return current.getValue();
}

//removes the head of the queue and returns it
public T remove(){
if(root == null){
return null;
}

Node<T> current = root;
while(current.getLeftChild() != null){
current = current.getLeftChild();
}
if( current.getParent() == null ) {
root = current.getRightChild();
if(root != null){
root.setParent(null);
}
} else {
current.getParent().setLeftChild(current.getRightChild());
if(current.getRightChild() != null){
current.getRightChild().setParent(current.getParent());
}
}
size--;
return current.getValue();
}

//returns the position of an element in the queue, or -1 if it is not found
public int search(T searchItem){
String tempOrdered = inorder(root);
for(int i = 0; i<tempOrdered.length(); i++){
if(String.valueOf(tempOrdered.charAt(i)).equals(searchItem.toString())){
return i;
}
}
return -1;
}

//returns the total number of elements in the queue
public int getSize(){
return size;
}
public String inorder() {
inorder = "";
if( root == null )
return inorder;
return inorder(root);
}

//returns an in-order, comma-separated string of every element in the queue
private String inorder(Node<T> n){
if(n.getLeftChild() != null){
inorder(n.getLeftChild());
}
inorder += n.getValue();
if(n.getRightChild() != null){
inorder(n.getRightChild());
}
return inorder;
}

public String preorder() {
preorder = "";
if( root == null )
return preorder;
return preorder(root);
}

//returns a pre-ordered, comma-separated string of every element in the queue
private String preorder(Node<T> n){
preorder+= n.getValue();
if(n.getLeftChild() != null){
preorder(n.getLeftChild());
}
if(n.getRightChild() != null){
preorder(n.getRightChild());
}

return preorder;
}

//returns the height of the tree; returns -1 if the tree is empty
public int height(Node<T> n){
if(n == null){
return -1;
}
return Math.max(height(n.getLeftChild()), height(n.getRightChild()))+ 1;
}

//returns the root node
public Node<T> getRoot(){
return root;
}
}
``````

I'm not looking for someone to walk me through this assignment - simply looking for some advice as to how I should go about doing this so that I don't break the code half way in. I'm guessing that I'll need to do something to the effect of checking the balance factor of the tree each time something is added or removed, then reconstruct the tree or 'rotate' when it's unbalanced.

Thanks in advance for any advice. :) Appreciate all the tips.

-Chris

The AVL tree article on Wikipedia gives all you need to implement this kind of self-balanced tree (I especially like the picture showing rotations needed for rebalancing). Basically you need to implement left and right tree rotation and use it in your `add` and `remove` methods according to the rules given in the article.
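For illustration, here is a minimal self-contained sketch of the left rotation those rules rely on, written against a stripped-down node type (your Node<T> has equivalent accessors; rotateRight is the mirror image, and a real AVL implementation also updates parent links and heights):

```java
public class RotationDemo {
    static class Node {
        int value;
        Node left, right;
        Node(int v) { value = v; }
    }

    // Rotate the subtree rooted at `pivot` left and return the new subtree root.
    static Node rotateLeft(Node pivot) {
        Node newRoot = pivot.right;
        pivot.right = newRoot.left;  // pivot adopts newRoot's left subtree
        newRoot.left = pivot;        // pivot becomes newRoot's left child
        return newRoot;
    }

    public static void main(String[] args) {
        // right-heavy chain 1 -> 2 -> 3 becomes a balanced tree rooted at 2
        Node root = new Node(1);
        root.right = new Node(2);
        root.right.right = new Node(3);
        root = rotateLeft(root);
        System.out.println(root.value);        // 2
        System.out.println(root.left.value);   // 1
        System.out.println(root.right.value);  // 3
    }
}
```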

If you are more adventurous, try implementing a red-black tree. A good description with pseudo code can be found in Introduction to Algorithms.

I have learned about data structures a long time ago. I need to refresh my knowledge of basic topics in data structure for a job interview. Can anyone provide me with some resources or links about this topic?

Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein.

I'd recommend all times classic: "Data Structures and Algorithms" by Aho, Ullman and Hopcroft.

I'm working on a project where I need to perform pathfinding to find the route which costs the least. I don't really care if it's the shortest route possible. So far it seems A* is out of the question and I honestly do not understand Prim's algorithm.

Let me explain the kind of maps that I need to find routes on. This is an example map:

``````+------|-*----
+------|----|-
+--|--------|-
+@-|----------
``````

The "*" is the start location, the "@" is the destination. The "+" signs in a line indicate a direct route which a) costs the same as a single step, and b) halves the cost of the entire route.

This means there are 10 "steps" from the start position to the destination via the "+" route, which ends up with a cost of 5. There are 15 steps to use the left-most "|" route ("|" is a lower cost than "-", but worse than "+"), which ends up with a cost of 15. Obviously, the route with a cost of 5 is the route to use.

Now I'm having trouble implementing this in C#. I currently have a "step" function which moves and returns if the way was blocked or the cost of the step, and the new position. This works well, but at the moment it is extremely naive in that it'll go down a "|" if it finds one before a "+" (which means the entire trip costs significantly more, as it hasn't found the faster route).

I was thinking of marking each location as "visited", but it's completely plausible that the lowest-cost route will loop back on itself. There are also many different paths, each of which is unique, and each of which may use different path segments (that may have already been visited by a previous run). Obviously each path needs to be traversed in order to find the cheapest path, but I can't figure out how to do that without ending up searching the same routes over and over again.

If it makes it simpler, I can limit any movement to only move towards the destination (ie, can't go back up again after going down).

If anyone could provide some insight, that'd be great!

From what I understand, the '-' fields in your map are graph nodes. Each '-' node has at most 8 edges to neighboring '-' fields. 8 if you allow diagonal movement, otherwise only 4 neighboring '-' nodes are valid. There is no edge between a '-' node and a '|' node.

This is enough to implement a simple depth-first search / breadth-first search in which you keep a queue of unvisited nodes (LIFO for depth-first, FIFO for breadth-first) and a list of visited nodes (to avoid cycling). Both algorithms will be relatively inefficient, but can be easily improved upon.

I'm not sure what the meaning of your '+' nodes is. Is moving from one '+' to the next '+' node a free move? If so, you can model this using edge costs: a move from or to a '-' node has cost 1, a move from '+' to '+' has cost 0.

The breadth-first-search algorithm can be extended to Dijkstra's algorithm that calculates the shortest path between your source and destination as long as all graph edges are non-negative:

http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
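As a hedged sketch of the idea (not tied to your map format; graph construction is my own), Dijkstra with a priority queue over an adjacency list, where a zero-weight edge can model a free '+' to '+' step:

```java
import java.util.*;

public class Dijkstra {
    // graph[v] is a list of {neighbor, weight} edges out of v.
    static int[] shortestDistances(List<int[]>[] graph, int source) {
        int[] dist = new int[graph.length];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;
        PriorityQueue<int[]> queue = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        queue.add(new int[]{source, 0});
        while (!queue.isEmpty()) {
            int[] top = queue.poll();
            int node = top[0], d = top[1];
            if (d > dist[node]) continue;            // stale queue entry
            for (int[] edge : graph[node]) {
                if (d + edge[1] < dist[edge[0]]) {   // found a cheaper route
                    dist[edge[0]] = d + edge[1];
                    queue.add(new int[]{edge[0], dist[edge[0]]});
                }
            }
        }
        return dist;
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        List<int[]>[] graph = new List[4];
        for (int i = 0; i < 4; i++) graph[i] = new ArrayList<>();
        graph[0].add(new int[]{1, 1});  // 0 -> 1 costs 1
        graph[1].add(new int[]{2, 0});  // a free '+' style edge
        graph[0].add(new int[]{3, 5});  // direct but expensive
        graph[2].add(new int[]{3, 1});
        System.out.println(Arrays.toString(shortestDistances(graph, 0))); // [0, 1, 1, 2]
    }
}
```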

The Dijkstra algorithm can be further improved with the addition of a simple heuristic, making it the A* algorithm. If you have the coordinates of your goal in 2D, you could use the Euclidean distance between a node and the goal as a rough estimate of which node is best to follow. If the '+' fields are something of a tunnel through your map with zero cost to move, the A* algorithm may not help that much, because heuristically moving towards your destination will often be wrong if you should have moved towards the tunnel. If there are multiple tunnels, or tunnels not leading to your destination, there may not be a heuristic better than the naive Dijkstra algorithm.

Please note that it is impossible for the lowest-cost route to contain a loop: if the lowest-cost route contained a loop, stripping the loop would still yield a valid route to the goal with no greater cost, contradicting the assumption that we started from a lowest-cost route.

Have a look at Cormen's Introduction to Algorithms, or the relevant Wikipedia pages:

http://en.wikipedia.org/wiki/Shortest_path

http://en.wikipedia.org/wiki/Depth-first_search

http://en.wikipedia.org/wiki/A*_search_algorithm

I've got a school assignment to make a language analyzer that's able to guess the language of an input. The assignment states this has to be done by pre-parsing texts in defined languages, making statistics about letters used, combinations of letters, etc., and then making a guess based on this data.

The data structure we're supposed to use is simple multi-dimensional hashtables, but I'd like to take this opportunity to learn a bit more about implementing structures. What I'd like to know is what to read up on. My knowledge of algorithms is very limited, but I'm keen on learning if someone could point me in the right direction.

Without any real knowledge, just from reading up on different posts, I'm currently planning on studying undirected graphs as a data structure for letter combinations (and somehow storing the statistics within the graph as well) and Boyer-Moore for the per-word search algorithm.

Am I totally on the wrong track, and would these be impossible to implement in this situation, or is there something else superior for this problem?

If you can get your hands on a copy of Cormen et al. "Introduction to Algorithms"

http://www.amazon.com/Introduction-Algorithms-Second-Thomas-Cormen/dp/0262032937

It's a very very good book to read up on data structures and algorithms.

Recently I came across a question,

There is an unsorted array of n elements. Once we sort the array, the i-th index will hold a certain element. How would you find which element is going to be present at the i-th index, in `O(n)` complexity, on the unsorted array?

I tried many methods, and finally I came to the conclusion that we may need to use a hash map. But later I found that hash map implementations usually follow a tree structure, which has `O(log n)` complexity for insertion.

How shall I proceed?

You need a linear-time selection algorithm. Its running time is O(n) in the worst case. You can find its description in chapter "9.3 Selection in worst-case linear time" of Introduction to Algorithms, Second Edition, or on the Internet.
You can also use the randomized select algorithm. It has expected linear running time. You can find its description in chapter "9.2 Selection in expected linear time" of the same book.
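As a sketch of the randomized select idea (the book gives a recursive version; this iterative variant with a random pivot is an illustrative adaptation, not the book's exact code):

```python
import random

def randomized_select(values, i):
    """Return the element that would sit at index i (0-based) if
    values were sorted. Expected O(n) time, O(n^2) worst case."""
    arr = list(values)  # work on a copy
    lo, hi = 0, len(arr) - 1
    while lo < hi:
        # Random pivot gives the expected-linear-time guarantee.
        pivot_idx = random.randint(lo, hi)
        arr[pivot_idx], arr[hi] = arr[hi], arr[pivot_idx]
        pivot = arr[hi]
        store = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[store], arr[j] = arr[j], arr[store]
                store += 1
        arr[store], arr[hi] = arr[hi], arr[store]  # pivot in final place
        if i == store:
            return arr[store]
        elif i < store:
            hi = store - 1   # answer is in the left partition
        else:
            lo = store + 1   # answer is in the right partition
    return arr[lo]
```

The key difference from sorting is that after each partition only one side is kept, so the work shrinks geometrically in expectation.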

Can anyone tell me the best books/sites/blogs for learning data structures and algorithms at a beginner level, in C++?

The definitive book would be Introduction to Algorithms. Try to get a used copy; it's not cheap.

As for sites, the SO tag `data-structures` has some great stuff in it too. You might want to look at the top questions there.

I am trying to divide arrays recursively... I think that is what this would be called haha....

For instance, let's say the initial array contains 50 values, the highest being 97 and the lowest being 7. I want to split this array into two, dividing the values based on whether they are greater or less than the midrange of the entire set, which is 52 ((97+7)/2). Then I want to divide these two arrays using the same method, and so on, ideally repeating this process an arbitrary number of times.

```
Load values into Array1
Find midrange
For every value in Array1 {
    if value > midrange {
        assign value to ArrayHigh1
    } else {
        assign value to ArrayLow1
    }
}
Perform the same steps on ArrayHigh1 and ArrayLow1
```

Etc etc etc.

I'm having trouble figuring out how I would create the successive arrays (ArrayHigh2, 3, 4, etc.)

Also, I feel like there must be an easier way to do this, but I cannot think of one at the moment...

Thanks for the help

You seem to be working your way toward a B-tree, or an implementation of Mergesort or Quicksort. Plenty of reference implementations are available online.

Though speaking generally, you might benefit greatly from reading a book many here are familiar with.
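The recursive midrange split the question describes can be sketched roughly like this (the nested-list return value and the depth limit are illustrative choices, not part of the question):

```python
def midrange_split(values, depth):
    """Recursively split values around the midrange (min+max)/2.
    Returns a nested list; stops after `depth` levels, or when the
    split no longer separates anything."""
    if depth == 0 or len(values) < 2:
        return values
    mid = (min(values) + max(values)) / 2
    low = [v for v in values if v <= mid]
    high = [v for v in values if v > mid]
    if not low or not high:  # all values on one side: stop recursing
        return values
    return [midrange_split(low, depth - 1),
            midrange_split(high, depth - 1)]
```

Because each call creates its own `low` and `high` lists, there is no need to invent names like ArrayHigh2, ArrayHigh3, and so on; recursion handles the successive arrays for you.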

I need a programming book with questions and answers; it doesn't matter what programming language it uses in its examples, as long as it is a C-based language (preferably C# or JavaScript). What's important is that the questions are degree-level programming problems (e.g. create a function that will check for 4 consecutive numbers in a row in a 2D array)...

Basically, the questions you get in computer science degree tests, and most importantly, the answers...

I feel I miss a lot because I haven't done a degree, so I want to learn basic programming: stuff like loops, if conditions, lists, and recursion. I have known how these can be used for years, but I want complex problems with solutions, so I can force my brain to try and then read how the writer solved it.

Thanks

The important thing about questions like "create a function that will check for 4 in a row in a 2D array" is the algorithm that solves the problem, not the programming language.

So, I suggest you read Introduction to Algorithms (2nd edition) by Prof. Thomas H. Cormen et al. This book will give you great skills for solving the type of problems you mentioned.
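To make the "4 in a row" example concrete, here is one possible solution sketch (the function name and the choice to scan all four directions are mine, not from the question):

```python
def has_four_in_a_row(grid):
    """Check whether a rectangular 2D list contains four equal
    adjacent values in a row, column, or either diagonal."""
    rows, cols = len(grid), len(grid[0])
    # Right, down, down-right diagonal, down-left diagonal.
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                end_r, end_c = r + 3 * dr, c + 3 * dc
                if 0 <= end_r < rows and 0 <= end_c < cols:
                    if all(grid[r + k * dr][c + k * dc] == grid[r][c]
                           for k in range(1, 4)):
                        return True
    return False
```

The point the answer makes holds here: the interesting part is bounding the scan to valid start cells and checking each direction, and that reasoning transfers unchanged to C#, JavaScript, or any other language.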

Are there any materials I can read on the running-time and space analysis of algorithms: big-O, big-Theta, big-Omega, etc.? I need help for a Data Structures and Algorithms class I am taking.

Consider Introduction to Algorithms. It's what most computer science undergraduates have to read in order to understand runtime complexity theory.

This is a school-related question, although not exactly homework.

I'm taking an algorithms course, currently working on Chapter 15 of Cormen's Introduction to Algorithms book. I've been successful at finding plenty of online examples of most of the algorithms in the book, and I can usually find some type of Java applet or other program that provides a good visualization of how the algorithms work.

An exception to that is the Assembly-Line Scheduling in Chapter 15 (Dynamic Programming).

Does anybody know of any online resources that provide further examples or visualizations of the Assembly-Line Scheduling algorithm?

I think you'll have better luck if you search for examples/visualizations of the technique rather than the specific problem... i.e. search for Dynamic Programming.

There may be some decent tutorials on TopCoder (try searching "dynamic programming site:topcoder.com").
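For what it's worth, the assembly-line recurrence itself is short enough to sketch directly. This is my condensed take on the Chapter 15 formulation, keeping only the two fastest times so far rather than full tables (the parameter names follow the book's e, a, t, x notation; the sample data in the test is the book's worked example, whose answer is 38):

```python
def fastest_way(a, t, e, x):
    """Assembly-line scheduling (CLRS Ch. 15, dynamic programming).
    a[i][j]: processing time at station j on line i
    t[i][j]: time to transfer away from line i after station j
    e[i], x[i]: entry and exit times for line i
    Returns the minimum total time through the factory."""
    n = len(a[0])
    # Fastest times to finish station 0 on each line.
    f0 = e[0] + a[0][0]
    f1 = e[1] + a[1][0]
    for j in range(1, n):
        # Either stay on the same line, or transfer from the other one.
        new_f0 = a[0][j] + min(f0, f1 + t[1][j - 1])
        new_f1 = a[1][j] + min(f1, f0 + t[0][j - 1])
        f0, f1 = new_f0, new_f1
    return min(f0 + x[0], f1 + x[1])
```

Each station's best time is computed once from the previous station's two values, which is exactly the overlapping-subproblem reuse that makes this dynamic programming rather than brute-force enumeration of all 2^n paths.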

What are the steps to follow to compute the worst-case running-time function of an algorithm? Could someone please guide me through that? I think these steps include some mathematical proofs. If I'm correct, which areas of mathematics should I be strong in? (I guess mathematical induction, functions, and sets are enough.)

Thanks

To learn about computational complexity you need to know calculus, combinatorics, set theory, and summations, amongst other maths topics.

A good book, though fairly theoretical, is Introduction to Algorithms by Cormen et al.

With Java, how can I store around a billion key-value pairs in a file, with the possibility of dynamically updating and querying the values whenever necessary?

If for some reason a database is out of the question, then you need to answer the following question about your problem:

What is the mix of the following operations?

• Insert