A key concept in computer science is *computational complexity*: how the time taken to run an algorithm grows with the size of its input. Sorting a 100-element array is obviously faster than sorting a 1,000,000-element array, but *how much* faster? This question becomes important when you’re working with large amounts of data, as solutions that work for small data sets often take far too long when run on large ones.

A simple example is searching an array for its minimum element. To do this, you have to look at every element in the array and compare it to the smallest value you’ve seen so far. It takes *n* − 1 comparisons to do this, where *n* is the length of the array. There is a little extra work involved in keeping track of the current minimum, but this is small compared to the comparisons. As *n* gets large, the asymptotic running time of this algorithm is proportional to *n*, i.e. O(*n*).
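As a sketch, the linear scan described above might look like this in Python (the function name is illustrative, not from the original):

```python
def find_minimum(values):
    """Scan the array once, comparing each element to the best seen so far."""
    minimum = values[0]
    for v in values[1:]:  # n - 1 comparisons for an array of length n
        if v < minimum:
            minimum = v
    return minimum

print(find_minimum([7, 3, 9, 1, 4]))  # 1
```

Doubling the length of the input doubles the number of comparisons, which is exactly what O(*n*) behaviour means.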

Some algorithms have running times that explode as the input grows. A classic example is the brute-force solution to the famous travelling salesman problem, in which a salesman wants to find the shortest route through *n* towns before returning to his starting point. Solving this by searching all possible routes takes O(*n*!) time. As *n* increases, the running time balloons rapidly, and the approach becomes impractical even for modest numbers of towns.
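A minimal sketch of the brute-force approach, assuming the towns are given as an *n* × *n* distance matrix (the function and variable names are mine, for illustration):

```python
from itertools import permutations
import math

def shortest_tour(dist):
    """Brute-force travelling salesman: fix town 0 as the start, then try
    every ordering of the remaining towns -- (n-1)! routes, i.e. O(n!) work."""
    n = len(dist)
    best_route, best_length = None, math.inf
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)  # return to the starting town
        length = sum(dist[route[i]][route[i + 1]] for i in range(n))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length

# A toy 4-town example with symmetric distances.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(shortest_tour(dist))  # shortest tour has length 18
```

With 4 towns there are only 6 routes to check, but at 20 towns there are already more than 10^17, which is why the brute force becomes hopeless so quickly.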

In general, an algorithm that runs in polynomial time, i.e. O(*n*^*k*) for some constant *k*, is considered *fast* in computer science. In practice the constants involved may still make the running time impractical, but at least theoretically the running time does not explode as the size of the input increases.
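To see the gap between polynomial and factorial growth concretely, here is a small comparison (cubic time, *n*³, chosen arbitrarily as a representative polynomial):

```python
import math

# Compare polynomial growth (n^3) with factorial growth (n!).
for n in [5, 10, 15, 20]:
    print(f"n = {n:2d}:  n^3 = {n**3:>6,}   n! = {math.factorial(n):>22,}")
```

By *n* = 20, *n*³ is a mere 8,000 while *n*! exceeds 2 × 10^18, which is the difference between "finishes instantly" and "never finishes".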
