When attempting to scale a software system, that is, to make it handle a higher load or more users, people usually adopt one of two strategies: vertical scaling or horizontal scaling. Vertical scaling is the practice of buying newer, faster computers on which to run the software, while horizontal scaling is the practice of simply buying more computers across which to spread it. Of the two, horizontal scaling is often seen as having the greater scaling potential. Unfortunately, this is not always the case, and that assumption can lead to naive attempts at solving genuinely hard problems.
The reasoning usually goes like this: when scaling up (vertically), the system is bound by the maximum power of current hardware. Moreover, computers become much more expensive and much more specialized near the top of the computing-power scale. The more powerful your system, the more each upgrade costs and the closer you get to a theoretical ceiling. Scaling out, on the other hand, has no such bound in theory: one can keep adding machines to a cluster, and as long as the software is sufficiently parallel in design, its capacity should grow smoothly. On top of that, low-end servers are cheaper to buy and cheaper to maintain than high-end specialized equipment, so it follows that one should get more bang for one's buck scaling horizontally rather than vertically.
Now, this way of thinking is often valid, and horizontal scaling can indeed be more beneficial than vertical scaling. However, the problem isn't always so simple. Imagine you have a cluster of hundreds of low-end servers, together processing multiple teraflops of data. Now imagine that cluster is glued together by a single gigabit Ethernet LAN. What's going to happen? Not every programmer realizes this, but the throughput of a typical switched LAN is bound by the capacity of the root switch. No matter how you wire it, a modern LAN will actively cut off redundant pathways to prevent loops, creating bottlenecks. An application running on a gigabit LAN will, in the worst case, be bound to a gigabit of capacity globally across the network. For any sufficiently large, horizontally scaled system, a standard LAN becomes a major bottleneck that no amount of parallelism can overcome.
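To make the bottleneck concrete, here is a minimal back-of-envelope sketch (the numbers and the function are illustrative, not taken from any real cluster): if, in the worst case, every node's traffic must funnel through a single gigabit root switch, the per-node share of network bandwidth shrinks as the cluster grows, no matter how many machines you add.

```python
GIGABIT = 1_000_000_000  # root-switch capacity in bits per second (assumed)

def per_node_bandwidth_bps(nodes: int, root_capacity_bps: int = GIGABIT) -> float:
    """Worst case: all traffic crosses the root switch, so the nodes
    share its capacity roughly equally."""
    return root_capacity_bps / nodes

for nodes in (10, 100, 1000):
    mbps = per_node_bandwidth_bps(nodes) / 1_000_000
    print(f"{nodes:>5} nodes -> ~{mbps:.1f} Mbit/s each")
```

At a hundred nodes each machine is already down to roughly 10 Mbit/s of cross-cluster bandwidth, which is why adding more low-end servers past a certain point buys you nothing.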
Now, there are ways around this. You can buy specialized, higher-capacity switches. You can add routers to shape traffic between parts of the cluster. You can even move to something like InfiniBand, which offers high-bandwidth point-to-point links between cluster nodes. These and many other solutions exist for overcoming network bottlenecks in clustered applications. However, as soon as you adopt one, you are scaling your network up and losing some of the perceived advantages of a horizontally scaled system. In many cases it might just be cheaper and simpler to buy that more expensive computer.