Thursday, November 24, 2011

The future of computing is parallelism

Predicting the future is always a tricky thing to do, and maybe this is especially the case when you try to predict the future of computing, as this is a field in which a lot of people have an opinion. However, looking at where we currently stand and what the limits of physics are (as currently known), we can make some modest predictions.

To put it as a statement: "the future of computing is parallelism".

If we look at the current speed of processors (per core), we see that the clock frequency is leveling out. The reason is that if you increase the frequency you get more leakage in the transistors on your chip, and that leakage reaches the outside world as heat. So if we were able to run chips at a higher frequency, we would see the heat increase to a level that cannot be cooled in a "normal" way and at normal prices.

What chip manufacturers are doing to cope with this issue is building multi-core processors. Having multiple cores running at an acceptable frequency, providing enough computational power to the system while keeping the heat within an acceptable range, is the way forward. We can already see good examples of this in the AMD Llano and the NVIDIA Fermi (image below) many-core processors.


The new breed of processors will be many-core processors holding a large number of cores. This will require new ways of developing software and programs. With many-core processors you will have the option (and the need) to run your programs over multiple cores to make full use of your hardware. To be able to do so, one will have to consider parallelism when developing code, as sketched in the example below. Developing parallel processes is a different way of thinking which is currently not adopted by the majority of developers, simply because they can do without it. However, as we are getting more and more data (big data), as processes and computations are getting more complex, and as users are not willing to wait very long, developers will have to think about parallel programming very soon.
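To make the idea a bit more concrete, here is a minimal sketch (not from the original post, with illustrative names) of splitting a CPU-bound computation over the available cores using Python's standard multiprocessing module:

from multiprocessing import Pool, cpu_count

def partial_sum(chunk):
    # CPU-bound work on one slice of the data; runs in its own process.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))
    workers = cpu_count()

    # Split the input into roughly one chunk per core.
    size = len(data) // workers + 1
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    # Each chunk is processed in parallel; the partial results are combined afterwards.
    with Pool(processes=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(total)

The point is not the particular workload but the way of thinking: the problem has to be cut into independent pieces before the extra cores can do anything useful with it.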

There are some languages which are specially designed to cope with many-core processors; one example is CUDA, which is developed by NVIDIA. You do not, however, need a special language: Python is very well able to cope with it, and so are Java and C, for example. The issue is that developers have to start thinking about it and need to get familiar with it (in my opinion). A rough illustration of that point follows below.
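As a rough illustration, assuming nothing beyond the Python standard library, the same data-parallel pattern can be written with concurrent.futures; the Monte Carlo estimate of pi here is just a stand-in workload:

import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(samples):
    # Count random points that fall inside the unit quarter circle.
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total_samples = 4_000_000
    workers = 4
    per_worker = total_samples // workers

    # Each worker runs independently on its own core.
    with ProcessPoolExecutor(max_workers=workers) as executor:
        hits = sum(executor.map(count_hits, [per_worker] * workers))

    print("pi is roughly", 4.0 * hits / (per_worker * workers))

No special language is involved; what changes is that the work is expressed as independent tasks that can be mapped onto however many cores are available.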

So what is the future of computing? Parallel computing and many-core processors. Thom Dunning explains it in more detail in his "future of high performance computing" lecture for the National Center for Supercomputing Applications, which you can watch below:
