Future Developments in ICT


Long gone are the days of yearly 'massive' performance or feature increases in the consumer space. Instead, we have settled for far smaller generational improvements: slightly better graphics in games, 10 gigabytes more storage space on your iPod, five-second faster boot times. Technological development is slowing, and major breakthroughs are needed to enable larger performance leaps. What we have seen of late is evolutionary rather than revolutionary, showcased no better than by Apple's recent release of the iPad Mini, a smaller iPad based on the internals of the iPhone 4S. In this essay, I will avoid frivolous developments as seen by the consumer and instead focus on the hardware that powers such devices. I aim to convey and explain the incredible developments that lie on the horizon, but to do this, I first need to delve briefly into the past…

Back in the late 1980s, computational power could double within just a few years. An excellent example is Intel's 486 microprocessor, which doubled the performance of the preceding 386 in every respect, after an extremely short development time. The increases in instructions per cycle (IPC) were massive, owed to a period when optimisations were plentiful and shortcomings in architecture and design were easy to spot. Slowly, as Intel filled its roadmap with faster and faster processors, the speed increases diminished.

Enter the age of the Pentium 4: it was here that Intel recognised that higher clock speeds could yield sizeable performance increases, and so it kept pushing them up, eventually reaching 3.8 GHz. The downside to these higher clocks was a dramatic loss of efficiency. Power consumption and heat soared to record levels during this period, while the performance returns again diminished. The realisation dawned on the company that it could not keep scaling its CPUs up like this, and its hopes of an easy "10 GHz by 2011" were dashed. Moore's law, which observes that the number of transistors on integrated circuits doubles roughly every two years, became extremely hard to uphold.
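Moore's law as stated above is easy to put into numbers. A back-of-envelope sketch (the starting count and time span here are illustrative, not figures from the essay):

```python
# Moore's law: transistor counts double roughly every two years.
def transistors(initial, years, doubling_period=2):
    """Project a transistor count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# A hypothetical chip with 1 million transistors grows 32-fold in a decade
# (five doublings), illustrating why the law was so hard to keep upholding.
print(transistors(1_000_000, 10))
```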

This was addressed two years later via a parallel approach. Rather than have a single core running at extremely high frequencies, the load could be spread across multiple power-efficient cores. It's easy to visualise this if we imagine a single, fast ant trying to build a structure versus an army of slightly slower (but still fast) ants: with more workers to throw at the problem, the job finishes sooner. This parallelisation was made possible through smaller manufacturing processes (moving from 1-micrometre lithography to 65-nanometre); more on this later. This parallel logic has carried us through to 2013, but a brick wall is starting to appear. Software must make efficient use of the hardware available to it in order to extract maximum performance; it must evolve with hardware, if not slightly faster!
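The "army of ants" idea can be sketched in a few lines: one large task is cut into slices and handed to a pool of workers. This is a minimal illustration, not a description of any particular product; the names `chunk_sum` and `parallel_sum` are my own.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(numbers):
    """Work done by one 'ant': sum its own slice of the data."""
    return sum(numbers)

def parallel_sum(numbers, workers=4):
    """Split the list into roughly equal slices and sum them concurrently."""
    size = len(numbers) // workers or 1
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker sums one chunk; the partial results are combined at the end.
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum(list(range(1_000_001))))  # same answer as sum(range(1_000_001))
```

The answer is identical to the single-core version; only the division of labour changes, which is exactly why this approach needs software written to exploit it.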

The problem that arises is: can multiple cores actually complete the same task faster than one? How can you split a task up so as to make the most efficient use of those plentiful resources? The more cores you have in a system, the more threads you need to keep them busy, and doing so is not easy. A thread may have to acquire a lock, which can mean waiting until another thread releases it. That leads to serious lock contention, which can result in bad scaling, even to the point where more cores (and threads) can lead ...