1988 saw the release of DOS version 4.0 for the PC. This version of DOS used the full hardware capacity of the time; even the fastest central processing unit, the Intel 80386, was pushed to its limit. The 80386 had been released less than a year before software was able to use it to its full capacity. This raises the question: if faster CPUs had been available at the time, would software have been developed to use their full capacity? The answer is probably yes. The next CPU to be released, the 80486, would in all likelihood have been fully utilised as well had it been available then. Software was being developed so quickly at this point that designers would almost certainly have used the capacity of a more powerful CPU. The hardware was beginning to hold software design back.
In 1990 a new CPU, four times the speed of the previous best, the 80386, was released under the name 80486. There had been a three-year gap since the previous CPU upgrade, and this pace of development was affecting software development. Software developers could do nothing about the speed of hardware development, so while they waited they could only improve their current software by fixing bugs, adding new utilities and getting the very best out of the available hardware, because their software already used the full capacity of the CPU. Any new operating system would face the same hardware constraints; new, faster and smaller hardware was needed before any radically new software could be created.
History has shown that once a new hardware device is developed, programmers are then able to create new software that benefits from it. Evidence of this is the release of Microsoft Windows 3.0 in 1990, one of the first PC operating systems to use a graphical user interface (GUI), a feature that would benefit greatly from the development of a new CPU. Windows 3.0 was released in the same year as the 80486 and was planned to be used alongside it. Then in 1992, two years after the release of the 80486, IBM's OS/2 2.0 was released, software that fully utilised the 80486's capabilities. Within a very short time the software already required new, faster hardware, though this was forthcoming. The speed of software development was shown again when Microsoft Windows 98 was introduced to run on the previous year's latest CPU, the Intel Pentium II, which proved largely inadequate to the task. This shows that once new hardware was developed, new software could then be developed; in other words, the hardware of the early 90s was holding back software development.
More and more, hardware was being built for the software; less than ten years earlier the reverse had been true. Software was becoming as big a selling point as the hardware, with new, more efficient and user-friendly operating systems being developed, and especially with the introduction of computer games. It was in the early 90s that PC games really began to take off, with the introduction of id Software's Doom. This was a new three-dimensional computer game with huge buyer appeal, and it required an 80486 CPU as well as the latest graphics card of the time.
Throughout the 90s, software became faster and faster at absorbing the facilities of the latest CPU or graphics card. Taking the games example, there is not currently a graphics card that is not used to full capacity. This shows that software development is being limited by hardware that fails to reach the performance our software could be designed for. Software development and design started off slowly, but its potential was quickly recognised and inspired people to think far more grandly in their designs. Virtual reality is a case in point: it is ready to make an enormous impact on the world. The software for its introduction is available, but the hardware to make it run correctly and successfully is not.
The reason hardware has not kept up with the requirements of software development is that hardware is so much more costly to develop. Software development really only needs man-power and the hardware already available, whereas hardware development requires both man and machine power, and its research and testing are also more expensive. The miniaturisation of hardware design to create faster response times can, however, be partly attributed to software development, because some hardware design depends on very specialised software applications. This shows that the relationship is two-sided: the speeds of hardware and software development are related to each other, so as one slows down the other is also affected.
Although software development is being held back by constraints in hardware development, a new trend is coming to light. It is now commonly expected in the computer industry that CPU power is increasing at an exponential rate. Whereas in 1987 it took three years for the next generation of CPU to be released, new, faster CPUs are now released a year or less apart. Computer experts say that if this trend continues, then by around the year 2050 software will no longer be able to be developed quickly enough to utilise the complete capacity of the new hardware produced at that time. For now, however, within the last twenty years, and especially the last ten, software has been held back by limitations in PC hardware.