User confusion of prototype and finished system:
Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features, which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require that all proposed features be included in the final system, this can lead to feature creep.
Developer attachment to prototype:
Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)
Excessive development time of the prototype:
A key property of prototyping is that it is supposed to be done quickly. If the developers lose sight of this, they may well try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing it. Users can also become stuck in debates over details of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping:
The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to jump straight into prototyping without retraining their workers as much as they should.
A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training in the use of a prototyping technique, there is an often overlooked need to develop corporate and project-specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result.
Best projects for prototyping
It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users.
It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.
Systems with little user interaction, such as batch processing or systems that mostly perform calculations, benefit little from prototyping. Sometimes the coding needed to perform the system functions is too intensive, and the potential gains that prototyping could provide are too small.
Prototyping is especially good for designing good human-computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human-computer interface design."
When to use
Because prototypes inherently increase the quality and amount of communication between the developer/analyst and the end user, their use has become widespread. Prototyping should be employed only when users are able to participate actively in the project. Both project managers and users involved in the project should have some prototyping experience or, at a minimum, be trained in the purposes and use of prototyping.
If experimentation and learning are needed before there can be full commitment to a project, prototyping can be used successfully. Prototyping runs the spectrum from advanced computer modelling to sketching on the back of a napkin. Something in between usually works best; for example, one financial services effort used paper-based prototypes in a facilitated group design session.
This approach lowers the cost and time involved in prototyping, allows for more iteration and gives project managers the chance to get immediate user feedback on refinements to the design. It effectively eliminates many of the disadvantages of prototyping because paper prototypes are inexpensive to create, managers are less likely to become attached to their work, users do not develop performance expectations, and best of all, paper prototypes do not depend on the system being available.
[GER00] Gerri Akers (2000), What is Prototyping, http://www.umsl.edu/~sauterv/analysis/prototyping/proto.html
[BER09] Bernstein (2009), Importance of Software Prototyping, http://www.softpanorama.org/SE/software_prototyping.shtml
[ONE10] One Stop Testing (2010), Advantages of Prototyping, Prototype Model, http://www.onestoptesting.com/sdlc-models/prototype-model/advantages.asp
Rapid Application Development (RAD)
Rapid Application Development was a software development methodology introduced in the 1990s and presented in book form by information technology guru James Martin. As a reaction to the then well-established methodologies, which emphasised careful and prolonged requirements gathering before the actual software development began, Rapid Application Development encouraged the creation of quick-and-dirty, prototype-style software which fulfilled most of the user's requirements but not necessarily all. Development would take place in a series of short cycles, called time boxes, each of which would deepen the functionality of the application a little more. Features to be implemented in each time box were agreed in advance, and this game plan was rigidly adhered to. The strong emphasis on this point came from unhappy experience with other development practices in which new requirements tended to be added as the project evolved, causing massive chaos and disrupting carefully prepared plans and development schedules. The Rapid Application Development methodology advocated that development be undertaken by small, experienced teams using CASE (Computer Aided Software Engineering) tools to enhance their productivity.
Link: http://www.oware.com/images/RAD.gif
Rapid Application Development advocates believed that the development of rapid prototypes was a good way to flush out customer requirements by gaining immediate feedback from the client. One of the problems that had been identified with other software development practices was that clients often didn’t really know what they wanted or didn’t want until they saw a practical implementation. It was through the process of customers commenting on an evolving application that new requirements were teased out. Usually, this would be seen as an unwelcome development which could play havoc with agreed schedules. With the Rapid Application Development methodology, however, it became a standard and accepted part of the development process.
With its emphasis on small teams and short development cycles, it is not surprising that, in Rapid Application Development doctrine, code reuse was also prized as a means of helping get the work done. This caused early Rapid Application Development adopters to embrace object-oriented languages and practices before they had really penetrated into the mainstream.
Today, Rapid Application Development as a formal methodology is no longer widely practiced. Some would argue, however, that it is a case of revolutionaries evolving into statesmen, and once boldly innovative thinking becoming the new orthodoxy. In its embrace of the object-oriented paradigm, and the use of software engineering tools to enhance programmer productivity, it was certainly ahead of its time. And in its emphasis on small teams, short, iterative development cycles and an avoidance of prolonged requirements gathering up front, it shares many similarities with the extreme programming or agile development methodologies which still remain in vogue.
As well as denoting a formal software development methodology, the phrase rapid application development became something of a marketing buzzword and was casually applied to a variety of software development products. Although hardly hardcore implementations of the methodology's ideas, these products did incorporate some of its key concepts. For example, to facilitate rapid development, strong emphasis was placed on the idea of software re-use.
The notion of software components began to be nurtured. Supporters believed that complex software systems could be constructed largely by stitching pre-built software components together. In this grandiose vision, software components would be re-used from project to project within a company’s development team or even brought in from outside. In fact, it was hoped that a healthy market for third-party software components would develop, allowing even small companies to thrive by authoring niche software components designed to be used by others.
Although reality never quite fulfilled the aspirations of some of the visionaries, the ActiveX control and JavaBeans software component standards did acquire some degree of traction and a market for third-party code components written to these standards did emerge, even if it was never all that vigorous.
Another key element of the Rapid Application Development-like approach was visual programming. According to this concept, it should be possible to construct software with little or no knowledge of programming.
The ideal was that programs could be built by non-programmers hooking components together in some kind of workshop-like development application. Again, this ideal was never quite fulfilled, but visual development practices did become a standard part of the typical programmer’s toolkit and are now routinely used to develop some parts of software applications, while more traditional coding accounts for the rest. Graphical interfaces, for example, are now constructed visually more often than not, with programmers or user interface designers modifying the desired look of the user interface from within a visual editor and the Rapid Application Development tool then generating the appropriate code to create that look automatically. The automatically-generated code then forms a skeleton framework for the application as a whole which the software developers then build upon and edit by hand.
In common use today, the phrase Rapid Application Development has lost most of its original meaning, and even in the ranks of IT professionals, many would be unaware that it once referred to a formal software development methodology. Almost any software tool which is used in the creation of other software will be described in its marketing literature as something that facilitates Rapid Application Development. When used informally in this sense, the phrase Rapid Application Development usually indicates that the tool in question takes some of the burden from the programmer’s back by automatically generating part of the program code.
Today the software tools used by the majority of programmers to develop new software are called Integrated Development Environments (IDEs). Almost all of them include some Rapid Application Development features. When creating a new program, for example, the software engineer can indicate what kind of application it should be, such as a console application, a program with a graphical user interface or one that is database-driven. The IDE will then generate a base template of code which the programmer takes as a starting point for his or her own work.
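As an illustrative sketch only (the file below is hypothetical and not tied to any particular IDE), the generated base template for a simple console application might look something like this in Python:

```python
import argparse

# Hypothetical example of the kind of skeleton code an IDE might generate
# for a new console application. The programmer fills in the real logic.

def build_parser() -> argparse.ArgumentParser:
    # The template typically pre-wires command-line argument handling.
    parser = argparse.ArgumentParser(description="My console application")
    parser.add_argument("--name", default="world", help="who to greet")
    return parser

def main(argv=None) -> int:
    args = build_parser().parse_args(argv)
    # TODO: application logic goes here (written by the developer).
    print(f"Hello, {args.name}!")
    return 0

if __name__ == "__main__":
    main()
```

The developer then replaces the placeholder body of `main` with the application's actual behaviour, while the argument-handling scaffolding generated up front remains largely untouched.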
Usage of RAD
The Rapid Application Development methodology was developed to respond to the need to deliver systems very fast. The RAD approach is not appropriate to all projects: an air traffic control system based on RAD would not instill much confidence. Project scope, size and circumstances all determine the success of a RAD approach. The following categories indicate suitability for a RAD approach:
[GAN10] Gantthead (2010), Rapid Application Development Process, http://www.gantthead.com/process/processMain.cfm?ID=2-19516-2
[BPC10] BPC, Articles and Glossary, Rapid Application Development, http://www.bestpricecomputers.co.uk/glossary/rapid-application-development.htm
Dynamic Systems Development Method
DSDM is a framework originally based on Rapid Application Development (RAD). It relies on continuous user involvement in an iterative and incremental development approach which is responsive to changing requirements, in order to develop a system that meets the business needs on time and on budget. It is one of a number of 'Agile' methods for developing software and forms part of the Agile Alliance.
DSDM was developed in the United Kingdom in the 1990s by a consortium of vendors and experts in the field of Information System (IS) development, the DSDM Consortium, combining their best-practice experiences. The DSDM Consortium is a non-profit and vendor-independent organisation which owns and administers the framework. The first version was completed in January 1995 and published in February 1995. The current version in use at this point in time (April 2006) is Version 4.2: Framework for Business Centred Development, released in May 2003. DSDM Public Version 4.2 (www.dsdm.org) was made available for individuals to view and use in July 2006. However, anyone reselling DSDM must still be a member of the not-for-profit Consortium.
As an extension of rapid application development, DSDM focuses on Information System projects that are characterized by tight timescales and budgets. DSDM addresses the problems that frequently occur in the development of Information Systems with regard to going over time and budget, and other common reasons for project failure such as lack of user involvement and of top-management commitment.
DSDM consists of three phases: the pre-project phase, the project life-cycle phase, and the post-project phase. The project life-cycle phase is subdivided into five stages: feasibility study, business study, functional model iteration, design and build iteration, and implementation.
DSDM recognizes that projects are limited by time and resources, and plans accordingly to meet the business needs. In order to achieve these goals, DSDM encourages the use of RAD, with the consequent danger that too many corners are cut. DSDM applies certain principles, roles, and techniques throughout the entire project. This ensures that the project implements the desired requirements in the desired fashion, as decided in the early phases of the project.
Link: http://assets.devx.com/articlefigs/17425.jpg
Critical Success Factors of DSDM
Within DSDM a number of factors are identified as being of great importance to ensure successful projects. First there is the acceptance of DSDM by senior management and other employees. This ensures that the different actors in the project are motivated from the start and remain involved throughout the project. The second factor follows directly from this: the commitment of management to ensure end-user involvement. The prototyping approach requires strong and dedicated involvement by end users to test and judge the functional prototypes. Then there is the project team. This team has to be composed of skilled members who form a stable unit.
An important issue is the empowerment of the project team. This means that the team (or one or more of its members) has to possess the power and the ability to make important decisions regarding the project without having to write formal proposals to higher management, which can be very time-consuming. In order for the project team to be able to run a successful project, they also need the right technology to conduct the project: a development environment, project management tools, and so on. Finally, DSDM states that a supportive relationship between customer and vendor is required. This goes both for projects that are realized internally within companies and for those realized by outside contractors.
DSDM Model Limitations
- It is a relatively new model that is not yet in common use, so it can be difficult to understand.
DSDM Model Advantages
- Active user participation throughout the life of the project and iterative nature of development improves quality of the product.
- DSDM ensures rapid deliveries.
- Both of the above factors result in reduced project costs.
[FRE10] Freetutes (2010), Dynamic System Development Method (DSDM), http://www.freetutes.com/systemanalysis/sa2-dynamic-system-development-method.html
Waterfall model
Various software development approaches have been defined and designed for use during the software development process; these are also referred to as "Software Development Process Models". Each process model follows a particular life cycle in order to ensure success in the process of software development.
Link: http://www.oddtodd.com/mw/clip_image003.gif
One such approach used in software development is "The Waterfall Model". The Waterfall approach was the first process model to be introduced and widely followed in software engineering to ensure the success of a project. In the Waterfall approach, the whole process of software development is divided into separate phases: Requirement Specification, Software Design, Implementation and Testing, and Maintenance. These phases cascade into each other, so that the second phase starts only when the defined set of goals for the first phase has been achieved and signed off, hence the name "Waterfall Model". All the methods and processes undertaken in the Waterfall Model are highly visible.
The stages of "The Waterfall Model" are:
Requirement Analysis & Definition:
All possible requirements of the system to be developed are captured in this phase. Requirements are the set of functionalities and constraints that the end-user (who will be using the system) expects from the system. The requirements are gathered from the end-user by consultation; these requirements are analyzed for their validity, and the possibility of incorporating them in the system to be developed is also studied. Finally, a Requirement Specification document is created, which serves as a guideline for the next phase of the model.
System & Software Design:
Before actual coding starts, it is highly important to understand what is to be created and what it should look like. The requirement specifications from the first phase are studied in this phase and the system design is prepared. System design helps in specifying hardware and system requirements, and also helps in defining the overall system architecture. The system design specifications serve as input for the next phase of the model.
Implementation & Unit Testing:
On receiving the system design documents, the work is divided into modules/units and actual coding starts. The system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality; this is referred to as Unit Testing. Unit testing mainly verifies that the modules/units meet their specifications.
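To make the idea concrete, here is a small illustrative sketch in Python (the function and test names are invented for the example): a single "unit" together with tests that check it against its specification in isolation.

```python
import unittest

# A hypothetical "unit": one small function developed as part of a module.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Unit testing verifies that the unit meets its specification in isolation,
# before it is integrated with other units.
class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 200.0 should give 150.0.
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Out-of-range inputs must be rejected, per the specification.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["unit_test_demo"], exit=False)
```

Each unit carries its own tests like these, so defects are caught before the units are handed on to the Integration phase.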
Integration & System Testing:
As specified above, the system is first divided into units which are developed and tested for their functionality. These units are integrated into a complete system during the Integration phase and tested to check that all modules/units coordinate with each other and that the system as a whole behaves as per the specifications. After the software has been successfully tested, it is delivered to the customer.
Operations & Maintenance:
This phase of the Waterfall Model is a virtually never-ending (very long) phase. Generally, problems with the developed system which were not found during the development life cycle come up after its practical use starts, so issues related to the system are solved after its deployment. Not all problems come to light immediately; they arise from time to time and need to be solved, hence this process is referred to as Maintenance.
Limitations of the Waterfall Life Cycle Model
The waterfall model assumes that the requirements of a system can be frozen (i.e. baselined) before the design begins. This is possible for systems designed to automate an existing manual system. But for an entirely new system, determining the requirements is difficult, as the users themselves may not know them. Therefore, having unchanging (or only slightly changing) requirements is unrealistic for such projects.
Freezing the requirements usually requires choosing the hardware (since it forms a part of the requirement specification). A large project might take a few years to complete. If the hardware is selected early, then due to the speed at which hardware technology is changing, it is quite likely that the final software will employ a hardware technology that is on the verge of becoming obsolete. This is clearly not desirable for such expensive software.
The waterfall model stipulates that the requirements should be completely specified before the rest of the development can proceed. In some situations it might be desirable to first develop a part of the system completely, and then later enhance the system in phases. This is often done for software products that are developed not for a particular client (where the client plays an important role in requirement specification) but for general marketing, in which case the requirements are likely to be determined largely by the developers.
Advantages
The waterfall model, as described above, offers numerous advantages for software developers. First, the staged development cycle enforces discipline: every phase has a defined start and end point, and progress can be conclusively identified (through the use of milestones) by both vendor and client. The emphasis on requirements and design before writing a single line of code ensures minimal wastage of time and effort and reduces the risk of schedule slippage, or of customer expectations not being met.
Getting the requirements and design out of the way first also improves quality; it's much easier to catch and correct possible flaws at the design stage than at the testing stage, after all the components have been integrated and tracking down specific errors is more complex. Finally, because the first two phases end in the production of a formal specification, the waterfall model can aid efficient knowledge transfer when team members are dispersed in different locations.
[FRE10] Freetutes (2010), Waterfall Software Development Life Cycle Model, http://www.freetutes.com/systemanalysis/sa2-waterfall-software-life-cycle.html
Spiral model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Link: http://newton.cs.concordia.ca/~paquet/wiki/index.php/Image:Spiral_model.gif
Spiral Model Description
The development spiral consists of four quadrants, as shown in the figure above:
Quadrant 1: Determine objectives, alternatives, and constraints.
Quadrant 2: Evaluate alternatives, identify, and resolve risks.
Quadrant 3: Develop and verify next-level product.
Quadrant 4: Plan next phases.
Although the spiral, as depicted, is oriented toward software development, the concept is equally applicable to systems, hardware, and training.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risk and alternate solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
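The geometric reading above can be made concrete with a small illustrative sketch (all numbers below are invented for the example): progress accumulates as angle, one full turn per completed spiral, while cumulative cost grows as the radius.

```python
# Illustrative only: the costs below are hypothetical, chosen to show how
# the spiral model maps progress to angle and cumulative cost to radius.

costs_per_spiral = [10, 25, 40]  # hypothetical cost of each spiral, arbitrary units

for i, cost in enumerate(costs_per_spiral, start=1):
    angle = i * 360                      # progress: one full turn per completed spiral
    radius = sum(costs_per_spiral[:i])   # radius: cumulative cost so far
    print(f"Spiral {i}: progress = {angle} degrees, cumulative cost = {radius}")
```

The point of the picture is that cost only ever grows as the project winds outward, while each full turn of the spiral represents another complete pass through the four phases.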
Applications
For a typical shrink-wrap application, the spiral model might mean that you have a rough cut of the user elements (without the polished graphics) as an operable application, add features in phases, and, at some point, add the final graphics. The spiral model is used most often in large projects; for smaller projects, the concept of agile software development is becoming a viable alternative. The US military adopted the spiral model for its Future Combat Systems (FCS) program. The FCS project was cancelled in May 2009, after six years (2003-2009); it had a two-year iteration (spiral) and should have resulted in three consecutive prototypes (one prototype per spiral, every two years).
The spiral model may thus suit small (up to $3M) software applications but not complicated ($3B) distributed, interoperable systems of systems. It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to provide high loading and stress tolerance. For example, Spiral Architecture Driven Development is a spiral-based SDLC which shows a possible way to reduce the risk of an ineffective architecture with the help of the spiral model in conjunction with best practices from other models.
Advantages
Estimates [i.e. schedule, budget etc.] become more realistic as work progresses, because important issues are discovered earlier. Software engineers [who can get restless with protracted design processes] can get their hands in and start working on a project earlier. It is more able to cope with the changes that software development generally entails.
Limitation / Disadvantages
- Risk of not meeting the budget or schedule.
- The model is applied differently for each application.
- Highly customized, limiting re-usability.
[FAI09] Faisal Sikder (2009), Software Development Life Cycle (SDLC) Spiral Model, http://faisalsikder.wordpress.com/2009/12/18/software-development-life-cyclesdlc-spiral-model/
[BOE86] Boehm B. (1986), "A Spiral Model of Software Development and Enhancement", ACM SIGSOFT Software Engineering Notes, ACM, 11(4):14-24