The problem domain concerns modelling a preliminary interface which allows an operator to control a ‘Mars Explorer’ robot. The robot will be used to explore caves which have been discovered on Mars. It will be controlled from Earth using a display screen and a conventional keyboard, with information about what lies ahead presented on the screen for the operator. The ‘Mars Explorer’ can be instructed to move up, down, left or right, and an emergency button prevents forward movement of the robot. The instructions transmitted to the operator indicate the directions in which a clear path lies; a separate signal indicates when there is no clear path, in which case the robot dictates the safest course of action.
Requirements
It is important at this stage that we realise exactly what outcome we expect from the system.
Ideally the user of this system requires the following:
- A competent interface which is simple to understand and use.
- A system which allows the full functionality of the core tasks to be performed, as described in the ‘Design Brief’.
- A safe system that is not hazardous.
- A system that is error tolerant and allows for further development.
The prototype interface needs to accommodate all these user requirements. If the final system does not fulfil the requirements of the user, then its measure of success will be very poor. The requirements may change throughout the design process as a result of the characteristics of different users. It is therefore important that we understand all the considerations involved in designing an effective interface for the human user.
Performance Requirements
‘User performance’ requirements will also have a vital impact upon the success of our ‘Mars Robot’ user interface. As a measure of whether the interface is effective, we need to keep a record of the user’s performance. For example:
- Ease of learning
- Training time – how long it takes for the user to become accustomed to the functionality of the system
- Operation time
- User accuracy
In the case of the ‘Mars Explorer’, it will be important for the user to be able to measure their performance. This will reveal weaknesses and strengths, allowing for further developments in terms of training and allowing designers to adjust the system to suit the user’s requirements even further. To allow for this, the following measurements will be extracted:
- The average response time over 30 forward movements – before and then after training has been provided.
- The accuracy in terms of the number of collisions – before and after training.
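These two measurements could be captured with a small harness along the following lines. This is only a sketch: the `get_response_time` callback is hypothetical and would, in a real session, wrap the keyboard/robot control loop and report how long each forward movement took and whether it ended in a collision.

```python
def measure_session(get_response_time, n_moves=30):
    """Record response times and collisions over n_moves forward movements.

    get_response_time is assumed to return (seconds, collided) per move.
    Returns the average response time and the total collision count.
    """
    times, collisions = [], 0
    for _ in range(n_moves):
        seconds, collided = get_response_time()
        times.append(seconds)
        collisions += int(collided)
    return sum(times) / len(times), collisions

# Simulated operator before training: 0.8 s per move, one collision.
fake_moves = iter([(0.8, False)] * 29 + [(0.8, True)])
avg, hits = measure_session(lambda: next(fake_moves))
```

Running the same harness before and after training would give exactly the before/after comparison described above.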
Specification
A user interface is required which will allow an operator to control the ‘Mars Explorer’ robot and help it to avoid objects as it moves. Controlled experimentation using a command line interface and a text based GUI will be used to compare and measure the effectiveness of the user interface. Operator feedback will also be analysed to study and further improve the design. The deliverables for this task will be a prototype interface which allows the operator to perform the required functions to manoeuvre the robot, and a report which expands upon improvements made as a result of tests upon the human user.
Designing an effective Interface
Above all, an interface needs to be appropriate to the control function that it is to perform. The human user must be considered in physical, physiological and sociological terms before an interface is implemented. It can be relatively easy to design an interface for one human but difficult to produce an interface for many users. This is a direct result of the diversity in ability, intellect and other characteristics of human users.
The human characteristics themselves play a vital role in determining the design of an interface. It is important that human centred design is used to eliminate the possibility of the design becoming inadequate and of no use to the operator.
Designing an interface and then refining through experimentation allows us to make an existing interactive system even more effective. In the case of the ‘Mars Explorer’ we need to understand the user’s properties and their actual interaction with the system. Matching the user’s characteristics with the system will allow it to become agreeable, easier to work with and to further allow for the tuning of preferences.
Understanding Input Devices – THE KEYBOARD
The problem specification for this task states that a ‘conventional’ keyboard will be used to control the ‘Mars Explorer’ robot. The keyboard requires that the users select a position in a set of co-ordinates, and then exert physical pressure upon the key at the selected location to produce the desired effect. It should also be stated that a good keyboard provides you with some feedback of its correct operation. This immediate feedback allows us to perceive the keyboard as an output device as well. The robot operator will ideally have five control keys. These are:
- A control key to move ‘UP’
- A control key to move ‘DOWN’
- A control key to move ‘LEFT’
- A control key to move ‘RIGHT’
- A control key to activate the emergency stop feature
Other keys on the keyboard will also be used to navigate through the whole system. For example the user will be required to select predefined keys to progress from the ‘welcome screen’ to actually controlling the robot.
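The mapping from keys to the five control actions can be sketched as a simple lookup table. The Q/A/O/P bindings below are the ones the report’s first prototype used; the spacebar for the emergency stop is an assumption for illustration, chosen because a physically separated key is harder to press by accident.

```python
# First-prototype key bindings; the emergency-stop key is illustrative.
KEY_BINDINGS = {
    "q": "UP",
    "a": "DOWN",
    "o": "LEFT",
    "p": "RIGHT",
    " ": "EMERGENCY_STOP",  # spacebar: separated from the movement keys
}

def command_for(key):
    """Translate a key press into a robot command, ignoring unmapped keys."""
    return KEY_BINDINGS.get(key.lower())
```

Keeping the bindings in one table also makes the later re-allocation of keys (discussed in the prototype reviews) a one-line change rather than a rewrite of the control loop.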
We have to decide upon which keys will actually be used to implement the control of the robot. At this stage we must pay crucial attention to the characteristics of the human user itself. The following issues must be assessed before we designate keyboard control keys:
- Comfort
- Ease of use
- Safety
- Critical Issues
To allow for comfort, the keys should not be positioned so closely together that they cannot be selected accurately by the user. At this stage assumptions about the capability of the user can be made, e.g. the size of a key reflects how accurately the user is assumed to be able to select it. Safety issues are also a concern, especially where repetitive movements can cause injury. In terms of the functionality of the whole system, disastrous situations such as accidentally initiating the ‘emergency stop’ will also be an issue. This particular button must not be so easy to select that a user can press it by accident. Effective interface interaction can be implemented by locating this key away from the primary control keys.
Understanding – THE DISPLAY
A superior interface is one that is simple to comprehend and enjoyable to use. For any user, the interaction must be at a level where the user can clearly understand what they see on the screen. For example, instructions must be easy to follow, the layout must be precise and the content must be relevant to the type of system that is being modelled.
Layout
An important factor in creating a user interface is the layout of the actual information upon the screen itself. Whether the design is a command line interface or a text based GUI, the menu system, instructions for use and feedback responses must be displayed neatly so that the user interface is commended as being effective.
Displays should be simple. If they are untidy, full of bright colours and obscure fonts, then many users will interpret this complexity as reflecting the complexity of the system. Spaciousness is also a concern. Clustering information in small areas of the screen needs to be avoided. The more densely packed a display is, the harder it becomes for users to access the information that they want.
The user interface we will design needs to take into account that information processing will be critical, since the operator is required to respond to instructions and avoid collisions. Information will therefore need to be organised and displayed coherently to avoid misinterpretation, which inevitably contributes to human error¹.
The transition from one screen display to another must be understood by the operator. The design of the layout should consequently prevent the operator of the robot from making mistakes as a result of the layout being obscure.
Graphical User Interface (GUI)
In its simplest form a GUI refers to any interactive system that uses pictures or images to communicate information. ‘Graphical images have a greater intuitive appeal than text-based interfaces, especially if they are animated’². For the majority of users this can be considered true, since animated graphical interfaces are more enjoyable to work with. The use of such techniques may be beneficial in terms of the quality and quantity of information conveyed, and can also improve user reaction towards the system.
The ‘Mars Robot’ is to be controlled using two different types of interface (as detailed in the ‘Specification’). One of these is a text based GUI. Comparisons with a command line interface will allow us to deduce which of the two interfaces provides the most efficient results. It will also be interesting to determine whether there is any significant improvement in user accuracy and average response time over thirty movements.
______________________________
¹ Philip Rae. ‘The Study of Interactive Systems’, 1991, which states that computer error is largely the cause of human error.
² Andrew Pilot. ‘Graphics Interfaces’, 1989.
Training
The user of an interface needs to know how to use the system appropriately, or simply how to perform actions the way they are supposed to be performed. ‘The psychological basis of our design approach assumes that the user’s knowledge is layered. Thus, a user can know something at one level but not at another’³. Referring to the task concerned, this statement can be considered true, since a user may have a vague notion of what needs to be done (e.g. to guide a robot and avoid obstacles) but not actually know how to do it (e.g. which controls will perform the desired functions). In this case, the user possesses conceptual knowledge but does not hold the key physical knowledge of how to actually perform the task.
Training can be instigated by initially informing the user of what the task involves (e.g. providing them with conceptual knowledge). This can be done in the form of speech or a simple informative document. The operator of the ‘Mars Explorer’ can be shown a document (e.g. ‘Design Brief’) which enlightens them about the primary tasks of the operator and the functionality of the robot.
Multi-layered instructions can be added for those who become accustomed to the system. In the case of the ‘Mars Explorer’, an option will exist where the operator can navigate past the control instructions menu and begin control of the robot. Unconfident users will have the choice to view the control instructions and gain physical knowledge of how to actually move the robot. The ultimate goal is to create an intelligent tutor which allows the user to navigate through the operating instructions, saving time and making user interaction more efficient.
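The layered navigation described above can be sketched as a small routing function. The screen names and choice values here are illustrative, not the report’s actual implementation; the sketch assumes first-time users are routed through the instructions, while experienced users may skip straight to control.

```python
def next_screen(choice, seen_instructions):
    """Decide which screen follows the welcome menu.

    First-time users (seen_instructions=False) are shown the
    instructions regardless of their choice; returning users may
    either review them ("view") or go straight to robot control.
    """
    if not seen_instructions or choice == "view":
        return "INSTRUCTIONS"
    return "CONTROL"
```

This keeps the tutoring layer optional for confident operators while guaranteeing that every operator has read the instructions at least once.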
Prototype Review – STAGE 1
Having used Software Engineering techniques to produce the code for the specification of this task, it now becomes apparent that we must test the ‘Mars Explorer’ robot and provide evidence that it meets the requirements of the user. The user interface needs to relate to the user requirements in order to be effective.
Prototyping is a sensible way to work and deal with humans and their uncertainties. Once the prototype has been built, it should be refined and then evaluated. The review should ultimately aim to derive objectives that we intend to meet. Ideally we want to discover:
- Whether we have covered the main points in the specification
- Whether the system meets the user’s requirements
- Whether the system can be operated by the designated users
If we can receive positive feedback for the questions above, then we can partly assume that an effective interface has been implemented.
______________________________
³ Jenny Preece, Yvonne Rogers, Helen Sharp, David Benyon, Simon Holland and Tim Carey. ‘Human Computer Interaction’. Addison-Wesley, 1994.
Summary of Results
Having produced the preliminary interface design for the ‘Mars Explorer’ robot, it was important to test the system and adjust it depending on user feedback. Using four user test cases, we can deduce improvements that can be implemented to the design.
The ‘User Questionnaires’ (see Appendices) provided useful results and revealed the flaws within the system. Refining the design allowed the interface to better match the user’s requirements.
We can summarise the user responses, and consequently the changes that need to be implemented, below:
- The most distinct alteration required is a change to the keys that control the robot itself. Originally ‘Q’ and ‘A’ were used to move up and down respectively, and ‘O’ and ‘P’ to move left and right. Three out of the four test cases reported discomfort when operating the robot using these control keys. It was suggested that it would be more efficient if the keys were closer together and only one hand was used.
Clearly, if the user is not comfortable using the specified control keys, the interface is not effective. The main objective is to locate the keys closer together and allow the user to operate the controls more freely.
- The second negative was the screen layout of the system. The main point was that information was scattered, which made it hard to comprehend. The navigation from the menu screen to actually taking control of the robot needs to be clearer.
Suggested improvements to allow for this could include title bars to display clearly what the operator is viewing on the screen. Another idea could be to group similar or related items of data together.
- Thirdly, there was also mention of colour being added to the interface, which suggests that users have an opinion on the appearance of the system. Adding colour ideally gives the system a more dynamic appearance and allows it to be modelled around real world entities. It would be interesting to derive whether adding colour to the design would improve user attitudes and efficiency in using the system.
- The survey also highlighted that, for those people who preferred using the text based GUI, response times were quicker and accuracy was improved compared to before training. Nevertheless, improvements such as enlarging the graphical display were mentioned.
By adopting a ‘formative’ approach and implementing the changes that resulted from the user survey, a more efficient interface can be created. This will then ultimately match the system to the user requirements.
Prototype Review – STAGE 2
Having implemented the changes to the design that were extracted during the user survey, testing was again done. The user who had the least efficient time was invited to control the robot again. However this time, the interface had been adapted to suit user requirements.
These changes can be summarised as:
- The re-allocation of the control keys to the number keys 1, 2, 3 and 4 on the right hand side of the keyboard improved the overall performance of the least efficient user. Accuracy was greater and response time was quicker.
- The display information on the screen was organised in a more coherent way, so that title bars indicated what was presented on the screen. For example, the instructions were titled with a header stating ‘Operator Instructions’ and the welcome page with ‘Welcome to the operators menu for the mars explorer’.
- The text based GUI was also improved by making the graphical representations of the arrows larger. This allowed the users’ rate of information processing to be increased, thus allowing for a quicker response time.
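The title-bar layout introduced in this stage can be sketched as a small rendering function for a text display: one title bar per screen, with related items grouped under subheadings. The movement keys 1–4 match the re-allocated bindings above; the emergency-stop key shown is an assumption for illustration.

```python
def render_screen(title, groups):
    """Render a text display: a title bar, then related items grouped
    together under subheadings, as suggested by the user feedback."""
    width = 40
    lines = [title.center(width, "="), ""]  # title bar across the screen
    for heading, items in groups:
        lines.append(heading)
        lines.extend(f"  {item}" for item in items)
        lines.append("")  # blank line keeps groups visually separate
    return "\n".join(lines)

screen = render_screen(
    "OPERATOR INSTRUCTIONS",
    [("Movement keys", ["1 - UP", "2 - DOWN", "3 - LEFT", "4 - RIGHT"]),
     ("Safety", ["SPACE - emergency stop"])],  # stop key is illustrative
)
```

Grouping related items and labelling each screen in this way addresses both layout complaints from the Stage 1 survey: scattered information and unclear navigation.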
Evaluation
The system that was designed aimed to allow the Mars Explorer robot to be controlled safely and securely. The idea was to have an operator performing the general functionality of the robot, ensuring that it avoided collisions with cave walls.
In designing such a system it was critical that human behaviour was considered, and that the delivered system adopted HCI principles to the point where the deliverable was an effective and complete user interface.
The experiments I conducted were refined during the second stage of the prototype review. My aim was to design an interface which allowed an operator to successfully manoeuvre a robot with designated controls. I feel that I successfully accomplished this and produced a system which was sufficient to carry out all of the core tasks.
Improvement can always be sought. For example, given time, I could have tested a larger sample of users to see if this had any further impact upon the results. I could also have implemented an interface which further complied with the user requirements that were extracted from the user responses.
The main objective of this task has been realised, since the initial user requirements were achieved and the design was elaborated upon to further suit the robot operator.
Literature Review
‘Human Computer Interaction’ was the overview subject which was examined within this report. The components discussed in detail were largely related to the design and implementation of effective user interfaces.
Information upon this particular area was largely derived through journals that commented upon the effectiveness of human-computer interaction.
- ‘Human Computer Interaction’. Addison-Wesley, 1994
- ‘Human Problem Solving’. Prentice Hall, 1972
These two sources in particular contained useful information relating to the creation and adaptation of human user interfaces. The first, published by Addison-Wesley, detailed the HCI principles that are prominent in the current working environment. Particularly useful were the sections detailing the informative approach that should be taken when a human user is required to perform a particular task. In principle, a human user should be provided with clear and concise instructions on how to complete, or go about starting, a particular task. This is vital in relation to this report, since the objective for the user is to control a robot. The addition of mandatory instructions, which must be read, allowed the user to train themselves before they began any control of the robot. The direct advantage of this was that the user coherently understood the task ahead and consequently performed a lot better (i.e. avoided all collisions with the cave walls).
‘Human Problem Solving’ by A. Newell and H.A. Simon was the earliest source used, though this does not mean it was the least effective. The ideas of improving human cognition through tests and expanding the knowledge base were two topics of direct relevance. The capabilities and boundaries of human users were discussed, and references to the ‘Human Virtual Machine’ provided ideas that allowed me to explore how efficiently humans process information. The notion of cognition related this report directly to the users of the robot. The idea of gaining knowledge and processing that knowledge to initiate a desired effect linked directly to the four test cases used in the experiments.
The World Wide Web references¹ were on the whole inadequate and provided only superficial definitions of HCI. In terms of usefulness, I was unable to derive information relating to the design of interfaces.
______________________________
¹World Wide Web references – See Bibliography
Bibliography
- Jenny Preece, Yvonne Rogers, Helen Sharp, David Benyon, Simon Holland and Tim Carey. ‘Human Computer Interaction’. Addison-Wesley, 1994
- Benbasat, Izak; Dexter, Albert S.; and Todd, Peter (Univ. of British Columbia, Vancouver, B.C., Canada). Commun. ACM 29, 11 (Nov. 1986), 1094–1105
- William M. Newman and Michael G. Lamming. ‘Interactive Systems Design’. Addison-Wesley, 1995
- A. Newell and H.A. Simon. ‘Human Problem Solving’. Prentice Hall, 1972
- A.G. Sutcliffe. ‘Human-Computer Interface Design’. Macmillan, Basingstoke, 1995
- Philip Rae. ‘The Study of Interactive Systems’, 1991
- Andrew Pilot. ‘Graphics Interfaces’, 1989
- http://www.ida.liu.se/labs/aslab/groups/um/hci/
- http://www.hcibib.org/hci-sites/