The level of predictability relates to the ease with which a user can determine the outcome of future interactions based on what has taken place so far. It is distinct from, but draws a parallel with, the deterministic nature of computers. Predictability is, in a sense, the computer’s deterministic nature seen from the user’s perspective – it enables the user to take advantage of that determinism.
Synthesizability is a measure of how easily the user of a system can assess the effect of past actions and how those actions led to the current state – informally, this corresponds to the user asking ‘how did I get here?’ The reflection of internal state changes at the interface is key to enabling the user to build a mental model of the system, and it is related to the further principles of immediate and eventual honesty.
The familiarity or ‘guessability’ principle encompasses the extent to which a user’s experience and knowledge from ‘the real world’ and from other computer-based interactions can be usefully brought to the interface at hand. For a new user, familiarity measures how effectively the positive transfer of experience can be brought to the system. Familiarity can be thought of as the level of correlation between a user’s existing knowledge and the knowledge required to interact productively with the system. It is linked to the concept of consistency insofar as it can be thought of as ‘consistency with respect to past real-world experience’. The use of analogies such as ‘the desktop’ has historically proven a useful tool in this area.
Generalisability refers to the level of support offered by the interface for extending specific interaction knowledge to previously unencountered situations or applications. It can be thought of as a measure of how predictive the system is, and it is closely linked to the principle of consistency. Since this evaluation looks solely at a VCR interface, generalisability across applications can only be interpreted as generalisability across different user tasks (i.e. tasks other than setting the record timer) and hence falls beyond the scope of this document.
Finally, the principle of consistency is concerned with the similarity, especially in terms of input/output behaviour, arising from similar situations or task objectives. Consistency is always relative to some other element of interface design – e.g. consistent button usage conventions or consistent screen formatting – and it forms an underlying component of other usability principles. It is worth noting that consistency can be a hindrance when applied incorrectly, or when used to support a task which intrinsically includes inconsistencies. An example can be found in the design of the first typewriter keyboards: arranging the letters consistently from ‘a’ to ‘z’ failed to take into account their inconsistent frequency of use, leading to jammed mechanisms and premature hand fatigue, both attributable to frequently used keys lying adjacent to one another.
Background / Method
In order to evaluate the VCR interface, I shall run through the task as described in the owner’s manual. This serves merely to present the output as shown on the screen and to narrow down which parts of the interface play a role, according to the scope of the user task being investigated. In applying the principles I shall scrutinise the dialogue elements of the interface components, comparing them against the usability principles. I will run this procedure several times: the first pass is intended to give a feel for the flow of the interaction and the general scope of the system, while subsequent passes will allow me to focus on specific interface elements while knowing how they fit into the whole picture.
For the sake of completeness, any prerequisites for using the VCR timer in this way are listed here:
- A correct and working set-up of the television and VCR. It is also assumed that any necessary tuning has already taken place.
- The clock must already be correctly set on the VCR. A maladjusted clock will not prevent the recording from taking place, but it creates a gap between reality and the system’s model of it, so the machine will record at the wrong time.
- A recordable (i.e. with the write-enable tab in place) tape must be inserted into the machine.
By applying this set of principles, I hope to be able to identify specific usability problems. It may then be of some use to rate the severity of these problems or to offer possible solutions or workarounds, but I feel this goes beyond the scope of this document.
Evaluation and Conclusion
In total, nine problems have been noted, though some of them manifest themselves at several stages of the task. The first question I ask myself is whether this evaluation technique has revealed all of the potential usability problems. I suspect the answer is a resounding ‘no’, and I shall discuss two potential reasons why.
I consider it highly unlikely that I alone have managed to locate every potential problem lurking in this interface. To start with, I have considered neither flexibility nor robustness in this appraisal. More importantly, I find it unreasonable to suggest that one person alone can find all the difficulties a user could conceivably encounter. Having more people undertake this exact study would surely expose a greater range of problems, but the question then becomes how many evaluators should be used. One individual is clearly insufficient for a serious study; conversely, a thousand people, while able to reveal a far greater proportion of the total problems in existence, are far too many to be practical. There is therefore a trade-off in how many evaluators to use, as the sketch below illustrates.
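To make the trade-off concrete, a rough illustration (not part of the original evaluation) can be built on the commonly quoted model in which each evaluator independently finds a fixed proportion of the problems, so that adding evaluators gives sharply diminishing returns. The total of 100 problems and the 30% find rate below are assumed figures for illustration only.

```python
# Hypothetical sketch of diminishing returns when adding evaluators,
# using the widely cited model found(n) = N * (1 - (1 - L)**n),
# where N is the total number of problems and L is the average
# proportion of problems a single evaluator uncovers.
# N = 100 and L = 0.3 are illustrative assumptions, not measured values.

def problems_found(n_evaluators: int, total_problems: int = 100, find_rate: float = 0.3) -> float:
    """Expected number of distinct problems found by n evaluators."""
    return total_problems * (1 - (1 - find_rate) ** n_evaluators)

if __name__ == "__main__":
    for n in (1, 3, 5, 10, 15):
        print(f"{n:2d} evaluators -> ~{problems_found(n):.0f} of 100 problems")
```

Under these assumed figures, one evaluator finds roughly a third of the problems, five find over 80%, and each further evaluator adds progressively less – which is the essence of the trade-off described above.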
Even if we were to use an ‘optimal’ number of evaluators (whatever that figure may be), I suspect that this evaluation method would still fail to reveal 100% of the problems. In order to explore this, I have given much thought to the nature of any problems that may evade detection with this method. It is very difficult to generalise accurately about this across varying interfaces. I would, however, question whether a specific usability problem’s concealment lies in the nature of the problem itself, or whether it is a shortcoming of the method in use in failing to raise the problem. So whilst it is very useful to have this set of principles, it could be argued that they are best used as guidelines and should not be expected to locate all of the problems, since their application is so inherently subjective.
It is interesting to note that the interface permits the total duration of a recording schedule to exceed what is possible on a single videocassette. I have not found out what happens when this occurs, but I find it hard to pigeonhole this problem given these principles. This led me to consider whether applying these principles does indeed find usability problems. With or without them, I am confident that an evaluator would have located the exact same set of issues. With this in mind, it could be argued that these principles merely help to categorise a usability issue. By categorising problems we can structure them and reason about them, building a framework in which to discuss them. Even this framework, it would appear, is ill defined, since there is both overlap and ambiguity between the principles. Categorising problems is, evidently, a different matter from actually finding them.
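To make the missing check concrete, the following sketch shows the kind of validation the timer interface could perform before accepting a schedule. The data structures, function names and the 240-minute tape length are my own assumptions for illustration; they do not describe the VCR’s actual behaviour.

```python
# Hypothetical sketch: warn the user when the scheduled recordings
# would exceed the capacity of the inserted tape.
# The 240-minute capacity and the event list are illustrative only.

from dataclasses import dataclass

@dataclass
class TimerEvent:
    start: str          # e.g. "21:00"
    duration_min: int   # length of the recording in minutes

def schedule_fits(events: list[TimerEvent], tape_capacity_min: int = 240) -> bool:
    """Return True if the total scheduled duration fits on one tape."""
    total = sum(event.duration_min for event in events)
    return total <= tape_capacity_min

events = [TimerEvent("18:00", 90), TimerEvent("21:00", 120), TimerEvent("23:30", 60)]
if not schedule_fits(events):
    print("Warning: scheduled recordings exceed the capacity of the tape.")
```

A check of this kind would at least surface the conflict at scheduling time rather than leaving the outcome unspecified, though how the real interface behaves remains an open question.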
In concluding, I bear in mind that for this evaluation I have applied only a subset of all the usability principles, and consequently I can only speculate about the results that would be obtained by including the concepts of flexibility and robustness. It seems hard to apply these principles in such a way as to find all the problems. Despite their problems of vagueness and overlap, they immediately make good common sense when taken in the context of an example; they are indeed useful to know.
This said, upon selecting any menu item the problematic black background does revert to blue.