Heuristic Evaluations: Too Costly? Heuristic Evaluation is an Expensive Add-on To Systems’ Development Essay


A heuristic evaluation is a usability inspection method used to identify design problems in user interfaces. Evaluators judge the design of the interface against a list of usability principles (heuristics). While heuristic evaluations are generally one of the most informal methods of usability inspection, and therefore among the least expensive, they are not without their cost.

Generally, a heuristic evaluation is extremely difficult for a single individual to perform, as one person is unlikely to identify all of the usability problems with an interface. A number of evaluators are therefore needed to evaluate the interface effectively, which in turn increases costs. Nielsen (no date) has shown that single evaluators identified just 35% of an interface's usability problems. So, how many evaluators are enough? The answer depends on 1) a thorough cost-benefit analysis and 2) the situation or environment in which the system will ultimately be used. Naturally, mission-critical systems require a more intensive (and expensive) evaluation with a larger number of evaluators (Nielsen and Landauer, 1993). Past research has shown that five evaluators could find up to 75% of an interface's problems, with little further benefit from adding more (Nielsen, no date).
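The diminishing returns described above follow the model proposed by Nielsen and Landauer (1993), in which the expected proportion of problems found by i independent evaluators grows as 1 − (1 − λ)^i, where λ is the proportion a single evaluator finds. A minimal sketch (the 35% single-evaluator rate is taken from the figure above; actual rates vary by interface and evaluator expertise, so the printed percentages are illustrative only):

```python
def proportion_found(evaluators: int, single_rate: float = 0.35) -> float:
    """Expected proportion of usability problems found by a panel of
    evaluators, assuming each independently finds problems at the same rate."""
    return 1 - (1 - single_rate) ** evaluators

# Each added evaluator mostly re-finds problems already known,
# so the curve flattens quickly.
for n in (1, 3, 5, 10):
    print(f"{n} evaluator(s): {proportion_found(n):.0%}")
```

The flattening curve is why three to five evaluators is the usual recommendation: beyond that point, most of what a new evaluator finds has already been found.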

Early research estimated the cost of adding human factors elements to software development at around $128,330 (Mantei and Teorey, 1988). This sum is surprisingly large and is well beyond the total usability budget of many smaller companies. However, other research has argued that this estimate is wholly inaccurate and has warned smaller companies against taking even this sum at face value (Tognazzini, 1990). Though heuristic analysis is just one of many usability engineering methods, and one of the less expensive ones, the costs can still be significant.

There are, however, two things to consider which significantly affect this cost. The figure is based on the costs incurred in a small to medium project (around 32,000 lines of code) and, more importantly, on the character-mapped applications prevalent in the late 1980s and early 1990s rather than today's ubiquitous graphical user interfaces. Graphical interfaces have much higher development costs, especially since much of the code in a graphical application is devoted to the user interface. For both of these reasons, one can safely assume that evaluation costs for projects today will be significantly higher.

Indeed, though these estimates are now some 15 years old, they are still likely beyond the reach of most smaller companies even today, meaning that much of the software in use today has been written by companies unable to invest even modest amounts in an evaluation.

Furthermore, in addition to the cost of the evaluators themselves, an effective heuristic analysis uses more than one participant, preferably drawn from different backgrounds, as experience has shown that different people identify different problems (Nielsen, no date). As a general rule, around three to five evaluators are recommended, since research has shown that the additional information gained beyond that point does not outweigh the cost (Nielsen, no date).

It is generally known that companies in industry rarely use the usability engineering methods suggested by academia (Nielsen, 1993; Whiteside et al., 1988). British studies by Bellotti (1988) suggest that software developers often do not use usability engineering and are slow to embrace heuristic evaluations, as Human-Computer Interaction evaluation methods are often seen as intimidating and unnecessarily complex. Because they are seen as complex, the perceived time needed to set up and perform such evaluations appears too great, and the financial cost is in turn perceived as higher than it generally need be.

However, usability engineering need not be as complex and thus as expensive as is generally thought. Earlier I discussed the potential costs of usability engineering where I gave a reported figure of $128,330. This figure was not only for heuristic evaluations, but for a more in-depth analysis. As discussed, this cost is generally well beyond the budget of many small- to medium-sized companies, but there is a method available to these companies which will allow them to conduct heuristic evaluations at a fraction of this cost.

The Discount Usability Engineering method (Nielsen, 1989, 1990, 1993) is an inexpensive usability engineering method based on three techniques:

Scenarios
Simplified thinking aloud
Heuristic evaluation

If this discount usability method is used rather than a more in-depth one, the cost savings can be significant, with Nielsen (no date) reporting savings in the region of fifty per cent, for a total cost of a little over $65,300. Savings can be made in many ways, and, though the equipment used may not look as impressive as with a large budget, the results will still be valid.

The use of paper mock-ups rather than a software prototype is now a common and accepted method of prototyping an interface. The most important factor here is the speed at which the interface can be tested, redesigned, retested, and so forth. Indeed, past research has shown that interface problems identified through prototyping were the least expensive to repair (Overmyer, no date).

When it is necessary to use a software interface, free or low-cost packages can be used for the initial evaluations. It is possible to create very good user interface mock-ups in readily available HTML editing software, enabling evaluators to test the general look and feel of the yet-to-be-developed software.

In the early stages at least, evaluators can use a minimum number of participants. As discussed earlier, between three and five participants will yield extremely useful data; just three participants have been shown to identify around sixty-five per cent of an interface's usability problems (Nielsen, no date).

These three discount methods alone can account for a reported saving in excess of $30,000. Again, however, it is worth stressing that when this research was conducted graphical interfaces were still emerging, so these costs and savings relate to character-mapped displays rather than graphical interfaces. While development costs today will be much higher, so will the savings should discount usability methods be used.

Because it is a discount method, a heuristic evaluation typically uses only a small set of rules or guidelines. While one can use as many guidelines as desired, past research has narrowed the number down to just a handful. Through extensive testing and research, Nielsen developed and proposed just ten principles for an evaluator to follow (Molich and Nielsen, 1990; Nielsen and Molich, 1990), reducing the potential cost considerably. These principles are:

Visibility of system status
Match between system and the real world
User control and freedom
Consistency and standards
Error prevention
Recognition rather than recall
Flexibility and efficiency of use
Aesthetic and minimalist design
Help users recognise, diagnose, and recover from errors
Help and documentation
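In practice, each evaluator's findings can be recorded against these ten principles and then merged to see which heuristics are violated most often, which is one reason the fixed checklist keeps the method cheap to administer. A minimal sketch of such a tally (the evaluator names and problem descriptions are invented for illustration):

```python
from collections import Counter

# Each evaluator independently records (heuristic, problem) pairs.
# Evaluator names and findings below are hypothetical.
findings = {
    "evaluator_a": [
        ("Visibility of system status", "no progress indicator during upload"),
        ("Error prevention", "delete has no confirmation step"),
    ],
    "evaluator_b": [
        ("Error prevention", "delete has no confirmation step"),
        ("Help and documentation", "no link to help from the settings page"),
    ],
}

# Count how many reports fall under each heuristic across all evaluators.
violations = Counter(
    heuristic for problems in findings.values() for heuristic, _ in problems
)

for heuristic, count in violations.most_common():
    print(f"{count}x {heuristic}")
```

A problem reported by several evaluators independently (here, the missing delete confirmation) is a strong candidate for early repair.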

While conducting any form of usability analysis of an interface will undoubtedly increase development costs by several tens of thousands of dollars in the short term, research has shown that in the medium to long term, savings significantly higher than the initial investment can be made through reduced redevelopment and after-sales support. Nielsen has reported cost-to-savings ratios as high as 1 to 48, meaning that an initial investment of $10,500 in a heuristic evaluation can lead to savings of around $500,000 (Nielsen, 1994). This efficiency has also been supported by independent research (Jeffries et al., 1991). However, even at this scale, perfection cannot be guaranteed, and all of the problems with an interface are unlikely to be found.
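The reported ratio can be checked with simple arithmetic: at 1:48, each dollar invested returns $48 in projected savings. A trivial sketch, assuming savings scale linearly with the investment as the ratio implies:

```python
def projected_savings(investment: float, ratio: float = 48.0) -> float:
    """Savings implied by a cost-to-savings ratio of 1:ratio (Nielsen, 1994)."""
    return investment * ratio

# 10,500 * 48 = 504,000, i.e. around the $500,000 figure cited above.
print(projected_savings(10_500))
```

The linearity is, of course, an idealisation: real savings depend on how many of the identified problems are actually fixed.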

As can be seen from this discussion, adding heuristic evaluations to a system's development will increase costs. However, though these costs can be significant, they can be kept to a minimum without sacrificing the point of the evaluation. An initial investment can lead to increased savings and profits, but these savings are spread over the life of the system rather than realised in one go. Many companies look only one year ahead for cost savings and increased profits, whereas the savings from a heuristic evaluation accrue over a number of years. Because companies look at profits year on year, an investment of $65,000 is, and generally will be, seen as too costly for most.


Bellotti, V. (1988). Implications of current design practice for the use of HCI techniques. In Jones, D.M. and Winder, R. (Eds.), People and Computers IV. Cambridge University Press, Cambridge, U.K., 13-34.
Jeffries, R., Miller, J. R., Wharton, C., and Uyeda, K. M. (1991). User interface evaluation in the real world: A comparison of four techniques. Proceedings ACM CHI'91 Conference (New Orleans, LA, April 28-May 2), 119-124.
Mantei, M. M., and Teorey, T.J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM 31, 4 (April), 428-439.
Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue, Communications of the ACM, 33, 3 (March), 338-348.
Nielsen, J. (1989). Usability engineering at a discount. In Salvendy, G., and Smith, M.J. (Eds.), Designing and Using Human-Computer Interfaces and Knowledge Based Systems, Elsevier Science Publishers, Amsterdam. 394-401.
Nielsen, J. (1990). Big paybacks from ‘discount’ usability engineering. IEEE Software 7, 3 (May), 107-108.
Nielsen, J. (1993). Usability Engineering. Academic Press, Boston, MA.
Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.
Nielsen, J. (no date). How to Conduct a Heuristic Evaluation. www.useit.com/papers/heuristic/heuristic_evaluation.html. Visited January 2006.
Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI’90 Conf. (Seattle, WA, 1-5 April), 249-256.
Nielsen, J., and Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings ACM/IFIP INTERCHI'93 Conference (Amsterdam, The Netherlands, April 24-29), 206-213.
Overmyer, S. P. (no date). Revolutionary vs. Evolutionary Rapid Prototyping: Balancing Software Productivity and HCI Design Concerns. Center of Excellence in Command, Control, Communications and Intelligence (C3I), George Mason University, Fairfax, Virginia.
Tognazzini, B. (1990). User testing on the cheap, Apple Direct 2, 6 (March), 21-27. Reprinted as Chapter 14 in TOG on Interface, Addison-Wesley, Reading, MA, 1992.
Whiteside, J., Bennett, J., and Holtzblatt, K. (1988). Usability engineering: Our experience and evolution. In Helander, M. (Ed.), Handbook of Human-Computer Interaction, North-Holland, Amsterdam, 791-817.
