<%@ include file="/includes/header.htm"%>

Misuse Cases for Better System Design
by Mark Stein



Because they bring security, safety, and other non-functional requirements, as well as design trade-offs, to the surface, misuse cases in requirements analysis and system design lead to better systems.



Computers have ingrained themselves into almost every facet of business today. Whether for email, word processing, data storage, reporting, scheduling, or some other task, companies are increasingly dependent on computers and the software they run. So when a system fails, the effects reverberate throughout a company.

The failure can be a natural occurrence, such as a hardware fault or a software glitch, or it can be intentionally caused, the result of a hostile attack. In either case the company suffers. The effects could be bad publicity or a loss of consumer confidence. The effects could also be monetary.

The monetary cost of software failures can be staggering: a 2002 study commissioned by the National Institute of Standards and Technology put it at $59.5 billion annually ("Software disasters...", 2004). The UK's Department of Trade & Industry estimates that viruses, hacking, and computer misuse have cost companies billions over the past year, with the average incident costing about £120,000 ("Computer hacking...", 2004).

Unfortunately, the number of computer security threats is increasing, and so is businesses' dependency on computers. That means the impact of system failure on a company will continue to grow. To help minimize the risk of system failure, software needs a more robust design. One thing that can help is the application of misuse cases in software design.


Why Misuse Cases?

The further a fault progresses through the stages of the software lifecycle, the more it costs to fix that fault. The cheapest place to fix a fault is at the beginning, in the requirements phase (Schach, 2005, pp. 13-14). The easiest way to fix a fault is to eliminate it before it happens. One way to catch possible faults is with misuse cases.

A use case model, used in the requirements gathering process, shows a software product's interactions with its users. Anything that interacts with the system, including users, is referred to as an actor. A function the software performs is called a use case. When UML is used to diagram a use case model, an actor is represented by a stick figure, and a use case by an oval labeled with its process. UML use case diagrams also allow for "include" and "extend" relationships (Schach, 2005, p. 504). Use cases are also documented in text.

The idea of a misuse case was suggested by Guttorm Sindre and Andreas Opdahl as a way to elicit security requirements, in a paper aptly titled Eliciting Security Requirements by Misuse Cases. A misuse case is essentially a negative-scenario use case (Sindre & Opdahl, 2000). Instead of showing what the software should do, a misuse case shows what a system shouldn't allow.

A misuse case model is more or less an inverted use case model. Instead of an actor, there's a mis-actor: someone who will misuse or attack the system. Rather than a use case, there's a misuse case: a possible hostile attack or misuse of the system (Sindre & Opdahl, 2001). Systems engineer Ian Alexander, who has written multiple papers on misuse cases, sums it up best: "A Misuse Case is simply a Use Case from the point of view of an Actor hostile to the system under design" (2003).


Using Misuse Cases

Figure 1. A simple UML misuse case model

Let's examine a sample misuse case. Figure 1 shows a simple use case for electronic voting. A voter (the actor) casts his vote (the use case). A hacker (the mis-actor) changes who the vote was cast for. To counter this, an additional use case, giving a paper receipt, is added. Another misuse case, changing the vote tally, is added in response to that.

The first thing to notice is that in UML diagrams, misuse cases are modeled using the same symbols as use cases, only with reversed or different colors. Misuse cases are also documented with text, which is important because it allows a more detailed explanation (Sindre & Opdahl, 2001).

Also important to notice is the way the misuse case interacts with the use case. Unlike a regular use case diagram, there really isn't much need for a misuse-case-only diagram. Instead, the two are used together, showing their interaction. A misuse case will threaten a use case, which will spawn another use case in response to the misuse case; the process then repeats. For the purposes of this paper, a misuse case model will be a diagram of use cases interacting with misuse cases.

Like a use case diagram, a misuse case diagram has "include" and "extend" relationships. But when adding misuse cases to a use case diagram, three more relationships are used: "threatens", "mitigates", and "prevents". "Threatens" indicates the relationship between a misuse case and the use case it threatens. "Mitigates" and "prevents" are relationships between use cases and the misuse cases they counter. Sometimes the countering use case will eliminate the threat; if so, it has a "prevents" relationship. But sometimes the best a countering use case can do is reduce the threat's risk; in that case, it has a "mitigates" relationship (Alexander, 2003).

When using misuse cases, care must be taken that a use case created to counter a misuse case doesn't conflict with other use cases or system requirements (Alexander, 2002). For example, in our electronic voting example, a solution to counter the hacker might be to make the system stand-alone, accessible only through direct physical contact, with its data transferred by physically removing the machine's data storage device. However, if there were a requirement that the system remotely upload votes to a central computer, the two use cases would conflict. Because of that, two more relationships between use cases must be factored into misuse case modeling: "aggravates" and "conflicts with" (Alexander, n.d.).

Unfortunately, there's no way to capture every single misuse case for a system. That's partly because there's no way to predict every negative scenario (Alexander, 2003). Also, because of the cause-effect relationship, each countering use case can create another misuse case; the process can turn into a never-ending loop. At some point it has to be decided that the most prominent misuse cases have been addressed, and an end to the process is called.

Because there's no way to capture every misuse case, sometimes it's better to come up with a general counter use case rather than a specific one. For example, it may not be possible to anticipate every fatal error a mis-actor could cause. Rather than have several specific handlers for specific fatal errors, it might be better to have a general-purpose fatal error handler. Such a use case could handle several misuse cases in a general way (log the error, save data, notify an admin, shut down the system, etc.).
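The general-purpose handler described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the article; the function name and the stubbed actions (saving state, notifying an admin) are assumptions made for the example.

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("fatal")

def handle_fatal_error(error, state):
    """Handle any fatal error the same general way, covering many
    specific misuse cases with one catch-all use case."""
    actions = []
    logger.error("Fatal error: %s", error)   # log the error
    actions.append("logged")
    state["saved"] = True                    # save user data (stubbed here)
    actions.append("saved")
    actions.append("admin notified")         # notify admin (stubbed here)
    actions.append("shutdown requested")     # shut down system (stubbed here)
    return actions
```

Whatever the specific fatal error turns out to be, the system responds the same way, so no misuse case that ends in a crash goes completely unhandled.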

The best way to come up with misuse cases is by brainstorming. Working in a group, take each use case in a system and think about how it can be exploited or misused. Ask designers what parts of the system they think are the weakest (Hope & McGraw, 2003). After coming up with that list, come up with solutions to mitigate or prevent each misuse case. Then repeat the process. Alexander compares it to a strategy game like chess or Go: constantly trying to guess the opponent's next move and counter it before it happens (Alexander, 2003).


What are Misuse Cases Good For?

So a misuse case is a negative scenario representing a potential attack on, or misuse of, a use case. It's impossible to predict or counter all of them. What good are they?

It turns out misuse cases are useful for a number of things, the most obvious being security. When designing a system, you can't have a use case called "make system secure" (Hope & McGraw, 2003). Security isn't a single process; it's advance planning, or a reaction to a given set of circumstances. To implement security, you have to figure out what is insecure, and why. That can't be accomplished by thinking positively. By definition, a misuse case is a form of negative thinking, so using misuse cases naturally leads to better system security.

Another possibility for misuse cases is developing safety requirements. Not all negative scenarios are intentionally hostile acts; sometimes things simply fail (though acts of sabotage or terrorism may cause system failure too). While there may not always be a mis-actor, the failure of a system component is something a misuse case can help plan for (Alexander, 2003). For example, consider one of the most sophisticated machines ever built, the space shuttle. While misuse cases per se weren't around when it was designed (they were first presented in 2000), the idea of negative scenarios was still applied. When designing it, engineers not only had to gather requirements for what it should do, but also had to come up with requirements for what would happen if a component failed. Throughout the history of the space shuttle program, we've seen items such as fuel cells fail, and thanks to planning, redundant systems were on hand to replace them. With the Columbia accident, we also see that not all misuse cases can be predicted.

Use cases are good for functional requirements, but not so useful for showing non-functional requirements, that is, "requirements that are not explicitly Functional" (Alexander, n.d.). Misuse cases, on the other hand, lend themselves to defining non-functional requirements (NFR). We've already discussed how misuse cases help elicit two types of NFR, security and safety, but there are other types. Engineers sometimes refer to NFR as "-ilities", such as portability, reliability, maintainability, and usability (Alexander, 2003); all things that can be identified when misuse cases are applied.

Misuse cases are good for finding exceptions (Alexander, 2003). A system queries a database but a table is empty; a user types letters where a number is supposed to go. Both are simple misuse cases that could cause a system exception if not taken into consideration. As mentioned earlier, exceptions are an area where general misuse cases may be better than specific ones. For example, consider a web application where a user submits a form. Rather than have a specific handler for each exception when validating the form data, there could be one generic handler that rejects the form and has the user fill it out again.
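A minimal sketch of that generic validation handler might look like the following. The field names (`name`, `quantity`) and the form-as-dictionary representation are assumptions for illustration, not from the article.

```python
def validate_order_form(form):
    """Generic handler: collect every validation failure and reject the
    whole form, instead of a dedicated handler per bad input."""
    errors = []
    if not form.get("name", "").strip():
        errors.append("name is required")
    qty = form.get("quantity", "")
    if not qty.isdigit():  # e.g. letters typed where a number should go
        errors.append("quantity must be a whole number")
    if errors:
        # One generic response covers many misuse cases: the user is
        # simply asked to fill the form out again.
        return {"accepted": False, "errors": errors}
    return {"accepted": True, "errors": []}
```

Any malformed submission, whatever its cause, falls through to the same rejection path, so new misuse cases don't each require new handling code.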

One other area where misuse cases can help is trade-off analysis. With both economic and engineering implications, trade-off analysis is a study of requirements or design approaches and their advantages and disadvantages. Misuse cases allow for a visual representation of complicated structures. They illustrate the various relationships ("threatens", "mitigates", "prevents", "aggravates", and "conflicts with") between use and misuse cases, allowing decisions (trade-offs) to be made between requirements or design approaches (Alexander, 2002).


Misuse Cases in the Real World

Let's take a look at how misuse cases could have been applied to prevent (or mitigate) a problem that occurred in the real world. Around 2000-2001, the Internet and e-commerce were really starting to take off. Being somewhat new technology, there wasn't an abundance of commercial off-the-shelf (COTS) software available; instead, companies built their own systems. As systems were developed, hostile actors set out to exploit them to their own advantage. In 2001, it was reported that an estimated one third of e-commerce shopping cart applications were vulnerable to a form of e-shoplifting: price altering (Lorek, 2001). That's where, through a variety of methods, a hostile actor is able to change the purchase price of an item.

Knowing what we now know about use and misuse cases, let's apply them to the situation. Our basic use case will be that a shopper purchases an item online. More specifically, the shopper adds an item to their shopping cart, then "checks out", which submits the shopping cart to the server. In a perfect world, with no hostile actors, this would be enough. But there are hostile actors out there, and we know they're interested in e-shoplifting. So we'll include a misuse case in the mix: a mis-actor will try to alter the price of an item.

We now have an initial use and misuse case. Let's look in a little more detail at how the misuse case could be enacted. We'll assume that at some point in the order/checkout process, parameters were passed via an HTTP GET method. That's where parameters are appended to the URL string and are visible in a web browser's location bar (figure 2). This is a common technique, useful when the user may want to bookmark something. But it is also open to a technique called parameter tampering ("The Ten Most Common..."), where the hostile actor changes the value of one of the parameters and resends the request to the server. For our example, the parameter that was changed was the one that contained the price.
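To see why parameter tampering works, consider a deliberately naive checkout handler that trusts whatever price arrives in the query string. This is a sketch of the vulnerable pattern under discussion, not code from any real shop; the URL, parameter names, and function name are all assumptions for illustration.

```python
from urllib.parse import urlparse, parse_qs

def naive_checkout(url):
    """A deliberately vulnerable sketch: the server charges whatever
    price the client sent in the URL's query string."""
    params = parse_qs(urlparse(url).query)
    item = params["item"][0]
    price = float(params["price"][0])  # attacker-controlled value!
    return {"item": item, "charged": price}
```

Since the price travels through the browser's location bar, the mis-actor only has to edit the URL and resend it; the server happily charges whatever value comes back.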

Figure 2. Passed parameters visible in a browser location window.

To counter the misuse case of the hacker applying parameter tampering, we'll amend the use case so that the shopper now submits their order via an HTTP POST method. This method passes the information in a way that cannot be tampered with so easily. Our problem is solved... or is it? The ever-persistent hacker, again using only their browser, chooses the "view source" command. A quick glance at the code reveals that the HTML form has a hidden field that stores the price. The hacker saves the page source code, changes the price value, and resubmits the resaved page. This is known as hidden field manipulation ("The Ten Most Common...").

Once again, we must come up with a use case to counter the misuse case. We see that we can't pass the price parameter in the URL string, and we can't store it as a hidden field on the HTML page. Instead, this time, we choose to store it in a cookie. Our hacker counters with a technique called cookie poisoning ("The Ten Most Common..."). Since cookies are only text files stored on the shopper's hard drive, a simple text editor such as Notepad would allow the hacker to change the price value stored in the cookie. Once again, they've managed to lower the price of an item.

Each of these attacks has succeeded because the hacker has been able to alter the price somewhere between the web browser and the server. We'll counter with one more use case: this time we don't pass the price from the web browser to the server at all. Instead, we pass only the item id, and the application looks up the price in the database.
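That countering use case can be sketched as follows, with a simple in-memory dictionary standing in for the database lookup. The catalog contents and function name are assumptions for the example.

```python
# In-memory stand-in for the product database.
CATALOG = {"42": 19.99}

def checkout(item_id):
    """Only the item id crosses the trust boundary; the price comes from
    the server-side catalog, so URL, hidden-field, and cookie tampering
    no longer affect what gets charged."""
    if item_id not in CATALOG:
        raise KeyError("unknown item")
    return {"item": item_id, "charged": CATALOG[item_id]}
```

Whatever price the mis-actor injects on the client side is simply never read, which is why this use case mitigates all three of the attacks above at once.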

We could keep going with the process. Have we prevented price altering? No; the hacker could always attack the database, or come up with other methods. But we have successfully mitigated the threat by eliminating three easy and well-known methods.

That's the way misuse cases work. Use cases are countered with misuse cases in a repeating process, until the situation is prevented, or mitigated to a point deemed acceptable. As you progress through the process, security and other non-functional requirements, as well as trade-offs, become more defined.



It's not enough that systems are built to work; they should be built not to break. Use cases approach things from a positive perspective. They capture functional requirements and what a system is supposed to do. But alone, they offer little in the way of non-functional requirements, or of telling what a system shouldn't allow to happen. That's where misuse cases come in.

The idea of using negative scenarios in conjunction with use cases leads to a more rounded visualization of what a system should be. System weaknesses are recognized and either strengthened or at least somewhat protected. The end result is a more robust system, less susceptible to failure or security breaches, saving a company both money and its reputation.


Alexander, I. (2002, September). Initial Industrial Experience of Misuse Cases in Trade-Off Analysis. IEEE Joint International Requirements Engineering Conference, 9-13. Retrieved November 9, 2004 from http://easyweb.easynet.co.uk/~iany/consultancy/misuse_cases/misuse_cases_in_tradeoffs.htm

Alexander, I. (2003, January). Misuse Cases: Use Cases with Hostile Intent. IEEE Software. Retrieved November 9, 2004 from http://easyweb.easynet.co.uk/~iany/consultancy/misuse_cases_hostile_intent/misuse_cases_hostile_intent.htm

Alexander, I. (n.d.). Misuse Cases Help to Elicit Non-Functional Requirements. Retrieved November 9, 2004 from http://easyweb.easynet.co.uk/~iany/consultancy/misuse_cases/misuse_cases.htm

BBC News World Edition (2004). Computer hacking 'costs billions'. Retrieved on November 12, 2004 from http://news.bbc.co.uk/2/hi/business/3663333.stm

Hope, P., & McGraw, G. (2003). Misuse and Abuse Cases: Getting Past the Positive. Retrieved November 12, 2004 from http://www.computer.org/security/v2n3/bsi.htm

Lorek, L. (2001). Tag You're Hit. EWeek. Retrieved November 26, 2004 from http://www.eweek.com/article2/0,1759,1242592,00.asp

MSNBC (2004). Software disasters are often people problems. Retrieved on November 12, 2004 from http://www.msnbc.msn.com/id/6174622/

Sanctum (n.d.). The Ten Most Common Application-Level Hacker Attacks. Retrieved November 26, 2004 from http://www.sanctuminc.com/pdf/The_10_Most_Frequent_Hack_Attacks.pdf

Schach, S. R. (2005). Object-Oriented & Classical Software Engineering (6th ed.). New York, NY: McGraw-Hill.

Sindre, G., & Opdahl, A. (2000, November). Eliciting Security Requirements by Misuse Cases. Proceedings of TOOLS Pacific 2000, 120-131, 20-23 November 2000.

Sindre, G., & Opdahl, A. (2001, June). Templates for Misuse Case Description. Proceedings of the 7th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'2001), June 2001. Retrieved November 9, 2004 from http://www.ifi.uib.no/conf/refsq2001/papers/p25.pdf

<%@ include file="/includes/lower-nav.htm"%> <%@ include file="/includes/copyright.htm"%>