ComplianceOnline

Writing Software Specifications for Medical Devices

  • By: Dev Raheja, New Product Management Consultant
  • Date: August 16, 2011

Software intelligence in most products emerges from several systems working together to produce properties and behavior different from those of the individual components. The discipline of gathering such intelligence is often missing, which is why typical software has many missing and ambiguous requirements. The purpose of this article is to show how to capture such unknown requirements.


Approximately 80% of the dollars that go into software development are spent on finding and fixing failures. This process is inefficient and costly. A robust design requires the opposite: roughly 80% of the dollars should be spent on preventing failures, so that life-cycle costs are dramatically reduced. The case for cost reduction is obvious, because a fault fixed during the concept stage costs only a small fraction of the warranty costs and downtime incurred by fixing it later. The approach described here allows product development teams to see the forest for the trees by capturing the complexity of software requirements. Without this macro vision, many costly failures are bound to sneak up on you.

     

Risks in Software

There are three types of risk: the risk of the known, the risk of known unknowns, and the risk of unknown unknowns. The known risks are understood through past history and customer statements of needs. Unfortunately, customers are unaware of over 60% of the potential requirements; this is the experience of this author working with many organizations. They recognize the need for a requirement only after they "don't get it". This is particularly true for software specifications. Many companies attempt to make use of lessons learned, but most lack formal and verifiable protocols. Some known risks can be identified through tools such as Failure Mode and Effects Analysis, Fault Tree Analysis, and Event Tree Analysis, so some progress is being made in handling the known risks. The other two classes of risk are significant, but little progress has been made on them.


Risk of Known-Unknowns

These risks are known somewhere, but not to the specification writer. They can be discovered through anonymous user surveys, conducted without any fear of blame. Users include the service personnel who handle user complaints. Users can be asked to report on their own experiences and on the experiences of co-workers. Data on a major airline, presented at an FAA/NASA workshop [1], shows the extent of unpredicted failures:

  • Number known to the FAA: about 1%, or 130
  • Number actually in the airline's files: about 2%, or 260
  • Problems reported confidentially by employees: about 13,000
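The percentages above are taken against the confidential reports, which makes the underreporting gap concrete: official channels captured only one to two percent of what employees reported anonymously. A quick check of the arithmetic:

```python
# Workshop figures from the article: confidential employee reports
# versus problems visible through official channels.
confidential = 13_000
known_to_faa = 130       # about 1% of the confidential total
in_airline_files = 260   # about 2% of the confidential total

# The official record understates the problem count by a factor of ~100.
underreporting_factor = confidential / known_to_faa
print(known_to_faa / confidential)      # fraction known to the FAA
print(in_airline_files / confidential)  # fraction in airline files
```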


Risk of Unknown-Unknowns

These risks are usually unpredictable with the tools we have today, because the systems are too complex. No longer are we dealing with a single mechanical system that performs and stands alone. The software typically interacts with several systems, resulting in thousands of possible interactions for an aircraft system and for some automotive applications.

The key point is that we are dealing with a system made up of several systems, whose interactions are unbounded. We cannot know how the system-of-systems will behave by knowing only the behavior of the individual systems. Tweaking one system without knowledge of the inter-system behavior is doomed to failure. Unknown-unknown risks result from a lack of knowledge of the interactions and the associated behavior of the system-of-systems: altering the behavior of any part affects other parts and connecting systems.

We need to capture macro-level interactions before we write the software specification. This view comes from the System Performance Specification, which is then translated into an interactions matrix as explained in this paper. A cross-functional team from interconnecting systems must be involved in writing the specification.
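An interactions matrix of the kind described can be represented very simply: one row and one column per system, with each non-empty cell describing what one system sends to another. The sketch below is a minimal illustration; the system names and interactions are hypothetical examples for a ventilator, not from the article.

```python
# Hypothetical sketch of an interactions matrix for a system-of-systems.
# System names and interaction descriptions are illustrative assumptions.

systems = ["ventilator_control", "patient_monitor", "alarm_unit", "power_mgmt"]

# interactions[a][b] describes what system a sends to system b (None = none known).
interactions = {a: {b: None for b in systems if b != a} for a in systems}

interactions["patient_monitor"]["ventilator_control"] = "breathing-rate signal"
interactions["ventilator_control"]["alarm_unit"] = "out-of-limit event"
interactions["power_mgmt"]["ventilator_control"] = "low-battery notification"

# Each non-empty cell is a candidate software function for the specification.
candidate_functions = [
    (a, b, desc)
    for a, row in interactions.items()
    for b, desc in row.items()
    if desc is not None
]
for a, b, desc in candidate_functions:
    print(f"{a} -> {b}: {desc}")
```

Walking the cross-functional team through every cell, including the empty ones, is what surfaces interactions that no single system's owner would think to specify.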

To discover unknown unknowns we need to go beyond FMEAs. ISO 14971 on risk management of medical devices suggests tools such as Preliminary Hazard Analysis and Fault Tree Analysis. The aerospace industry also uses Event Tree Analysis, Sneak Circuit Analysis, and System Hazard Analysis.

Writing Software Specifications

Each interaction allocated to software becomes a software function. The composite of all the functions then becomes the “Software Performance Specification.” Such a specification describes “what” the software system will do, not “how.” The “how” portion belongs to the next level, which is “Software Design Specification.” Note that the software performance specification includes reliability, maintainability, the user interface, prognostics requirements, diagnostics requirements, and response to errors by the user. Specifications should include positive as well as negative requirements.
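One way to keep the "what" separate from the "how" is to give each entry in the performance specification a uniform record: the interaction it came from, the requirement statement, whether it is positive or negative, and how it will be verified. A minimal sketch, with illustrative field names and an invented requirement ID:

```python
from dataclasses import dataclass

# Hypothetical record for one entry in a Software Performance Specification.
# The article requires that each interaction become a function with a
# verifiable "what"; the fields and example values below are assumptions.

@dataclass
class SpecRequirement:
    req_id: str        # hypothetical identifier scheme
    function: str      # the interaction allocated to software
    statement: str     # "what" the software shall do (never "how")
    kind: str          # "positive" or "negative"
    verification: str  # demonstration, inspection, analysis, simulation, similarity

req = SpecRequirement(
    req_id="SPS-017",
    function="breathing-rate regulation",
    statement="Regulate the breathing rate within 8-30 breaths/min, "
              "responding to a setpoint change within 2 s.",
    kind="positive",
    verification="demonstration",
)
print(req.req_id, req.kind, req.verification)
```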

Positive requirements

Positive requirements are those that come from customers and from knowledge of the interface requirements. Also included are non-requirements, which may require historical reference and maintenance of the database and its structure. Requirements should be clear enough that they are not open to interpretation. A requirement may be subject to interpretation when defined as: "The ventilator shall regulate the patient's breathing rate within the specified limits." This looks clear, but is not: it does not specify constraints on how quickly the breathing rate may change, and the quality of that acceleration is also subject to interpretation.
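The ambiguity disappears once the limits and the rate-of-change constraint are stated as numbers that a test can check. A minimal sketch, using illustrative values rather than clinical ones:

```python
# Sketch: making the breathing-rate requirement measurable.
# All numeric limits here are illustrative assumptions, not clinical values.

RATE_MIN, RATE_MAX = 8.0, 30.0   # allowed breathing rate, breaths/min
MAX_RATE_CHANGE = 2.0            # allowed change, breaths/min per second

def rate_within_limits(rate):
    """Check the stated limit: rate stays inside [RATE_MIN, RATE_MAX]."""
    return RATE_MIN <= rate <= RATE_MAX

def change_within_limits(prev_rate, rate, dt_seconds):
    """Check the previously unstated constraint on acceleration of breathing."""
    return abs(rate - prev_rate) / dt_seconds <= MAX_RATE_CHANGE

print(rate_within_limits(16.0), change_within_limits(16.0, 17.0, 1.0))
```

With both functions in hand, "within the specified limits" is no longer open to interpretation: a verifier either passes or it does not.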

Negative requirements

These describe the dangerous things that the system must not do. An example of such a requirement: "There shall be no sudden failure of the ventilator at any time." One may say that this is common sense, obvious to every software engineer. Wrong! It takes a lot of thinking to prevent the software from entering unsafe states, and special tests are required to make sure such events cannot happen. On August 1, 2005, a Malaysia Airlines Boeing 777 climbed to 3,000 feet and almost stalled. The pilots were able to disengage the autopilot, but the auto-throttles refused to disengage. As the pilots tried to land, the primary flight display gave a false low-airspeed warning, and it also warned of a non-existent wind shear [2]. Such is the nature of unknown unknowns. Clearly identifying the negative requirements in each specification lowers the risk of unknown unknowns.
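A negative requirement can be tested mechanically when the software's modes are modeled as a small state machine: enumerate every reachable state and assert that none of them is a forbidden one. The states and transitions below are a hypothetical ventilator example, not the article's design:

```python
# Sketch: checking a negative requirement ("no sudden, unannounced stop")
# by exhaustively exploring a small state machine. States and transitions
# are illustrative assumptions.

TRANSITIONS = {
    "standby":     {"start": "ventilating"},
    "ventilating": {"fault": "safe_hold", "stop": "standby"},
    "safe_hold":   {"reset": "standby"},
}
FORBIDDEN = {"silent_stop"}  # a sudden failure with no alarm or handoff

def reachable(start):
    """Return every state reachable from `start` via any event sequence."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(TRANSITIONS.get(s, {}).values())
    return seen

# Negative requirement: no reachable state is forbidden.
print(reachable("standby").isdisjoint(FORBIDDEN))
```

On a real device the state space is far larger, which is why model checkers exist, but the principle of proving a bad state unreachable is the same.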

If there are hundreds of interactions, a likely scenario, it is prudent to at least identify the safety-critical interactions, based on knowledge of the safety-critical functions, and highlight them in the specification. Partitioning the safety-critical software in this way prevents corruption from interfacing systems. If possible, the hardware in which the software is embedded should also be highlighted for safety.

The constraints

Constraints are just as important. These are boundaries for safety, reliability, modifiability, fault recovery, and operational limits. In addition, there should be brainstorming with questions such as:

  • How should the system respond to false, invalid, or absent inputs?
  • Should a critical function be clock-driven or event-driven?
  • What are the limits on waiting for input and on processing time?
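The first and third questions above can be answered in the specification by naming an explicit classification for every input, including a wait limit. A minimal sketch, with illustrative timeout and range values:

```python
# Sketch: explicit handling for false, invalid, and absent inputs,
# plus a limit on waiting. Values are illustrative assumptions.

INPUT_TIMEOUT_S = 0.5  # maximum wait for a sensor reading, per the spec

def classify_input(value, elapsed_s):
    if elapsed_s > INPUT_TIMEOUT_S:
        return "absent"            # reading did not arrive within the limit
    if value is None:
        return "absent"            # channel delivered nothing
    if not isinstance(value, (int, float)):
        return "invalid"           # wrong type on the interface
    if not (0.0 <= value <= 60.0): # physically implausible breathing rate
        return "invalid"
    return "valid"

print(classify_input(16.0, 0.1))
print(classify_input(None, 0.1))
```

Forcing every branch to be named in the specification is exactly what turns "should" brainstorming answers into testable requirements.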

Brainstorming should include the conditions under which the software can:

  • Fail to perform a function reliably
  • Fail to perform a function safely
  • Fail to perform a function when needed
  • Perform a function when not needed, such as deploying an air bag in a car when there is no accident
  • Perform functions that are not in the specification
  • Fail to stop a task at the right time
  • Lose input or output
  • Fail to execute a function or a task
  • Have intermittent behavior
  • Get corrupted by an operating system
  • Fail from an incorrect request by a user
  • Have incomplete execution
  • Be unable to execute critical interruptions
  • Be unable to fail safely
  • Be unable to respond safely when hardware fails (one recall involved cars in which the door would open but not close in cold temperatures; should software limit the speed in such conditions?)


Brainstorming on interactions should also include the system response to situations such as:

  • EMI/RFI
  • Coding errors
  • Logic errors
  • Input/output errors
  • Data handling
  • Definition of variables
  • Interface failure
  • Failed hardware
  • Communication failure
  • Power outage
  • Omissions in the specification
  • Corrupted memory
  • Insufficient memory
  • Operational environment
  • Loose wires and cables



Measurability is Just as Important
 

This includes the ability to verify and validate before the product goes to market. A requirement such as "the software reliability shall be higher" is neither verifiable nor capable of being validated before release. The industry has simply not developed good design-qualification tests to predict reliability during the software development phase. Moreover, since software specifications often miss over 60% of the potential requirements (you will discover this quickly when you brainstorm on the interactions of the system-of-systems), does it make sense to predict reliability? A common reliability measure for a new design, fault density (for example, the number of software bugs per thousand lines of code), is invalid because the measurement itself is flawed: many requirements are usually missing from the specifications.
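The arithmetic behind fault density makes the article's objection easy to see: the metric only counts defects against requirements that were written down. The numbers below are illustrative, not from the article:

```python
# Fault density as commonly computed: defects per thousand lines of code
# (KLOC). All figures are illustrative assumptions.

defects_found = 42
lines_of_code = 12_000
fault_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
print(fault_density)

# The article's point: if ~60% of the potential requirements were never
# written, defects against those requirements are invisible to this metric,
# so the measured density understates the true defect potential.
```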

Generally, validation implies testing, and the quality of the test determines the quality of the validation. Verification refers to indirect measurement or assessment. Each requirement must be verifiable or capable of being tested. Verification may consist of:

  • Demonstration: A functional verification by observing events through the random exercise of the software. It includes appropriate drivers, interrupts, or integrated hardware to verify that all requirements are met safely.
  • Inspection: Visual examination of the code or the logic design to verify the absence of mistakes and develop confidence that the requirements are met.
  • Analysis: Verification through tools such as Software Failure Mode and Effects Analysis, Fault Tree Analysis, Software Sneak Circuit Analysis, or brainstorming.
  • Simulation: Includes mathematical models and random responses through tools such as Monte Carlo simulation.
  • Similarity: Consists of satisfying verification requirements that were previously satisfied through other programs.
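Verification by simulation, the fourth item above, can be as simple as Monte Carlo sampling of sensor noise to estimate how often a requirement holds. The noise model and limits in this sketch are illustrative assumptions:

```python
import random

# Sketch of verification by simulation: Monte Carlo sampling of a noisy
# breathing-rate measurement against specification limits. The noise model
# and the limits are illustrative assumptions.

random.seed(0)  # fixed seed so the run is reproducible

RATE_MIN, RATE_MAX = 8.0, 30.0  # specification limits, breaths/min

def simulate_once():
    true_rate = 16.0
    measured = true_rate + random.gauss(0.0, 1.0)  # additive sensor noise
    return RATE_MIN <= measured <= RATE_MAX

trials = 10_000
in_spec = sum(simulate_once() for _ in range(trials)) / trials
print(in_spec)  # fraction of trials within the limits
```

A richer model would sample setpoints, timing jitter, and fault injections as well; the value of the method is that it exercises combinations of conditions no hand-written test list would enumerate.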
     

CONCLUSION

Writing software requirements cannot be accomplished without comprehensive system thinking. The system-of-systems interactions have to be understood, and creative brainstorming has to take place. A project that takes these actions is likely to stay ahead of schedule, because fixing missed requirements later is costly and time-consuming.


REFERENCES

[1] Farrow, Douglas R., speech at the Fifth International Workshop on Risk Analysis and Performance Measurement in Aviation, sponsored by the FAA and NASA, Baltimore, August 19-21, 2003.
[2] Jayaswal, Bijay, and Patton, Peter, Design for Trustworthy Software, Prentice Hall, 2007.


 
