
Plan Verification & Validation Early in the Lifecycle


After watching the movie “Deepwater Horizon” [Deepwater], I was struck by the catastrophic consequences of skipping critical testing. The film shows how bypassing routine and critical tests led to disaster, a situation reflected in many real-world cases, such as those documented in NASA mishap reports [ASSE]. These reports reveal failures throughout the lifecycle, often traceable to missed tests.

Early identification of such defects is possible with modern modeling and simulation techniques. Unfortunately, most systems engineering and architecture work still relies on basic tools like Microsoft Office, which, while excellent for documentation, are insufficient for engineering modeling and simulation.

Verification and Validation (V&V) are traditionally conducted in the later stages of the system lifecycle to ensure that the requirements developed early in the lifecycle have been met. This blog discusses how employing V&V techniques early in the lifecycle, including simulation, can enhance the probability of program success by identifying errors early in development and preparing for subsequent V&V activities.

 
The Lifecycle Model

Systems engineers are likely familiar with the lifecycle V-Model [Forsberg]. Our version is shown in Figure 1. This diagram illustrates the lifecycle phases, from concept development to the specification and construction of system components. It also shows the integration of these components into a complete system, alongside the V&V activities ensuring the system meets its requirements.


Figure 1. The Lifecycle V-Model

 

The “V” shape emphasizes the parallel V&V activities as the system is decomposed. For instance, during architecture development, it is crucial to derive acceptance criteria as the basis for operational test and evaluation (OT&E) and transition activities. A draft test and evaluation master plan (TEMP) should be produced during this phase, though it is often neglected or done inadequately.

In addition to insufficient test planning, architecture development frequently involves creating a series of drawings in limited languages like the Systems Modeling Language (SysML). While the nine SysML diagram types are necessary, they are not sufficient to capture all system information, such as risks, decisions, costs, and other parameters.

This work often focuses solely on operations, neglecting maintenance and potential failure modes. Moreover, these drawings are static depictions, which can harbor significant logic errors that only become apparent later in the lifecycle. Discovering errors late is costly and can lead to program cancellation [Albrecht].


Deriving Requirements from Scenarios

During architecture development, functional requirements may not be developed sufficiently to derive the verification requirements needed for test planning. Scenario analysis is often used to model system operations (and maintenance), helping identify functional requirements. The challenge lies in developing a comprehensive set of scenarios that ensures complete functional coverage and supports effective requirements management.

Years ago, I developed an approach that uses a test matrix concept to develop scenarios for architecture analysis [Scenarios]. This involves creating a matrix that juxtaposes scenarios with their characteristics, similar to a test matrix. Figure 2 provides a simple example of such a matrix.


Figure 2. An Example of a Scenario Matrix
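To make the idea concrete, here is a minimal sketch (in Python, purely illustrative) of how such a matrix can be enumerated by combining scenario characteristics and then trimmed to the subset you intend to model. The characteristic names and values are hypothetical, not taken from Figure 2.

```python
# A minimal sketch of the scenario-matrix idea: enumerate candidate scenarios
# by combining characteristic values. Characteristics and values are hypothetical.
from itertools import product

characteristics = {
    "Mission phase": ["Operations", "Maintenance"],
    "Environment":   ["Nominal", "Degraded"],
    "Load":          ["Peak", "Average"],
}

# Each combination of characteristic values is a candidate scenario.
scenarios = [
    dict(zip(characteristics, combo))
    for combo in product(*characteristics.values())
]

for i, s in enumerate(scenarios, start=1):
    print(f"Scenario {i}: {s}")
```

In practice the full combinatorial set is pruned to a manageable, representative subset, which is exactly the selection problem the scenario matrix helps make visible.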


After identifying these scenarios, models for each scenario are created. Figure 3 shows an Action Diagram for the first scenario in the table above [Dam].


Figure 3. Example of Scenario 1 Action Diagram Model


These models help derive the system's functional requirements. Various tools can automate this process by reading the functional elements and their relationships to physical entities, producing requirements and documentation for the system specification. To ensure model accuracy and avoid embedding logic errors, further analysis using simulation techniques is necessary.
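As a rough illustration of that derivation step (not the output of any particular tool), the sketch below walks a toy model of functions allocated to physical components and emits draft “shall” statements. The element names and the requirement template are assumptions.

```python
# Illustrative only: derive draft functional requirements by walking a toy model
# of functions allocated to physical components.
model = [
    {"function": "detect an intrusion",  "allocated_to": "Sensor Suite"},
    {"function": "log the event",        "allocated_to": "Mission Computer"},
    {"function": "alert the operator",   "allocated_to": "Operator Console"},
]

requirements = [
    f"REQ-{i:03d}: The {element['allocated_to']} shall {element['function']}."
    for i, element in enumerate(model, start=1)
]

print("\n".join(requirements))
```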


Application of Simulation

Simulation is a familiar V&V method, alongside analysis, inspection, demonstration, and testing. Early in the lifecycle, simulation techniques can be applied to models developed from scenario analysis.

Discrete Event Simulation (DES) uses a mathematical/logical model of a physical system that portrays state changes at precise simulated points in time [Albrecht]. Unfortunately, many modeling tools lack embedded DES capabilities, necessitating the redevelopment of models in separate simulation tools, which may introduce discrepancies.

Fortunately, some tools, like Vitech’s CORE and SPEC Innovations’ Innoslate, integrate modeling and DES capabilities [Tools]. Figure 4 shows the execution of a model using DES.


Figure 4. Example of Discrete Event Simulation Output from Scenario 1
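For readers unfamiliar with how a DES engine advances time, here is a minimal hand-rolled sketch of the idea: events sit on a queue ordered by time, and the clock jumps from one event to the next. This is not Innoslate’s or CORE’s engine, and the step names and durations are illustrative assumptions.

```python
# A minimal discrete event simulation sketch: events are (time, action) pairs
# on a priority queue; the clock jumps from one event time to the next.
import heapq

def run_scenario(durations):
    """Execute scenario steps in sequence and record when each completes."""
    events, trace = [], []
    heapq.heappush(events, (0.0, "start"))
    steps = iter(durations.items())
    while events:
        clock, action = heapq.heappop(events)   # advance to the next event time
        trace.append((clock, action))
        nxt = next(steps, None)
        if nxt:
            name, dur = nxt
            heapq.heappush(events, (clock + dur, f"finish {name}"))
    return trace

durations = {"Detect": 2.0, "Assess": 5.0, "Respond": 3.0}   # illustrative values
for t, action in run_scenario(durations):
    print(f"t={t:5.1f}  {action}")
```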

Given uncertainties in timing, failure paths, resource variability, and other factors, DES alone may be insufficient. Combining DES with Monte Carlo simulation can better estimate the range of system capabilities. Monte Carlo simulation accounts for these uncertainties through repetitive trials, sampling values from the specified distributions on each run. Figure 5 shows Monte Carlo simulation results for the same scenario as Figure 3.


Figure 5. Example of Monte Carlo Simulation Output from Scenario 1


The Time Tree Map provides the mean and standard deviation of each process step, while the Time Bar Chart shows the distribution of runs within specified time bins. Running the simulation more times increases accuracy and confidence [Driels].
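The sketch below is a hedged illustration of the combined approach: repeated trials draw each step’s duration from a distribution, and the results yield the per-step mean and standard deviation plus a coarse time-bin histogram. The triangular distributions and parameters are assumptions, not values from Figure 5.

```python
# Monte Carlo trials over the scenario: sample step durations, then summarize.
import random
import statistics

# (low, mode, high) for an assumed triangular distribution per step.
steps = {"Detect": (1.5, 2.0, 3.0), "Assess": (4.0, 5.0, 7.0), "Respond": (2.0, 3.0, 5.0)}

def one_trial():
    return {name: random.triangular(lo, hi, mode) for name, (lo, mode, hi) in steps.items()}

trials = [one_trial() for _ in range(1000)]
totals = [sum(trial.values()) for trial in trials]

for name in steps:
    times = [trial[name] for trial in trials]
    print(f"{name:8s} mean={statistics.mean(times):.2f}  stdev={statistics.stdev(times):.2f}")
print(f"Total    mean={statistics.mean(totals):.2f}  stdev={statistics.stdev(totals):.2f}")

# Coarse one-unit time bins, similar in spirit to a time bar chart.
bins = {}
for total in totals:
    bins[int(total)] = bins.get(int(total), 0) + 1
for edge in sorted(bins):
    print(f"[{edge},{edge + 1}) : {bins[edge]}")
```

Because the standard error of the estimated mean shrinks roughly as one over the square root of the number of runs, quadrupling the run count roughly halves the uncertainty in the estimate.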


Test Planning

Early test planning is crucial, especially for operational tests requiring specialized ranges, equipment, and expert participation. These tests are costly and scheduled years in advance, so they must deliver maximum value, which drives the need for early planning. Figure 6 illustrates capturing test expectations and results in a modeling tool.


Figure 6. Example of Capturing Test Plans and Results
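As a simple illustration of holding an expectation and a result together in one record, here is a sketch of such a test case. The field names are assumptions, not Innoslate’s Test Center schema.

```python
# A generic test-case record pairing the expected threshold with the observed result.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    identifier: str
    requirement: str
    expected_max_seconds: float
    observed_seconds: Optional[float] = None   # filled in once the test (or simulation) runs

    def status(self) -> str:
        if self.observed_seconds is None:
            return "PLANNED"
        return "PASS" if self.observed_seconds <= self.expected_max_seconds else "FAIL"

tc = TestCase("TC-001", "REQ-001", expected_max_seconds=12.0)
print(tc.status())            # PLANNED
tc.observed_seconds = 10.4
print(tc.status())            # PASS
```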

 

Future V&V

Combining requirements, models, and V&V test plans enables potential automation of the V&V process. Running simulations against test cases with well-defined performance parameters, including time, resource usage, and cost, makes it possible to track progress from initial estimates to final test results.
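In that ideal state, an automated check might look something like the sketch below, in which simulated performance parameters are compared against a test case’s acceptance limits. The parameter names and thresholds are illustrative assumptions.

```python
# Compare simulated performance parameters against a test case's acceptance limits.
test_case = {
    "id": "TC-001",
    "limits": {"time_s": 12.0, "cost_usd": 5000.0, "operators": 2},
}

simulated = {"time_s": 10.4, "cost_usd": 4100.0, "operators": 2}   # from a simulation run

results = {
    param: ("PASS" if simulated[param] <= limit else "FAIL")
    for param, limit in test_case["limits"].items()
}

print(test_case["id"], results)
```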

The software industry’s integrated development environments (IDEs) facilitate automated testing; systems engineering needs similar IDE capabilities. The techniques and planning approaches described in this blog move toward that ideal state.

 

Employing V&V techniques and planning early in the lifecycle offers significant project benefits. Early error detection saves cost and enhances safety once the system is deployed. Implementing these techniques requires an understanding of V&V needs, which encourages involving V&V-skilled personnel early on and throughout the lifecycle.

 


References

[Albrecht] Introduction to Discrete Event Simulation, M.C. Albrecht, P.E., January 2010, p. 11 (accessed at http://www.albrechts.com/mike/DES/Introduction%20to%20DES.pdf on 2/9/2017).

[ASSE] Applied Space Systems Engineering, Wiley J. Larson, et al., editors, McGraw-Hill Companies, Inc., 2009, p. 387, which provides a good summary of NASA mishap reports and the V&V contributing factors.

[Dam] DoD Architecture Framework 2.0, Steven H. Dam, Ph.D., ESEP, SPEC Innovations, 2014, p. 144.

[Deepwater] For an in-depth explanation see https://www.washingtonpost.com/news/achenblog/wp/2016/09/29/deepwater-horizon-movie-gets-the-facts-mostly-right-but-simplifies-the-blame/?utm_term=.34ca084b11e5 (accessed 2/8/2017).

[Driels] “Determining the number of iterations for Monte Carlo simulations of weapon effectiveness,” Morris R. Driels, Naval Postgraduate School Thesis, April 2004.

[Forsberg] Forsberg, K. and Mooz, H., "The Relationship of Systems Engineering to the Project Cycle," First Annual Symposium of the National Council on Systems Engineering (NCOSE), October 1991.

[Scenarios] “Intelligent Operational Scenarios – a Strategy for Cost-Saving Scenario Selection,” Steven H. Dam, Ph.D., presented at the July 2007 INCOSE International Symposium.

[Tools] Two such tools are Vitech’s CORE and SPEC Innovations’ Innoslate.
