Quality Assurance in Business Simulation Design
The software craft aspect of business simulation design - exploring how design issues and complexity impact quality, the types of errors, quality management and testing.
My early career in manufacturing involved quality management. I then designed leading-edge computer systems, working for a computer company, and authored two books on Data Processing [1 & 2]. Coupled with my experience of actually using business simulations in company training, this has convinced me that assuring design quality is vital.
This page summarises a paper I presented at the 2014 ABSEL Conference that was nominated for the Best Paper Award - as the only non-academic nominee I was very pleased!
Designing business simulation models is especially challenging because it is a creative process and the software will be used by a wide spectrum of users, usually with little or no experience of the simulation software.
Business simulation models are complex software, and this complexity has several aspects: model size, arithmetic complexity, cyclomatic complexity, structural complexity and dynamic complexity.
Business simulations suffer the same errors faced by all software, but their nature - the usage issues - amplifies the impact of those errors.
Quality management is much more than testing. It is affected by the choice of modeling language and requires good documentation, structure, methodology and verification support, together with defensive programming and refactoring.
Quality management builds quality in during design and prepares for the final stage - software/simulator testing. I use the following testing methods, but their use depends on the modeling language, because some languages make certain testing methods difficult or impossible.
Designing a business simulation is a creative, agile, iterative-incremental process.
Unlike traditional data processing design, business simulation has a very considerable creative side. This leads to conflict between creative design and engineering design - a conflict amplified by the personality (emotional) traits of the designer.
Designing a business simulation is an iterative-incremental process where the design emerges over time. This tends to lead to poor quality and poor documentation. As the design emerges and evolves, the model is revisited and added to - changes that can cause problems in other parts of the model.
There are several issues with the use of the simulation:
Designer is not User
Unlike many spreadsheet budgetary planning uses, the designer of the business simulation model is not its user - the user is the tutor or a participant. Consequently, time can be wasted as the user learns the user interface, and the user may not fully understand the algorithms and may question the results.
Unlike data processing software, where entries can be constrained, participants have the freedom to make radical and unusual decisions - decisions that stress the software. For example, learners from one company stopped serving markets that they saw as difficult, even though this was financially foolish and, when designing the simulation, I did not think it would be done. As a consequence, I had to ensure that having no sales in a market sector did not crash the simulation. (It is interesting that this action was embedded in the corporate culture and eventually, in the real world, the company dropped out of markets - something that destroyed the company!)
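A guard of this kind can be sketched as follows - a minimal, hypothetical example (the function name and formula are mine, not the actual model code) of a market-share calculation that survives a sector nobody serves:

```python
def sector_market_share(own_effort: float, total_effort: float) -> float:
    """Share of a market sector based on relative selling effort.

    If every team withdraws from the sector, total_effort is zero and a
    naive division would crash the simulation mid-run; instead the
    sector simply records no sales. (Illustrative only.)
    """
    if total_effort <= 0.0:
        return 0.0          # nobody serves this sector: no sales, no crash
    return own_effort / total_effort

assert sector_market_share(10.0, 0.0) == 0.0   # abandoned sector survives
assert sector_market_share(2.0, 8.0) == 0.25   # normal case unchanged
```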
Finally, the participants (and the tutor) become emotionally involved. When things are working properly this is good, but if things go wrong it leads to disengagement (and anger) for both the participants and the tutor using the simulation.
My simulation models range in size from 271 statements (for a very simple simulation) to 2127 statements (for a complex simulation), with the total simulator (software) ranging in size from 10,000 to 12,000 statements. Because I use a platform, most (85 to 95%) of the software already exists. As I create the simulation model using a high-level language rather than a spreadsheet, the model size is reduced substantially (by about 80%). Using existing software (the platform) and, where possible, existing models reduces the amount of code created very substantially, and as the likelihood of errors and the need for testing correlate with code size, this improves quality and reduces design time.
This is the complexity of the formulae used to replicate the business. These range from reasonably simple accounting and operational calculations (white-box models) to sophisticated and complex economic and market models - models that are non-linear and difficult to visualise.
Besides calculations (algorithms), a simulation model is logically complex, with different decisions causing different actions. This means that, depending on the decisions made and the current business situation, the processing paths differ. An analysis of twenty business simulations showed that, typically, there is a new path for every five statements. With my models ranging in size from 271 to 2127 statements, the number of paths ranged from 36 to 434 - each of which must be followed correctly.
Here the concern is that calculations are done in the right sequence and that decisions are properly linked to results. This is important because business activities such as sales, production, materials flows and cash flows occur in parallel, interact and intertwine.
Dynamically, business simulations are feedback systems similar to servomechanisms, and this can lead to instability or erratic results. To explain: commonly, decisions may not impact results immediately. For example, information about a price cut may take time to penetrate the market. As a result, participants may over-correct by cutting the price further. Then, having cut too far, they increase the price. The delay clouds the understanding of the link between the decision and the results (causing the business simulation to wobble and crash like a novice cyclist). Additionally, for some simulations there are feedback loops embedded in the logic. For example, when developing a distribution management simulation, I built in a customer service model. Here, if participants stocked out, this caused a fall in service image that reduced sales - something that eventually caused overstocking and later stock-outs. This caused so much consternation during an alpha test with the client that the model was removed!
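The delayed-response effect can be illustrated with a toy model (entirely my own sketch, not the author's pricing logic): the market only sees last period's price, so a team that corrects too aggressively overshoots on every cycle.

```python
def simulate(k: float, periods: int = 12) -> list[float]:
    """Toy price feedback loop with a one-period information delay.

    Demand follows 200 - 10 * price, so 100 units/period is reached at
    a price of 10.0. The team corrects its price in proportion (k) to
    the gap between actual and target sales.
    """
    price = 12.0                        # start above the 10.0 equilibrium
    history = []
    for _ in range(periods):
        sales = 200.0 - 10.0 * price    # market responds to LAST period's price
        price -= k * (100.0 - sales)    # team corrects toward 100 units/period
        history.append(price)
    return history

# a gentle correction settles; an aggressive one oscillates ever wider
assert abs(simulate(0.05)[-1] - 10.0) < 0.1   # stable
assert abs(simulate(0.25)[-1] - 10.0) > 2.0   # unstable, like the novice cyclist
```

This is exactly the instability that Dynamic Stability Testing (below on this page) sets out to catch before participants do.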
These are the computer language equivalent of the human language spelling and grammatical errors. Happily, most development platforms flag syntax errors as the software is created and this means that, generally, these errors do not cause problems.
Here a bug or combination of decisions cause the simulator to crash. Even if the problem can be resolved, trust will be destroyed and, usually, both the tutor and the participants will feel that the simulation is flawed. This is a major problem because of the emotional involvement in the simulation and the freedom participants have with their decisions.
Here the wrong calculations are done (due to algorithmic complexity), the wrong logic path is followed (due to cyclomatic complexity) or calculations are done in the wrong order (due to structural complexity). Logic errors are insidious: they may not be apparent until a participant questions the results, and on finding that the results are wrong, trust in the simulation is lost, causing participants to question prior results and become emotional.
It is a truism that quality cannot be tested into any product, and this is especially true for business simulations because of their size and complexity. It is therefore vital to manage quality throughout the design - I discuss this here.
Using a design methodology is vital to ensuring quality and that the simulation is designed to time, cost and purpose.
Model and Data Structure
Here the simulation model structure (architecture) has a major impact on quality. Of particular importance is having clear separation of data, the model and, if appropriate, decision entry and reporting routines. The model is separated into a series of sub-models (objects) that clarify the logic, are easily tested and can be reused.
Putting data and parameters in separate structures ensures that, when necessary (during calibration and updating), changes are properly implemented.
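The separation of parameters from model logic might look like this - a minimal sketch with invented names and values, not taken from the author's models:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MarketParams:
    """Calibration data kept apart from the model logic, so re-tuning
    the simulation never means editing (and risking) the algorithms."""
    base_demand: float = 1000.0
    price_sensitivity: float = 2.5

def period_demand(price: float, p: MarketParams) -> float:
    """The model reads parameters; it never contains magic numbers."""
    return max(0.0, p.base_demand - p.price_sensitivity * price)

# recalibrating means swapping the data object, not touching the model
tuned = replace(MarketParams(), base_demand=1200.0)

assert period_demand(100.0, MarketParams()) == 750.0
assert period_demand(100.0, tuned) == 950.0
```

Because the parameter object is frozen, a sub-model cannot accidentally overwrite calibration data mid-run - a small structural choice that removes a whole class of errors.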
This has a major impact on quality assurance as it impacts testing (White Box Testing and Code Inspection), structure, documentation and the size of the simulation model. For example, for spreadsheets Code Inspection is difficult and error-prone, White Box Testing is very difficult, in-code documentation has issues, and it can be difficult to structurally separate sub-models, data and reports. Further, compared to BASIC, spreadsheet models are much larger.
Learn more about modeling languages
This involves designing the model so as to minimise problems - especially run-time exceptions and logic path errors. For example, dividing by zero usually causes a run-time exception and crashes the software. Likewise, the way decimal numbers are stored can cause difficulties in conditional logic.
Defensive programming involves a deep understanding of the causes of problems - something developed from experience.
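Both problems mentioned above - the zero divisor and the stored decimal - can be defended against in a few lines. This is a generic sketch (names are mine), not the author's platform code:

```python
import math

def safe_ratio(numerator: float, denominator: float, default: float = 0.0) -> float:
    """Guard the division that would otherwise raise ZeroDivisionError
    and crash the run (e.g. average price when unit sales are zero)."""
    return numerator / denominator if denominator != 0 else default

def roughly_equal(a: float, b: float, tol: float = 1e-9) -> bool:
    """Binary floating point rarely stores decimals exactly, so
    conditional logic should compare with a tolerance, not with ==."""
    return math.isclose(a, b, abs_tol=tol)

assert safe_ratio(500.0, 0.0) == 0.0     # no crash on zero sales
assert (0.1 + 0.2) != 0.3                # the exact comparison misfires
assert roughly_equal(0.1 + 0.2, 0.3)     # the defensive comparison holds
```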
This involves revisiting the software during development to improve it - as such, it parallels how an author revisits and hones his or her novel. The iterative-incremental design process lends itself to refactoring as you return to sub-models to review and add to them. But there is a risk that changes in one area will impact other areas.
Documentation is central to all types of software design. For business simulations there is a need for external documentation (on paper), online documentation (information available when the simulation runs) and in-model documentation (documenting individual algorithms and logic statements).
Good design practice means that these forms of documentation are done throughout design in parallel to model creation rather than after design (although the simulation creator may find this tedious).
Here additional reports are created to support the design process - to verify and calibrate models. There can be a significant number of these: for my Training Challenge simulation, 64% of the total reports were there to support design (with 36% used during simulation runs). The scale of this is clear when you consider that Training Challenge had 210 reports in total - 134 to support design and only 76 used during simulation runs.
Beyond this, my simulation platform can switch the model into a design mode that documents variables and online help, and in doing so helps testing and documentation.
The professional design of a simulation builds in quality assurance and management at every step and requires a deep understanding of software design practice coupled with software design experience.
Black Box Testing
This involves checking that decisions create the right results, but without access to the internal workings of the software. It is like a doctor diagnosing an illness based on visual inspection, without access to X-rays, blood tests or even asking questions. In my experience it is the software equivalent of hunting the shell game's pea!
This is a basic testing method, and because one does not have access to the internal workings of the software, it is very time-consuming and error-prone.
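In code terms, a black-box check only touches the simulator's public entry point - here `run_period` is a hypothetical stand-in for whatever interface the simulator exposes, with an invented demand formula:

```python
def run_period(decisions: dict) -> dict:
    """Stand-in for the simulator's public interface: decisions in,
    reported results out. The test below never looks inside."""
    price, promo = decisions["price"], decisions["promotion"]
    units = max(0.0, 1000.0 - 40.0 * price + 2.0 * promo)
    return {"units_sold": units, "revenue": units * price}

# black-box test: known decisions must produce the expected results,
# with no visibility of the intermediate calculations
results = run_period({"price": 10.0, "promotion": 50.0})
assert results["units_sold"] == 700.0
assert results["revenue"] == 7000.0
```

The weakness the text describes is visible here: if `revenue` came out wrong, this test alone could not say which internal step was at fault.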
White Box Testing
This involves exploring the internal working of the software - seeing how data changes and exploring logical paths.
The ability to do this depends on the modeling language and development platform - my approach facilitates White Box Testing.
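One way to see the contrast with Black Box Testing is a model that exposes its intermediate values - a hypothetical sketch (names and figures are mine) in which each step of a margin calculation can be checked, not just the final answer:

```python
def gross_margin(units: float, price: float, unit_cost: float, trace: dict) -> float:
    """Compute gross margin while recording every intermediate value,
    so a white-box test can follow the data through each step."""
    trace["revenue"] = units * price
    trace["cost_of_sales"] = units * unit_cost
    return trace["revenue"] - trace["cost_of_sales"]

trace: dict = {}
margin = gross_margin(100.0, 12.0, 7.0, trace)
assert trace["revenue"] == 1200.0        # each internal step is visible...
assert trace["cost_of_sales"] == 700.0
assert margin == 500.0                   # ...not just the end result
```

If the margin were wrong, the trace would show immediately which intermediate step failed - exactly what Black Box Testing cannot do.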
This involves reading through the model's source code (checking individual calculations and paths). Also, the modeling language used impacts the ease or difficulty of Code Inspection.
Code Inspection is a tedious activity and, because of model size, it may not be very rigorous. Even so it is crucial, and the language I use and my design approach are key.
Structural Soundness Testing
The cyclomatic and structural complexity of business simulations mean that it is necessary to extend testing to check the structure of the model - in particular processing sequences, carry over between periods and between models and parameter initialisation.
I build my simulations as a series of sub-models (objects) and this helps Structural Testing.
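A structural check of this kind can be sketched as follows - an invented stock-flow example (not the author's sub-models) testing both the processing sequence within a period and the carry-over between periods:

```python
def run_period(opening_stock: float, production: float, demand: float):
    """One simulated period: operations run first, then the market,
    then the accounts - the sequence a structural test must verify."""
    available = opening_stock + production   # operations first
    sales = min(demand, available)           # then the market
    closing_stock = available - sales        # then the accounts
    return sales, closing_stock

# period 1: 50 opening + 100 produced, 120 demanded
sales1, stock1 = run_period(opening_stock=50.0, production=100.0, demand=120.0)
assert stock1 == 30.0                        # correct within-period sequence

# period 2 must initialise its opening stock from period 1's closing stock
sales2, stock2 = run_period(opening_stock=stock1, production=100.0, demand=120.0)
assert sales2 == 120.0 and stock2 == 10.0    # correct carry-over
```

If the sub-models ran in the wrong order (say, sales before production arrived), or a period started from a stale stock figure, these assertions would fail - which is the point of Structural Soundness Testing.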
Dynamic Stability Testing
Here you are ensuring that the simulation is stable and does not over respond to decisions. It involves checking the internal feedback loops, response delays and, if appropriate, stochastic responses - activities that are helped by a deep understanding of systems and industrial dynamics.
Of the testing methods, it is the most difficult as it relies on experience and systems dynamics knowledge.
Piloting and Verification
The testing described above is done by the designer; however, there is also a need to pilot the simulation with the tutors who will use it. I start this session by getting the tutors to participate in the simulation and then spend an equal amount of time discussing issues. For simulations lasting up to a day, I've found that a single pilot is adequate, although for one-day simulations I like to shadow the tutors on their first live run. For longer simulations you may need two or three pilots to verify the simulation and the participant & tutor documentation.
Most recent update: 29/05/15
Hall Marketing, Studio 11, Colman's Wharf, 45 Morris Road, London E14 6PA, ENGLAND
Phone +44 (0)20 7537 2982 E-mail firstname.lastname@example.org