Cognitive Problems and Solutions
During use there are problems with discussion, reflection, learning and engagement that can be solved by suitable meta-composition.
In the early days (1970 through mid-1995) the technology meant that I ran business simulations for clients myself, and by 1995 I had done this more than 2,000 times. This experience led me to see and understand several problems associated with use that need to be taken into account when designing a business simulation.
When using the simulation it is possible that, instead of going round the experiential learning cycle each period, participants miss out on, or spend too little time on, Reflective Observation and Abstract Conceptualisation.
What causes this? There are several reasons: insufficient time, inappropriate complexity, and a focus on making the simulated business a success rather than on learning. There is significant and continuing pressure to shorten course durations, and this can shorten a business simulation to the extent that there is insufficient time for learning. Allied to this, where participants' prior learning, experience and knowledge are insufficient, the simulation will be too complex. Further, complexity may be seen by the designer as desirable in its own right. Finally, participants become very engaged in the simulation as they compete with others and lose sight of the purpose: learning.
How can it be overcome? Structures (Cognitive Prompts and Reflection Triggers) can be built into the simulation to cause participants to step back and reflect. Irrelevant features (Cognitive Noise and Cognitive Clutter) are excluded from the simulation, as dealing with them lengthens the simulation and clouds learning.
Cognitive Prompts are pre-planned tasks and textual statements designed to introduce new issues and tasks. They get learners to prepare for and discuss specific topics and issues, and are particularly useful for simulations where the participants use the simulator themselves.
Cognitive Prompts can be positioned:
The example below is the prompt occurring in my PriceWize simulation directly before participants move on to plan their second set of decisions.
Reflection Triggers are generated by the simulation in reaction to results, where the situation suggests that learners should step back, reflect and discuss issues and future actions. Besides this, reflection triggers can be used to maintain engagement.
Reflection Triggers are positioned:
Reflection is triggered directly after decisions are entered, as part of the decision screening process. Here, triggers flag unusual and inappropriate decisions (inappropriate decisions are those where the current business situation means the decision may cause a major problem).
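Decision screening of this kind can be sketched as a function that checks a proposed decision against the current business situation and returns any triggers raised. This is a hedged illustration only: the function name, the 25% threshold, and the capacity-utilisation rule are assumptions, not rules from any actual simulation.

```python
# Illustrative sketch of decision screening: flag unusual decisions and
# decisions the current situation makes risky. All thresholds are assumptions.

def screen_price_decision(new_price: float, prev_price: float,
                          capacity_utilisation: float) -> list[str]:
    """Return reflection triggers raised by a proposed price decision."""
    triggers = []
    change = (new_price - prev_price) / prev_price
    if abs(change) > 0.25:
        triggers.append("Unusual: price changed by more than 25% in one "
                        "period - discuss why before committing.")
    if change < 0 and capacity_utilisation > 0.95:
        triggers.append("Inappropriate: cutting price while near full "
                        "capacity may create demand you cannot serve.")
    return triggers
```

An empty list means the decision passes screening silently; otherwise the triggers are shown to the participants (or the tutor) before the decision is committed.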
Reflection is triggered about results based on an analysis of business performance as illustrated below.
As illustrated below, no comments are made where results fall in the acceptable range. Where results fall in the questionable range, short, fuzzy textual comments are made, and where results fall in the problem area the comments are more prescriptive.
Reflection Trigger Spectrum
Where the simulation is used directly by the participants, the reflection triggers are provided directly to the participants. Where the simulation is used by the tutor (tutor mediated) the reflection triggers are provided to the tutor who can decide whether or not to use them to challenge participants.
Cognitive Noise is caused by inappropriately ambiguous or granular decisions and results, and by an inappropriately complex economic progression. When designing my PriceWize pricing simulation I decided not to incorporate an economic progression combining a seasonal sales pattern, a weekly sales pattern and random sales fluctuations. Also, although prices could be set on a sector-by-sector basis, basic costs were the same for each market sector.
Cognitive Noise adds to decision-making and result-analysis time (cognitive load) without adding to learning (cognition) and often hides the links between decisions and results.
Cognitive Clutter is caused by incorporating models, decisions and results that are not directed towards the learning goals and so are irrelevant. The figure below shows the relationship between the simulation (A+B) and learning needs (B+C). Area A is the clutter (irrelevant decisions, results and models); Area B is the decisions, models and results that are directed towards learning needs; and Area C is the learning that needs to be provided but is not provided by the simulation.
When designing my PriceWize pricing simulation I considered allowing participants to change capacity, but decided that this was not needed to get participants thinking about capacity use and how it impacted price decisions and profits. Additionally, I used a single capacity level rather than a separate level for each market sector, did not calculate market share, and assumed that sales were made for cash and production was done just-in-time, meaning that participants did not have to be concerned with Working Capital.
How might cognitive noise or clutter occur, and how might a designer justify incorporating them? A common justification is that they are part of the real world and so increase realism (without explaining why realism per se is good). Also, designers of software commonly tend to add features because they are cool, righteous, phat or neat, and so seem valuable and noble; this fulfils the designer's self-actualisation needs rather than the learners' or the learning needs. The solution is simple: base the design on defined learning needs and, for every new decision, model or result, ask how it adds to learning and whether it adds reasonably to duration.
Most recent update: 29/06/15
Hall Marketing, Studio 11, Colman's Wharf, 45 Morris Road, London E14 6PA, ENGLAND
Phone +44 (0)20 7537 2982 E-mail email@example.com