Continuing My Weekly Series of Book Excerpts
(from Experiential Learning: Simulation)
A simulation doesn’t have to be “real” to be successful as an experiential learning tool. What has to be real are the feelings it stimulates in the participants, for feelings are what drive learning. Indeed, too much realism can be harmful to the goal of learning. If the simulation is too close to the participants’ real situation, they may be unable to be objective about what they did and what resulted from what they did.
The following sections offer some other effects you may experience if your simulation hits too close to home.
Too accurate detail
Scaling difficulties aren’t the only reason we don’t attempt direct simulations to obtain meaningful measurements, but they’re a more than adequate one. We’re not looking for numbers. It’s not answers we seek, but insights. Consequently, when you design a direct simulation, don’t waste your time trying to find “exact” information about the factors in the simulation.
There are other reasons, as well, for not attempting a more realistic project. It’s possible, and we have done it often for software projects. Usually, though, such a project takes more time than people are willing to devote to “a class,” and, even more important, it generates far more data than we can analyze effectively in hours or even days. Still, we have done such simulations, for example, in a semester-long college class in software development. Teams in such classes have developed useful software tools that became successful commercial products. One team even developed a hardware computer that executed a high-level language as its machine language.
Too embarrassing
When teaching within a real organization, there may be a much stronger reason to avoid too much accuracy in a direct simulation. If the simulation reveals something embarrassing, anyone who might look bad will try to discredit the entire learning cycle.
A case in point: We were engaged to conduct a week-long workshop for a university’s IT department, to help them improve the quality and timeliness of their in-house-developed software. On Tuesday, we conducted a four-hour simulation of their development process. One of the inventions showed the bad effects of management’s unwillingness to follow some simple suggestions by the testing team.
Both sides–the testers and the developers–were excited to learn how these management actions, or inactions, affected quality and schedules. The participants were so enthusiastic that we were sure the final three days of the workshop would result in significant productive changes in the way they developed software. But nothing ever happened.
On Wednesday morning, half the students–the development team–simply didn’t show up. When we investigated, we learned that the development manager had forbidden his employees to attend the remainder of the class. When we confronted him about his decision, he said, “It’s dangerous to let developers and testers get to know one another. If they become friends, the testers will not want to embarrass the developers by finding their bugs.”
Unnecessary detail
Paradoxically, realism often interferes directly with learning from a simulation. For some years, as part of our consulting, we used a simulation of a software development project in which the class was to organize and produce an actual software tool. We were hoping the participants would discover many project pitfalls–information they could apply directly on their jobs.
To add realism to the project, we added a number of the pitfalls we were hoping they would learn to recognize and perhaps avoid. For instance, one of the learning leaders acted as the top executive above their project, and every fifteen minutes or so, he would call a “progress meeting.” We were hoping they would learn to appreciate how much such frequent meetings could disrupt a project. We were pleased to see how well these meetings actually disrupted their project.
But after a few classes, we began to see a pattern in the invention phase. Yes, the participants recognized how much frequent status meetings disrupted the project, but they said, “If management didn’t keep interrupting us to check on progress, we would have done fine. You prevented us from succeeding.”
In other words, they were saying, “We don’t have to learn something or change our behavior. We need our management to change.” If the class had involved their executive management, that might have been a good result, but it was not the result we were seeking.
We decided to remove that detail from the simulation. The learning leader who played the role of executive management called no meetings at all. But guess what? The project was still messed up–late and out of control. In the invention phase, they said, “If management didn’t keep interrupting us to check on progress, we would have done fine.”
But this time, they couldn’t blame their learning leaders. It was their own teammates who kept checking status, so now they had to look inward to discover what it was about status meetings that needed to change. By not trying to insert so much detail in the simulation, we allowed the pitfalls to arise from the participants’ own behavior–behavior they could actually do something about.
In realistic simulations, less is more. It’s more because it’s more realistic to let their failures (or successes) arise from their own behaviors, not from intervention by the learning leaders. Once we learned this principle, we began pruning our simulations to the lowest level of leader participation. For instance, participants argued that they would have been more successful without the specification changes we were introducing randomly. Such changes are a familiar obstacle to anyone who has ever developed software, but again, by adding this detail, we simply gave them another excuse–another way to avoid responsibility for their own unproductive behavior. When we stopped introducing such spec changes, their own developers kept thinking of “nifty features” to introduce into their own product.
Almost but not quite
But realism isn’t the only problem with detail. Providing detail at all may undermine the learning value of a simulation. The more details you supply, the more you seem to invite the participants to blame slight flaws in some detail. It’s as if the participants say, “It all seemed so real that we were thrown off course when this detail (pick one) wasn’t quite right.”
Even if that detail was essentially unrelated to the learning goals of the simulation, it seems to provide an excuse for failure–and thus an excuse for not learning.