Okay, continuing my effort to answer the question from my previous post, we start with this revision: Given the wide variety of technologies now available, how would you go about recommending the appropriate Learning Model, Instructional Model, Delivery Model, and Assessment Model for any given training or educational need?
First, you need to know exactly what you're trying to accomplish, not just for the learner, but for the organization as well. So I'm going back to the ADDIE approach to instructional design, starting with A: Assess the need. To get to the right model, you need a Discovery. Often, time is of the essence, and a formal research project is out of the question. So my own customized take on several standard methodologies, designed for speed and accuracy, goes something like this:
Discovery Process
1. Get the Story. This is not science; it's Journalism 101. What is the business problem, what is causing it, and what do people believe will solve it? Who, what, where, when, how, and most importantly, why? At this stage, you're strictly interviewing those who should know, gathering what is likely 90% opinion. But a few really good questions up front can move you closer to the right solution more efficiently than anything you can do later.
Business example: "The employees are not getting the job done," becomes, with good questioning, "The salespeople don't know how to sell our latest product," which becomes, "Our front-line sales are off 20% over the last three quarters and we think it's because our new salespeople do not understand the product."
University example: "We need our online MBA revised in order to compete with Big For-Profit University," becomes, with further questions, "Our courses are highly idiosyncratic and uneven, one to the next," which becomes, "Our faculty never quite pulled together as we would have hoped, so we don't have our best product out there."
2. Get the Data. Still not science; still Journalism 101. Check the facts. Whatever actual data you can find--sales numbers, marketing responses, employee retention, anything that sheds light on the story--use it. You also want some of that softer but powerful input straight from the front lines, both personnel and customers; by the time it reaches the stakeholders higher up, it has often been colored and filtered. (A quick sketch of this kind of fact-check follows the examples below.)
Business Example: "We think our new salespeople don't get it," becomes, once you see the data, "Salesperson attrition has climbed by 15% and managers are hiring anyone they can find," which in turn becomes, after interviews, "We have been attracting neither customers nor employees since we launched our new product."
University Example: "Our faculty are not pulling together," becomes, "Student complaints and course drops are high and rising," which becomes, "We have never had a detailed budget or a solid design plan for the MBA," which becomes, "Every faculty member wants it their own way, while the students want consistency."
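To make step 2 concrete, here's a minimal sketch in Python of the kind of fact-check I mean--testing the claimed 20% decline against actual quarterly numbers. Every figure below is a hypothetical placeholder, not data from a real engagement.

```python
# A minimal fact-check sketch for step 2, "Get the Data."
# All figures are hypothetical placeholders.

quarterly_sales = [2_500_000, 2_300_000, 2_150_000, 2_000_000]  # oldest quarter first

# Does the data back the claim that sales are off 20% over the last three quarters?
decline = (quarterly_sales[0] - quarterly_sales[-1]) / quarterly_sales[0]
print(f"Sales change over the last three quarters: down {decline:.0%}")
```

If the numbers don't back the story, you've just saved everyone from solving the wrong problem.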
3. Write the Story. Maybe I should have just been a journalist. But now, you document what you know in a narrative. You're still the reporter, so make it both factual and pointed--and short. If you've been consciously and obviously focused on reality and not politics, you will have become the expert--an objective "outsider" with a clear grasp of what's really going on.
4. Publish the Story. Maybe I've taken the journalism metaphor too far. True, you are not really publishing it, but you are getting it out to the stakeholders so that they can see and verify your sources, facts, and conclusions.
5. Create the Equation. Here's where it pays off. Assuming you get the nod of approval for the story, now you get to the science. Or at least, the math. My equations look something like this:
If A + B = C, and we need C + X, then A + B + Y = C + X.
This is not really algebra, but a mathematical metaphor. A and B are data-backed realities, the facts of the story that are leading to the current poor result, which is C. So A + B = C defines the current problem. In the business case, A is a change to a downmarket product, B is a mental/emotional disconnect between the brand promise and the new product, and C is the loss of loyalty among employees that is hurting sales. In the university case, A is the inconsistency of the program course to course, B is the desire of students to have the consistency they signed up for, and C is the dropout rate.
Now for the solution: C + X is the current state C plus whatever is necessary for success, the X factor. In the business case, X is the sales recovery that comes from realigning employees with the brand. In the university case, X is the improved retention rate and the associated dollars.
So what is Y? (Drum roll) Y is what you're adding to the equation through learning. Y is your business, your baby, the learning answer to the organizational problem (Trumpet fanfare). In the business case, Y is what you do to change the perception of the employees about the new product. In the university case, Y is the redeveloped MBA, with a specific budget, and with general consensus. So whatever you're currently doing, add Y into the mix on the front end, and you'll get the solution you need:
A + B + Y = C + X. Current inputs A and B plus new learning initiative Y equals current state C with improvement X.
This equation can be written any way necessary to get to the solution. Though this is a metaphor, I should point out that it can carry the punch of real math if you add actual dollars--even in broad brush strokes or ballparks. Show what the current state is costing, and what the end state will save or generate.
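Here's what that might look like in ballpark form--a minimal sketch where every dollar figure is a hypothetical placeholder, just to show how the metaphor can carry real numbers:

```python
# Putting ballpark dollars behind A + B + Y = C + X.
# Every figure here is a hypothetical placeholder.

QUARTERLY_SALES_BASELINE = 10_000_000  # hypothetical pre-decline revenue per quarter
SALES_DECLINE = 0.20                   # "sales are off 20%" (from the story)
COST_OF_Y = 250_000                    # hypothetical cost of the learning initiative

# C: what the current state is costing per quarter
cost_of_problem = QUARTERLY_SALES_BASELINE * SALES_DECLINE

# X: the improvement we project Y will deliver (hypothetical: 75% of the loss recovered)
projected_recovery = cost_of_problem * 0.75

# Net value of adding Y over the first four quarters
net_value = projected_recovery * 4 - COST_OF_Y
print(f"Problem cost per quarter:       ${cost_of_problem:,.0f}")
print(f"Projected recovery per quarter: ${projected_recovery:,.0f}")
print(f"First-year net value of Y:      ${net_value:,.0f}")
```

Even rough numbers like these change the tone of the conversation in the next step.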
6. Confirm the Equation. This step is the reason for the previous step. By putting the problem into a formula, you can focus on it objectively and get stakeholders to agree not only on the problem, but on the solution--and on the ROI of the solution. When you get confirmation here, everyone knows the kind of dollars that are at stake. You need this on paper, in black and white (or black and red, to be more symbolically precise), because the solution you build is going to cost something.
7. Build and Implement Y.
Hey, we're finally ready to start looking at the best Learning Models! So what did you gain from your Discovery that allows you to make that all-important Learning Model decision?
I'll get to that (I hope) next time.
Tuesday, April 26, 2011
The best instructional design models for today...
"What are the best instructional design models for eLearning today?" I was asked this recently, and it gave me pause. It's a hard question to answer, because the question is somewhat... tangled.
Considering that the art and science of instructional design has been around since World War II and the "Training Within Industry" initiative, and considering that lesson design has been shown to directly impact learning since at least Robert Gagne in the 1960s, I continue to be impressed by how few of the basics are broadly understood.
So let me unmuddle the question just a bit, then provide an answer. First, the term "instructional design model" refers to the process used to design instruction. The big kahuna of ID models is ADDIE, of course--Assess, Design, Develop, Implement, and Evaluate. When you want to build instruction, if you do each of these things in order, and do them carefully and well, you will likely end up with a serviceable result. But ADDIE flies at 30,000 feet and leaves much detail obscured, so there is plenty of room for pilot error. So I like to blend ADDIE with Rapid Prototyping, in which you quickly get learning chunks out to a slice of the target audience and let them provide feedback. It can only help, and sometimes it can save the day.
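To show what I mean by the blend--this is my own sketch of the idea, not a formal method--here's the prototyping loop written out in Python, with pilot feedback stubbed by a hypothetical scoring function:

```python
# A self-contained sketch of ADDIE blended with Rapid Prototyping:
# develop in small chunks, pilot each one with a slice of the audience,
# and revise until feedback clears a quality bar. All scores and
# thresholds here are hypothetical placeholders.

def pilot_feedback(chunk: str, revision: int) -> float:
    """Stand-in for real pilot-group feedback; returns a 0-1 quality score."""
    return min(1.0, 0.5 + 0.2 * revision)  # pretend each revision helps

def develop_with_prototyping(chunks: list[str], bar: float = 0.8) -> dict:
    results = {}
    for chunk in chunks:                  # Develop: one small chunk at a time
        revision = 0
        score = pilot_feedback(chunk, revision)
        while score < bar:                # Rapid Prototyping loop
            revision += 1                 # revise the chunk...
            score = pilot_feedback(chunk, revision)  # ...and re-pilot it
        results[chunk] = (revision, score)
    return results                        # then on to Implement and Evaluate

print(develop_with_prototyping(["module 1", "module 2"]))
```

The point isn't the code; it's that feedback loops get built into Develop instead of waiting for Evaluate.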
But my inquisitor in this case was actually unconcerned about instructional design models. It turned out the question was really about "learning models." A Learning Model is focused solely on what the learner does, in what order, when, and how. It has implications for everything else--instructor, technology, assignments, content, assessments--but it's not primarily focused on them. It defines a standard process for the learner. I put the Learning Model at the center of all instructional design because its heart and soul is the one thing that really matters... the learner.
Every learning event has a Learning Model, even if no one has defined it, and even if it's not very good ("Death By PowerPoint" is a common one, though not a personal favorite). But even the good ones can be all over the map, from variations on the case method to apprenticeship to more standard classroom lesson approaches. A good starting place for understanding what a Learning Model is would be Robert Gagne and his "Nine Events," which started the whole focus on structuring learning in order to improve outcomes, and which still holds a place of high esteem, because what he developed still works.
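For reference, here are the Nine Events themselves, set down as a checklist you could audit a lesson plan against. The checklist code is my own illustration; the nine events are Gagne's:

```python
# Gagne's Nine Events of Instruction as an ordered checklist.

GAGNE_NINE_EVENTS = [
    "Gain attention",
    "Inform learners of the objectives",
    "Stimulate recall of prior learning",
    "Present the content",
    "Provide learning guidance",
    "Elicit performance (practice)",
    "Provide feedback",
    "Assess performance",
    "Enhance retention and transfer",
]

def audit_lesson(events_covered: set[str]) -> list[str]:
    """Return the events a lesson plan skips, in Gagne's order."""
    return [event for event in GAGNE_NINE_EVENTS if event not in events_covered]

# A "Death By PowerPoint" lesson typically covers only one event:
print(audit_lesson({"Present the content"}))
```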
So, the best Learning Models for today... what are they? That answer has to be filtered through both the Instructional Model and the Delivery Model. And what are they? Well, to clarify terms, an Instructional Model defines the standard process and activities of the instructor, including such basics as whether or not the instructor is human (self-paced online learning, for example, does not require the "instructor" to have a pulse or respiration). It is to instructors what a Learning Model is to learners. Every public speaker has heard of at least this one: "Tell them what you're going to tell them, tell them, then tell them what you told them."
The Delivery Model then defines the platform, whether it is face-to-face (F2F), technology-enhanced, online, or any combination of the above. It also defines which specific technologies, by brand name, are being used to achieve the desired results. Hybrid models are popular, but not always possible. In fact, most technology platforms severely limit the Learning and Instructional Models. The unhappy result is that often the Delivery Model, which should be the last one chosen after the other models are defined, wags the dog.
And speaking of the dog, the answer to my questioner's question should also be shaped by the Assessment Model, the manner and mode of determining how effective the given learning opportunity has been. This could arguably be the most important model of them all, but in order to avoid that discussion I like to include it as a subset of the Learning Model. For this critical component, like many other practitioners, I go straight to Donald Kirkpatrick's classic "Four Levels" of evaluation. Unlike the rest of the Learning Model, the Assessment Model can be, and too often is, completely absent. Those who pay little attention to Learning Models often ignore assessment as well. (So if "Death By PowerPoint" is the Learning Model, I say don't bother with Assessment. The dead are notoriously poor test takers.)
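Since I fold assessment into the Learning Model, a minimal sketch of that pairing might look like the following. Kirkpatrick's four level names are the standard ones; the data structure around them is my own hypothetical illustration:

```python
# The Assessment Model as a subset of the Learning Model, using
# Kirkpatrick's Four Levels. Fields and example data are hypothetical.

from dataclasses import dataclass, field

KIRKPATRICK_LEVELS = {
    1: "Reaction",   # did learners find it engaging and relevant?
    2: "Learning",   # did knowledge or skill actually increase?
    3: "Behavior",   # did on-the-job behavior change?
    4: "Results",    # did the organizational metric (the X factor) move?
}

@dataclass
class LearningModel:
    learner_process: list[str]          # what the learner does, in what order
    assessment_levels: set[int] = field(default_factory=set)

    def assessment_gaps(self) -> list[str]:
        """Name the Kirkpatrick levels this model never measures."""
        return [name for level, name in KIRKPATRICK_LEVELS.items()
                if level not in self.assessment_levels]

model = LearningModel(["read the case", "discuss", "apply on the job"],
                      assessment_levels={1, 2})
print(model.assessment_gaps())  # -> ['Behavior', 'Results']
```

Most programs that assess at all stop at levels 1 and 2; organizational results live at level 4.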
So now, if I were to restate the question in its untangled form, I would put it like this: Given the wide variety of technologies now available, how would you go about recommending the appropriate learning, instructional, delivery, and assessment models for any given training or educational need?
Ah! That's a question I like!
But now I'll need another post to answer it.