Whitepaper
The Instructional Design of Video
08/29/2019
Submitted by:
CACI INC. FEDERAL
Mr. Roger L Smith
Instructional System Designer
Training and Warfighter Readiness / IMI Solutions Team
999 Waterside Dr., Suite 700
Norfolk, VA 23510
Phone: (407) 692-7865
Email:
[email protected]
Table of Contents
1. Background
2. Challenge
3. Solution
Learning Levels
Interactivity Levels
Level 1 Interactivity
Level 2 Interactivity
Level 3 Interactivity
Media Selection
4. Results
Video Media Selection and Implementation
5. Conclusion
Background
Over the last 20 years, researchers have continued to test and validate principles for designing adult education materials. These principles established the field of instructional design. For instructor-led training or E-learning to be effective, the media must follow principles of sound instructional design rather than trends, fads, or perceptions of learning needs.
Challenge
As educators, we are challenged within our organizations to provide efficient training that will maximize learning outcomes.
As defined in MIL-HDBK-29612-3A and in TRADOC Pamphlet 350-70-12, effective instructional design analysis determines whether IMI and/or video media is the most effective approach and will yield the greatest outcomes in support of the training objectives. This is not an intuitive process, and it should not occur based on personal preference or perceived trends.
The selection and design of instructional videos is a systematic, qualitative process based on sound instructional design principles. This step is often skipped when the primary focus is on trends and implementation within distance learning environments rather than on effectiveness or efficiency in maximizing learning outcomes. The intent of this document is to focus on video media selection, ensuring that instructional videos support the attainment of the desired learning objectives, balanced with efficient design, development, and sustainment of the distance learning media.
In the end we strive to design:
Effective (meaningful, memorable and measurable)
Efficient (design/development cost, implementation, execution)
Distributed (removable media, online, computer based, paper based)
Instruction (fulfills the training need)
Solution
For this to be accomplished we will need to focus on the following key domains:
Learning Levels
Interactivity Levels
Media Selection
Learning Levels
The Analysis, Design, Development, Implementation, and Evaluation (ADDIE) process model is often the core of discussion for instructional design; however, before we begin to understand the process, or the media that would support the objectives, we must first clearly understand each type of training objective and the applicable training need, method, and technique used to determine the most effective and efficient instructional approach. Within ADDIE, this occurs during the Analysis phase.
The following is an overview of identifying the types of objectives and the selection and design of applicable media.
To assist in analysis, designers use instructional models to maximize training outcomes. There are currently hundreds of instructional models; for the sake of this paper, we will briefly discuss only Benjamin Bloom's taxonomy, initially published in 1956 and revised and adapted many times since. The core model remains exceptionally applicable.
Bloom's taxonomy essentially recognizes that not all objectives are the same. Different objectives focus on different performances and outcomes, and different types of objectives require different strategies. Instructional design is made easier by assigning learning objectives to categories. Each category leads to a different class of human performance and requires a different set of instructional conditions for effective learning. By correctly identifying the learning level of an objective, strategies can then be developed to ensure that the content is presented clearly and appropriately for E-learning and that the instruction will be effective.
For this paper we will discuss only three primary levels of learning in the cognitive domain:
Knowledge: Objectives at the knowledge level represent the lowest level of learning outcomes in the cognitive domain. Examples of objectives at this level are recognizing or identifying common terms, specific facts, methods and procedures, and basic concepts and principles.
Comprehension: These learning outcomes go one step beyond simply recalling material. Examples of objectives at this level include understanding facts and principles, interpreting verbal material, charting and graphing, estimating the future consequences implied in data, and justifying methods and procedures.
Application: This may include the application of rules, methods, concepts, principles, laws, and theories. Examples of objectives at this level are applying concepts and principles to new situations, applying laws and theories to practical situations, solving mathematical problems, constructing graphs and charts, and demonstrating the correct use of methods or procedures.
Interactivity Levels
Level 1 Interactivity
[Passive (Fact Learning, Rule Learning)]
This is effective for an overview knowledge lesson provided in a linear format (PowerPoint/page turner). Level 1 should be used to introduce an idea or concept, or for situational awareness. The design will typically include graphics, customer-provided video, and audio segments.
* MIL-HDBK-29612-3A: Perception (encoding) of sensory stimuli that translates into physical or mental performance. Meaning: orientation, or rote memorization.
Examples: facility safety orientation, soft skills/interpersonal training, policies, procedures, and protocols
Level 2 Interactivity
[Limited Participation (Fact Learning, Rule Learning, Process Learning)]
This level involves interactive engagement and is also considered in design for performance-based objectives and tasks. It provides information as with Level 1 and gives the student control to demonstrate comprehension, which may be through a lesson scenario or practice exercise utilizing screen icons and/or other peripherals or training aids. Typically, Level 2 is used for non-complex operations and maintenance lessons. Simple task lists and emulations or simulations are presented to the user. As an example, the user is requested to rotate switches, turn dials, adjust, or identify and replace a faulted component as part of a procedure.
* MIL-HDBK-29612-3A: Learning and demonstrating the ability to perceive the normal, abnormal, and emergency condition cues associated with the performance of an operational procedure; situational awareness of operational condition cues.
Meaning: performance/task-list based: show it, try it, do it.
Examples: pre/post checks, start/shutdown procedures, build/break procedures
Level 3 Interactivity
[Complex Participation (Procedure Learning, Discrimination Learning, Problem-Solving)]
This is the application and recall of complex information, comprising the execution and/or evaluation of Level 1 and 2 knowledge, and it allows the learner an increased level of control over the lesson scenario/practice exercise. Video, graphics, or a combination of both is presented to the user, simulating the operation of a system, subsystem, or equipment.
Operation and maintenance procedures are normally practiced within Level 3 scenarios. Multiple software branches and rapid response are provided to support remediation. Emulations and simulations are an integral part of this presentation, which may also include complex developed graphics, clip art, and video and audio clips.
* MIL-HDBK-29612-3A: Learning and demonstrating the mental preparedness to make decisions by generating the results expected upon completion of prioritized strategies or tactics in response to normal, abnormal, and emergency cues associated with the performance of an operational procedure, and the ability to generate new actions in response to abnormal or emergency cues.
Meaning: choose your own adventure. This is typically a scenario where the learner must walk through multiple levels of correct and incorrect learning paths to practice the comprehension of Levels 1 and 2, reducing attrition and strengthening confidence in knowledge synthesis.
Example: a digital/simulated representation of the interior of a control station/cockpit, including applicable functioning gauges, switches, and dials. A Level 3 scenario might run as follows: the MASTER WARNING indicator is illuminated and flashing, auditory alarms are screeching within the cockpit, the smell of burning plastic is present, and the visual displays are lost in the haze of smoke. What do you do? The learner would then have free control to select many learning paths and receive the resulting remediation. For instance, if the learner selects the eject handle, the resulting action could include death, with remediation prompts to instead don the oxygen mask with OBOGS, extinguish the electrical fire, and then assess the environment.
Media Selection
Media selection analysis must evaluate general and specific criteria, including instructional, student, and cost aspects, for each delivery technology (or instructional medium) to ensure that the most appropriate media is selected for a specific education or training objective. It is also important to understand the intended instructional environment, synchronous or asynchronous, when determining the appropriate implementation method.
Synchronous learning environments support live, two-way oral and/or visual communication between the instructor and the student. This is typically lecturing, over-the-shoulder instruction, workgroups, round tables, or technical exchanges. The implementation for synchronous learning could be live or virtual, using facilities for virtual conferencing or teleconferencing. An asynchronous learning environment exists when communication between the instructor and the student is not in real time.
Although asynchronous training is not 'live,' the advent of artificial or reactive emulations may provide augmentation approaching the realism of a synchronous environment. For example, video snippets based on reactive decision loops may lead a learner through multiple paths of corrective remediation: the learner interacts with a set of decision trees, each choice eliciting the applicable video-based reaction.
Imagine the following design example: a video depicts an office setting near the end of the workday; lights are going off and co-workers are leaving. The panning video slowly focuses in on Jill (a co-worker), who asks you, "What are you doing this wee..." Shots ring out. People are running; two of your co-workers are wounded and crawling on the floor. Jill is screaming at you, yelling "WHAT DO WE DO!!!" A screen overlay displays, and the learner is prompted to react by answering multiple-choice questions. By selecting the correct answer, the learner progresses down the correct path until the next decision point. Selecting the wrong path may induce corrective feedback.
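The branching logic behind such a scenario can be sketched as a simple decision tree. The sketch below is illustrative only; the node names, clip filenames, and answer keys are hypothetical, not taken from any actual courseware.

```python
# Minimal sketch of a reactive decision loop for branching video.
# Each node names a video clip and maps each answer choice to the next node;
# wrong answers route to a remediation node that loops back for another try.
SCENARIO = {
    "office_attack": {
        "video": "clip_office_attack.mp4",
        "choices": {
            "run_to_exit": "evacuate",        # correct path
            "freeze": "remediate_freeze",     # incorrect path -> remediation
        },
    },
    "remediate_freeze": {
        "video": "clip_feedback_freeze.mp4",  # corrective feedback clip
        "choices": {"retry": "office_attack"},
    },
    "evacuate": {
        "video": "clip_evacuate.mp4",         # terminal node: scenario complete
        "choices": {},
    },
}

def play(scenario, start, answers):
    """Walk the decision tree with a scripted list of learner answers;
    return the sequence of video clips the learner would see."""
    node = start
    path = [scenario[node]["video"]]
    for answer in answers:
        nxt = scenario[node]["choices"].get(answer)
        if nxt is None:  # unrecognized answer or terminal node reached
            break
        node = nxt
        path.append(scenario[node]["video"])
    return path
```

A learner who freezes first sees the remediation clip, replays the decision point, and only then reaches the terminal clip; the correct first choice skips remediation entirely.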
Conversely, if the objective is how to fill out a leave request form, video emulation would not be applicable. This is where media selection must be evaluated against time in training, effectiveness, and the cost required for development, implementation, and sustainment.
Results
Video Media Selection and Implementation
As with the video-based examples defined previously, there are significant considerations that should be evaluated, such as cost and effectiveness. Is a Level 3 emulation needed for Level 1 or Level 2 knowledge? So, when is video applicable? That depends. Level 1 knowledge may be acquired effectively through documentaries or non-interactive/passive video tasks (as on YouTube). This is most effective for learning general knowledge without the necessity to practice or demonstrate comprehension. Videos may also be powerful visual aids for executing a task amid real-world dust and grime.
The following is a real example from the F-35C Joint Strike Fighter. A deficiency in training was discovered when Navy maintainers attempted to load the KOV-32 module and encountered issues accessing the panel. Technical videos were created depicting how operational air vehicles, with the dust and grime, could be serviced. These were effective recorded video tasks that showed clear instruction beyond 3D renderings or simple stills. However, video alone as the only method of instruction may become detrimental when used in an operational or deployed environment due to lack of accessibility.
If the knowledge is not acquired or practiced, and video is used only as a quick reference, the rate of attrition significantly increases because the knowledge is not practiced by the learner until needed on the live system or environment. A primary issue with using video as reference training only (as on YouTube) is the assumption of a connected environment, or of access to a system where the information resides. For example, Joint Tech Data (JTD) is only as current as the last time the JTD was synced on the Portable Maintenance Aid (PMA). This becomes critical when connectivity is not available in deployed environments, limiting access to current instruction and JTD.
If this information is acquired and practiced during pre-deployment, the learner will carry the knowledge when needed, so long as the elapsed time of operation does not exceed training attrition. Technology and connectivity are expanding, bandwidth is improving, and the limitations that once restricted the use of video are diminishing. However, if there is a video resource dependency for operational application, then there will always be a critical mission need for accessing it.
Additionally, safeguarding this information may add another layer of implementation costs and considerations. Classified or sensitive procedures would not be readily available in unclassified environments (flight lines/flight decks). For example, low observable maintenance procedures, mission planning, or certain crypto procedures would not be video recorded, or available for use, in an unclassified environment.
Here is a rough scenario: imagine a maintainer approaching an air vehicle to perform a JTD task. The PMA is down, or the JTD is not detailed enough to perform the task, and the maintainer recalls the theoretical videos watched in the SCIF during training. The maintainer would need to return to the SCIF (hopefully the same onboard SCIF that has the same videos), watch the videos, and return, attempting to recall what the 15-minute mark of the video said to do next.
For a hypothetical discussion, let's assume the hangar is classified and a classified PMA hosts the videos. The air vehicle is loaded with the Portable Memory Device (PMD), encrypting the air vehicle for deployment to an unclassified environment. In flight, the pilot identifies that BAT1 is illuminated, requiring a classified procedure to reload crypto once the battery has been changed out. The classified PMA hosting the videos is not located at the unclassified hangar, and SIPRNet is not available at this location. If the maintainer were trained at the appropriate level to support the maintenance task, the operational knowledge would be independent of the video or connectivity.
Video implementation should be considered with redundancies in comprehension: show the task, procedure, or concept, then, if applicable, enable the learner to practice this knowledge through simulation, part-task trainers, or any other applicable instructional aid. Time of training and time of operation are key considerations as well. A major deficiency in training is the lack of reinforcement, which increases attrition rates; therefore, distributed learning is vital for refresher training. Videos may augment this need as quick rehearsal or reinforcement. However, video should not be the only method of instruction.
When considering video as the training media, the costs of hosting and delivering video, along with designing, developing, and scripting it, should all be factors of consideration, balanced against how necessary video is to the behavioral outcome being sought. For example, if the objective is to identify how to fill out and submit leave requests, is it necessary to script narration, environment, and actions, and to pay an actor to sit at a desk and fill out a form? If we performed the same tasks in a 3D environment, we would reduce personnel costs but increase the costs of modeling, lighting, and animation. In this instance the form could simply be scanned in, with interactive text boxes integrated over the scanned form and a step-by-step task list available in the margin. The learner may practice and reset the form as often as needed. The F-35 Joint Strike Fighter program did exactly that when training the off-board admins.
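The practice-and-reset behavior described above can be sketched with a few lines of code. This is a minimal illustration of the idea, not any actual courseware; the field names and validation rule are hypothetical.

```python
# Sketch of an interactive practice form overlaid on a scanned document.
# The learner fills fields, checks which required entries are still missing,
# and can reset to practice the form as often as needed.
class PracticeForm:
    REQUIRED = {"name", "start_date", "end_date", "leave_type"}  # hypothetical fields

    def __init__(self):
        self.fields = {}

    def fill(self, field, value):
        """Record the learner's entry for one text box."""
        self.fields[field] = value

    def validate(self):
        """Return the set of required fields still missing or blank,
        which would drive the step-by-step feedback in the margin."""
        return {f for f in self.REQUIRED if not self.fields.get(f)}

    def reset(self):
        """Clear all entries so the learner can practice again."""
        self.fields.clear()
```

Compared with scripting, filming, and editing a video of the same task, this kind of interactive overlay is cheap to build and trivially sustainable: when the form changes, only the scanned image and field list are swapped out.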
The Autonomic Logistics Information System (ALIS) is essentially a compilation of administration forms, interoperable through the entire air system from supply to prognostic health management to what candy the pilots prefer. Rather than de-stimulating a learner to the point of visual exhaustion in tech data, the use of interactive screens enabled the learner to practice within a safe, non-production environment.
The costs of video production should be considered in categories: personnel required; location (and permits, if needed); recording and editing equipment; audio/video engineers; availability of equipment (including vessels or vehicles); and the overall time required to perform the task relative to the time of instruction. All of these factors bear on the cost of selecting video as the medium.
As for sustainment, concurrency with the equipment is critical to operational goals. Hypothetically, if the learner had access to hours of video training for the Crazy 10 (AN/CYZ-10 Data Transfer Device) but not the latest Simple Key Loader (SKL) fill devices, the learner would need on-the-job training when deployed. If the fill-device videos were not redeveloped, the videos and training would not be applicable for an operational learner. If the same training were IMI with images, on-screen text, and resource manuals, the images could be swapped out and the text sustained and distributed on media or accessed online.
The following is a real use case. The Republic of Korea Air Force (ROKAF) procured the PC-21 for its basic wings training. Basic pilot and maintainer procedures were recorded as micro-learning modules, and the training had to be delivered prior to operational readiness. Over 300 hours of video were recorded on-site by a 15-person crew over a span of three months. The video was recorded, edited, compiled, and delivered. A failure in communication occurred when a newer PC-21 variant with a different paint scheme was discovered two weeks before initial operations commenced.
Nearly all of the delivered video was recaptured and re-edited, requiring an additional four months on-site. Sustainment through consistent evaluation must be another key factor when considering video-based learning.
Conclusion
Video media has powerful capabilities when instructional design is applied. It should not be the only method of instruction, nor should it be excluded where it provides emphasis, fidelity, and clear operational guidance.
This proposal includes data that shall not be disclosed outside the Government and shall not be duplicated, used, or disclosed—in whole or in part—for any purpose other than to evaluate this proposal. If, however, a contract is awarded to this offeror as a result of—or in connection with—the submission of this data, the Government shall have the right to duplicate, use, or disclose the data to the extent provided in the resulting contract. This restriction does not limit the Government's right to use information contained in this data if it is obtained from another source without restriction. The data subject to this restriction are contained in all sheets.