COMPETENCY-BASED LEARNING IN 2018
Prepared By:
Brent Smith (CTR)
Mike Hernandez (CTR)
Jerry Gordon (CTR)
This work was supported by the U.S. Advanced Distributed Learning (ADL) Initiative W900KK-17-D-0004. The views
and conclusions contained in this document are those of the authors and should not be interpreted as representing
the official policies, either expressed or implied, of the ADL Initiative or the U.S. Government. The U.S. Government is
authorized to reproduce and distribute reprints for Government purposes.
DISTRIBUTION STATEMENT A. Approved for public release.
TABLE OF CONTENTS
1. Executive Summary ................................................................................................................................. 1
2. Introduction .............................................................................................................................................. 3
3. Competency Based Learning and Readiness ........................................................................................... 5
4. Competency in the Total Learning Architecture ...................................................................................... 7
4.1 Enabling Complex Systems with Competency ..................................................................................... 8
4.2 Authoritative Sources and Authoritative Storage .............................................................................. 10
4.3 How Competency Enables Complex Systems .................................................................................... 10
4.4 Moving from Level 2 to Level 4 of the TLA ....................................................................................... 11
5. Directed Acyclic Graphs as the Expression of Digital Competencies ............................................... 14
6. The Way Ahead ..................................................................................................................................... 15
6.1 Next Steps .......................................................................................................................................... 16
6.2 Conclusions and Recommendations .................................................................................................. 17
7. Appendix A - Concepts and Definitions ................................................................................................ 18
8. Appendix B – Technical Standards and Specifications ......................................................................... 26
LIST OF FIGURES
Figure 1. Traditional Training and Education .......................................................................................... 3
Figure 2. Operators and Components as a Joint Cognitive System ......................................................... 4
Figure 3. Competency Frameworks .......................................................................................................... 5
Figure 4. The Path to CBL ........................................................................................................................ 6
Figure 5. Competency and Skills System (CaSS) ..................................................................................... 7
Figure 6. TLA Services to support Competencies and Credentials .......................................................... 9
Figure 7. TLA Adoption Level 2 - Federation of LRS Supporting a Learner Profile ............................. 11
Figure 8. Adding Competency Management ........................................................................................... 12
Figure 9. TLA Architecture with Two LRS Federations ......................................................................... 13
Figure 10. Relating a Framework to Learning ......................................................................................... 14
Figure 11. KSAO Roll-Up Competency ................................................................................................... 21
Figure 12. Credential Roll-Up .................................................................................................................. 23
Figure 13. O*NET Content Model ........................................................................................................... 27
Figure 14. CDTL Class Structure ............................................................................................................. 28
1. EXECUTIVE SUMMARY
Although technology is front-and-center in today’s military operational environment, people are the
foundation of our strength. An assessment of the Future Operational Environment1 by the U.S. Army
Training and Doctrine Command (TRADOC G-2), looking out to 2050, underscores the rapid societal changes
spurred by breakneck advances in Science and Technology (S&T) and how these changes will impact the
art of warfare. Virtually every S&T advance intersects with other technologies, thus increasing the speed
of innovation and adoption across the entire S&T portfolio. This convergence allows our adversaries to
close the technical gap our military has typically enjoyed and increases the speed of human interaction and
cognition required to successfully execute the mission. The human element is paramount. The ability of our
personnel to maintain a higher level of creativity, problem-solving, and out-of-the-box thinking is our
greatest asset and our biggest differentiator. Therefore, the way we educate and train our personnel directly
impacts our future readiness.
Military education and training encompass many different schools, universities, and training programs
designed to foster technical, professional, and leadership skills in military service members. Historically,
there has been a separation between the education and training communities across the services. Education
occurs incrementally and involves grappling with ambiguity while thinking and reflecting about the
concepts being learned2. Training is linked to readiness and offers opportunities to apply an individual’s
knowledge, skills, and abilities in a manner that provides immediate feedback and progress measurement.
Within the current context, training and education also have different reporting structures, motivations, and
logistical requirements such as fuel, personnel, and the access to the appropriate environments or
equipment.
Future warfighters must rapidly prepare and adapt to function in an increasingly volatile and complex
environment. Where current assessment methods judge knowledge retention for education and skill
proficiency for training, there is a significant gap between those metrics and operational readiness and
performance. Future force management and readiness metrics must directly link preparations to operations.
The class of metrics used to do this is referred to as competencies – a measure of how well someone can
use their acquired skills and knowledge to complete tasks. To enable future learning ecosystems, all
the ways of assessing a person's ability will need a digital representation. Ongoing research on the
Experience API (xAPI) provides the capability to capture any learning transaction. The Advanced
Distributed Learning Initiative (ADL) is continuing to explore how those xAPI transactions can be trusted
as evidence from an authoritative source.
Competency Management Systems (CMS) are the brokers of trust. That trust is based on the validity of
evidence: pre-existing competency frameworks capture the requirements of a role, and evidence is then
aligned to those frameworks to generate assertions. The ADL Initiative has identified a series of
technical capabilities that current learning systems will integrate in the future to become variants of the
Total Learning Architecture (TLA). The ADL Initiative has invested in CMS prototypes and instantiated a
reference implementation in fiscal year 2018 (FY-18) within a controlled environment. In FY-19, the ADL
Initiative will expand its testing of Competency Based Learning (CBL) and its role within the TLA through
multiple research projects, with the intent of transitioning a capability for Government-wide use. The
projects include partnering with the Air Education and Training Command (AETC) to validate CMS within
their environment. That effort will involve multiple industry partners and support from a DoD lab.
1. TRADOC G-2, “The Operational Environment and the Changing Character of Future Warfare”, Army Capability Integration Center (ARCIC), Campaign of Learning Technical Report. http://www.arcic.army.mil/App_Documents/The-Operational-Environment-and-the-Changing-Character-of-Future-Warfare.pdf
2. J. Johnson-Freese, “The Reform of Military Education: Twenty-Five Years Later”, Orbis, Volume 56, Number 1, Winter 2012. http://www.dtic.mil/dtic/tr/fulltext/u2/a570086.pdf
This report presents an early glimpse into the technical complexities of migrating the DoD towards a
competency-based educational system in support of the next generation of talent management / talent
development to maximize readiness. The report also summarizes the Competency & Credential work
performed to date by the ADL Initiative and the work expected in FY-19. Appendix A provides the concepts
and definitions of CBL; readers unfamiliar with CBL are advised to review that appendix before reading the
rest of the document.
2. INTRODUCTION
Personnel readiness to execute missions is at the center of the Department of Defense’s (DoD) human capital
management strategy. Under current practices, the Uniformed Services provide enough training and
education (T&E) for personnel to reach a minimum level of proficiency. Credentials are used to codify
proficiency for detailing and evaluating personnel. In many cases, these credentials do not expire. However,
in high reliability and regulated communities such as nuclear power, healthcare, or aviation, outside
accrediting authorities require on-going demonstration of individuals’ proficiency.
Within the DoD, entry-level initial training includes formal schools, eLearning courses, and some practical
lab work. In the field, service members primarily learn from each other and their own experiences,
augmented by performance support and on-the-job training to maintain their proficiency. Studies show that
70-90% of actual learning happens on the job. However, the methods for capturing individual skills and
their contribution to mission success are disconnected and only documented by freeform narratives in
evaluation reports. The lack of well-defined performance indicators and performance metrics (competency
models), the lack of a fine-grained view into how credentials apply to the work environment, and the lack
of ability to correlate/coordinate key performance variables across a career all limit the predictive utility of
current T&E systems’ contribution to readiness. Historically, only after a disaster or a systemic problem
has occurred is the analysis between readiness and T&E performed.
CBL encompasses all the knowledge, skills, attitudes, and other aptitudes, abilities, motivations, and traits
(KSAOs) required of a service member in their operational environments. CBL is analogous to Competency
Based Education but acknowledges military training as a separate endeavor from education. It provides an
approach towards collecting and collating performance and learning data such that time spent in formal
T&E is optimized, and the relationship between learning, human performance, and mission effectiveness
is made explicit.
The traditional model of education was born in the industrial age with a one-size-fits-all approach to T&E. It
places an instructor who utilizes a curriculum developed by Instructional Systems Designers (ISD) to
transmit instructional information to a student. The student undergoes an assessment to verify receipt and
processing of information. This receipt and processing of information is subject to individualized tailoring
by the instructor. If the information loss is minimal (a passing grade is achieved), the student is certified
and transferred to the work environment. Learning continues on-the-job through their own experiences, the
mentorship of peers, and their familiarity with the challenges of the work environment. However, if those
elements are captured at all, it is only in periodic job reviews and personnel evaluations. The curriculum
may be updated periodically, but a significant operational failure is often necessary to initiate any
significant change in andragogy and/or heutagogy. The gap between learning (training and education) and
operational performance induces risk to force readiness and risk to mission. This is depicted in Figure 1.
Figure 1. Traditional Training and Education review and update lifecycle
The CBL model views human operators and the systems they interact with as a Joint Cognitive System
(JCS)3, where communication occurs between the human operators and the different systems/components
(including other operators) they interface with. The requirements for these communication channels are
codified in the competency frameworks that represent the collection of jobs, tasks, conditions, standards,
and other relationships that occur between human operators and system components. These communication
channels are mediated by the usability of each system and the experience of the operator. Analysis of these
communication channels provides insight into improving the effectiveness of the channels, either by
improving usability, or by scheduling learning activities to provide just-in-time training through a feedback
mechanism. Thus, learning activities become part of the improvement process between the human operator
and the component interface. This relies on the creation of performance indicators and performance
measurement of operational systems that sit outside of traditional T&E.
In practice this is a complex web of concepts, as there are many operators, components, and competencies,
as well as the different types of data that exist along a continuum of time constants. Migrating toward this
method from the current credential-based model requires normalization of human performance data into a
trusted and consistently metered data representation. This normalization must, therefore, reduce the
complexity of the many-to-many relationships between competency, work, JCS components, metrics, and
human operators. Finer-grained data, collected at a much higher sampling rate, enables optimization of
human performance and concomitant mission effectiveness in a way not currently possible; today, such
analysis is performed only after something drastic has happened. This is depicted in Figure 2.
Figure 2. Operators and Components as a Joint Cognitive System - In many of today’s organizations, technology
has become so pervasive that it functions as a frequent enabler of human performance. By analyzing these channels
of communication, we enable learning as a Continuous Feedback System.
3. Woods, D. D. & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering.
3. COMPETENCY BASED LEARNING AND READINESS
CBL promotes the concept of a competency framework composed of competency objects that represent the
relationship among the KSAOs of an individual performing a job. As shown in Figure 3, a competency
framework is a collection of competencies that provide clarification about successful performance in
different roles at different levels throughout the organization. A competency framework organizes the
competencies needed to perform successfully in a work setting, such as a job, occupation, or industry. The
competency framework can be used as a resource for developing curriculum and selecting training
materials, identifying licensure and certification requirements, writing job descriptions, recruiting and
hiring workers, and evaluating employee performance.
Figure 3. Competency Frameworks - A competency framework is a representation of the competencies required for
a certain role within an organization. They are composed of competencies and sub-competencies that can be shared
across frameworks. Although shown as a hierarchy above, they are not necessarily linear.
In the industrial model, KSAOs are substituted with Learning Objectives which are taught in a Program of
Instruction (POI). The POI typically includes several formative assessments, followed by a summative
assessment (whether in test form or as a subjective evaluation by a qualified person) to measure learning
transfer from the course to the student. The system is based on trust in the observers and instructors to
properly evaluate and capture records. Once the final grades have been tallied, many details are lost.
Moreover, the completion of the POI is often used to justify the conferral of a credential, after which point,
the credential is the only record on file.4 Accreditation is a key component of trust in the education field’s
interpretation of competency. Accrediting bodies engender trust through open processes and an incentive
structure that ties high standards to profit or growth.
In the CBL model, the competency frameworks establish a clear linkage between individual KSAOs and
organizational performance (e.g., Talent Management / Talent Development). These frameworks, in turn,
are built on trust. Assertions against the framework are only as valid as the evidence that supports them.
The data strategy for every competency must align the observable performance indicators for each of the
KSAOs encapsulated into each competency. Some evidence will be derived from T&E systems, while the
preponderance of evidence will likely reside in the operational environment. Even though much of this
data already exists, its availability across the DoD enterprise is limited by legacy technology
architectures and organizational stovepipes of information. A coordinated data architecture and digital
transformation process must realign these organizational firewalls to be successful in implementing CBL.
4. Regulations typically require archival of grades to provide transcripts, but this information is neither portable with the service member nor readily available.
Digital competencies are the currency of human performance. Therefore, assertions must be based on a
trusted, evidentiary record that documents the situational context of the interactions taken by the human
operator, and the state of the systems being interacted with in the workplace. A collection of assertions
provides an estimation of an individual's capability to meet the mission, and these estimates are validated
against operational experience.
Figure 4. The Path to CBL - Stated simply, evidence of performance from trusted agents provides assertions of
competency. Competencies evaluated against operational data inform inferences of the relationships (weights)
between competency and readiness, such that, over time, we can measure the competencies of an organization and
make reliable predictions of its state of readiness.
Over the past decade, the DoD has made rebuilding the readiness of the military force one of its priorities
and has outlined this priority in a key strategic guidance document. However, no comprehensive plan has
been established to coordinate this effort5. Currently, the policy of blind trust in credentials introduces risk,
because credentials obfuscate the actual state of a person’s competency. Recent disasters in the 7th Fleet6
have been attributed to lack of training, inadequate practice while on-board, and lack of sleep, among a
litany of other issues. These causes were driven by cultural failings to accurately report readiness. These
cultural issues were intensified because no systems were in place that shared actual measurements of
performance and the state of the ships independent of human interpretation.
5. Military Readiness, GAO-16-841, p. 6. https://www.gao.gov/assets/680/679556.pdf
6. 7th Fleet GAO report. https://www.gao.gov/assets/700/695911.pdf
Through DoDI 1322.26, the DoD recognizes xAPI as the vehicle for capturing human performance data
from any system. By extending the capture of data from T&E systems to operational systems, CBL can be
used to evaluate round-trip estimates of readiness. Figure 4 provides a theoretical path to reaching CBL in
the DoD. This requires a radical rethinking of the data silos created by our current organization, which is a
problem common to any digital transformation. By gathering and correlating these data sources, we can
build towards a predictive system for readiness. This predictive system can also provide early demand
signals for the overall human capital supply chain management problem of military personnel.
4. COMPETENCIES IN THE TOTAL LEARNING ARCHITECTURE (TLA)
Within the context of the TLA, the Activity Stream(s) stored in the Learning Record Store (LRS) provide
the evidence upon which competency assertions are made. The Competency and Skills System (CaSS) is a
TLA component that generates rich and traceable data about learning experiences and how they relate to
skill proficiency, ultimately resulting in a certified set of credentials for the associated KSAOs. Each role
in an organization is required to perform a set of jobs. Each job has its own set of competencies needed to
perform the job effectively. By defining these competencies, policy can enable a standardized approach to
performance that is clear and accessible to everyone within an organization.
Competency frameworks are expressions of the work environment. Many factors affect the nature of
each framework, including the native language of its learners, the goals of the assessments, the availability
of software engineering support, and the role of accrediting bodies. All these factors drive customization
and, more importantly, creativity in this space. CaSS addresses a reality of CBL in which organizations will develop their
own frameworks and may never converge on a single standard. For example, this is evident in how nuclear
power personnel and aviation personnel are certified for their work. Even though they work in the same
service and on the same platform, it is unlikely that all facets of their work can be aligned under one
competency framework.
Figure 5. Competency and Skills System (CaSS) – The 2018 TLA Reference Implementation used CaSS to create,
manage, and share information about competencies with other TLA components.
CaSS ingests multiple different competency frameworks and aligns the different attributes about each
competency to provide a common language and translation method for defining competencies, the
associated evidence of attainment, and the different relationships that intertwine with each competency. It
implements a well-defined set of attributes and rule structures based on Schema.org terms that are
encapsulated using linked data. CaSS also stores assertions and evidence over time to record the progression
of a learner pursuing competencies within a framework. Key to establishing assertions of competency from
an Activity Stream is the reconciliation between the Actor in an xAPI statement (stored in an LRS) and a
persistent Learner Profile.
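To make that reconciliation concrete, the sketch below (Python) shows one possible lookup of an xAPI Actor against a registry of known learner identifiers. The registry structure and function name are illustrative assumptions rather than part of the TLA or CaSS codebases; only the Actor fields (mbox, mbox_sha1sum, openid, account) come from the xAPI specification.

```python
# Minimal sketch of reconciling an xAPI Actor with a persistent learner profile.
# The registry mapping is a hypothetical lookup table.

def resolve_learner_id(actor: dict, registry: dict):
    """Return the persistent learner-profile ID for an xAPI Actor, if known."""
    for key in ("mbox", "mbox_sha1sum", "openid"):
        value = actor.get(key)
        if value and value in registry:
            return registry[value]
    account = actor.get("account") or {}
    return registry.get((account.get("homePage"), account.get("name")))

# Example usage with an assumed registry keyed by email or (homePage, name).
registry = {
    "mailto:learner@example.mil": "learner-profile-1234",
    ("https://lms.example.mil", "jdoe"): "learner-profile-1234",
}
actor = {"mbox": "mailto:learner@example.mil", "name": "J. Doe"}
print(resolve_learner_id(actor, registry))  # -> learner-profile-1234
```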
As shown in Figure 5, in the 2018 TLA reference implementation CaSS pulls xAPI statements from the LRS
and checks the Verb to determine whether the statement claims any evidence of competency. For statements
that imply competency, the ObjectID from the xAPI statement is compared to the Activity Registry to
determine which competencies a statement is evidencing. For 2018, all content was treated the same and
incremental increases in competency were assigned for each completed activity. The 2019 reference
implementation will include the “weighting” of evidence based on the metadata stored in the Activity
Registry that describes each activity. However, this approach was not supported by the recommender.
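A rough sketch of that flow is shown below. The verb IRIs are standard ADL vocabulary, but the Activity Registry structure, the flat increment, and the function name are illustrative stand-ins for the 2018 behavior described above, not the actual CaSS implementation.

```python
# Illustrative sketch of the 2018 flow: filter statements by verb, map the
# ObjectID to competencies via an Activity Registry, and apply a flat
# incremental increase in estimated competency per completed activity.

EVIDENCE_VERBS = {
    "http://adlnet.gov/expapi/verbs/completed",
    "http://adlnet.gov/expapi/verbs/passed",
}

def apply_evidence(statements, activity_registry, mastery, increment=0.05):
    """Update (learner, competency) -> mastery estimates from xAPI statements."""
    for stmt in statements:
        if stmt["verb"]["id"] not in EVIDENCE_VERBS:
            continue  # statement carries no competency evidence
        object_id = stmt["object"]["id"]
        learner = stmt["actor"].get("mbox", "unknown")
        for competency_id in activity_registry.get(object_id, []):
            key = (learner, competency_id)
            mastery[key] = min(1.0, mastery.get(key, 0.0) + increment)
    return mastery
```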
The primary output from CaSS is a series of Mastery Estimates for an individual over time. This data is
managed within the competency service layer and written to a machine-readable learner profile that other
TLA systems can access. The CaSS project also investigated blockchain technologies for managing some
elements of competencies. A proof of concept was deployed to the Ethereum Blockchain to validate that
assertions of competency into a Learner Profile from CaSS can be recorded in perpetuity on a Blockchain.
4.1 Enabling Complex Systems with Competency
This section provides a theoretical scale for measuring the complexity of a TLA implementation. The five
TLA adoption levels shown in Figure 6 build upon the basic eLearning and classroom pipelines commonly
in use today. This scale is focused on architecture and does not prescribe how implementations are
instantiated. The determinations throughout this paper are based on the critical role CBL plays in maturing
any learning ecosystem to the point of providing adaptive and intelligent capabilities to its learners.
Current training and education environments include a wide range of learning activities that are not fully
connected. Learning management systems (LMS) and other digital technologies (e.g., simulations, handheld learning devices) supplement the student-teacher relationship. Most instruction is linear, composed of
terminal and enabling learning objectives (or equivalents), and planned in terms of a constrained schedule
for student attainment of minimum standard to satisfy the award of credentials.
As a learning system implementation matures to the first TLA adoption level, the introduction of xAPI and
an LRS increases the richness of analysis and decision support. Activity tracking data (e.g., xAPI) pulls a
myriad of performance data out of any range of learning activities and aggregates the data for analysis using
commercially available tools. The xAPI and LRS can normalize classroom data with data collected from
an LMS, simulators, on-the-job work experiences, or any other network-connected Learning Record
Provider (LRP) into a data store that is accessible for review.
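For readers unfamiliar with the format, a representative xAPI statement is sketched below as a Python dictionary. The actor, activity, and platform values are invented for illustration; the actor/verb/object/result/context/timestamp structure follows the xAPI specification.

```python
# Hypothetical xAPI statement a simulator (an LRP) might send to its LRS.
statement = {
    "actor": {"mbox": "mailto:learner@example.mil", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.mil/activities/hydraulics-module-3",
        "objectType": "Activity",
    },
    "result": {"score": {"scaled": 0.92}, "success": True, "completion": True},
    "context": {"platform": "Maintenance Trainer Simulator"},
    "timestamp": "2018-09-14T13:45:00Z",
}
```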
TLA adoption level 2 incorporates dashboard visualizations tailored to specific roles within the TLA.
Instructors, students, instructional designers, administrators, and leaders see performance data and have
better understanding of past performance when making decisions. This affords instructors the opportunity
to facilitate interventions based on data from outside traditional classroom or eLearning environments.
Figure 6. TLA Services to support Competencies and Credentials - Within the TLA, an activity stream (xAPI)
captures what the user is doing and in which learning activity the learner’s experience is being tracked. The xAPI
statements record predefined experiences related to learning within each learning activity. These statements are
stored in an LRS as shown in Level 1. This empowers detailed analytics based on a learner’s performance across
numerous LRSs as shown in Level 2. The TLA can also incorporate a competency management capability as shown
in Level 3. This must be populated with a complete framework that describes the role and requirements of a learner,
retains enough data about their progression, and includes processes or functions to ensure that included data can be
trusted. As more data are collected about learners, learning activities, and competencies, the TLA is afforded the
ability to adapt the sequencing and delivery of different, yet related learning experiences that are tailored to the needs
of everyone. Eventually, the TLA will tie into other services and systems (e.g., Resource Management Systems) to help
determine the availability of different learning activities outside the traditional boundaries of Distributed Learning
(DL). This enables a data architecture that supports lifelong learning.
The promise of TLA adoption level 3 is using this information to augment the instructional design of
educational and workplace interventions based on pursuit of competency. Competency management allows
for additional customization of the learner path, and most importantly, learners are freed from the linear,
schedule-constrained, course-centric model. Within a CBL system, a learner can benefit from learning
compression – completing course modules versus entire courses, focusing specifically on the elements of
competencies they are lacking.
The TLA adoption level 4 includes additional data richness in the competency and activity-tracking services
to enable adaptation. Adaptation services provide more tailored experiences for a learner and the ability for
the TLA to optimize its approaches across a student population. Adaptation includes anything from content
recommenders to intelligent systems or intelligent tutors. A TLA adoption level 5 fully leverages machine
learning, providing integrated macro- and micro-adaptation not only for planning a continuous-learning path
but also for optimizing training resources. In practice, these will likely remain manual activities, but the use of
the integrated data strategy in the lower level implementations should greatly enhance the Electronic
Decision Support Systems (EDSS) available to aid in those decisions.
4.2 Authoritative Sources and Authoritative Storage
Authoritative Sources and Authoritative Storage as formal designations of a given capability are seen in
DoD network architectures today. Systems or agents that have these titles are critical components of trusted,
actionable data moving throughout the DoD. Authoritative Sources are considered the trusted reference
system that generates specific data. Authoritative Storage is the location where a record is held in reference
to an Authoritative Source. The Naval Information Application Product Suite (NIAPS)7, which is a
collection of tools that support the Fleet at sea, is a good example of such a record. On-board, the NIAPS
hosted applications are the authoritative source and storage point for the local Commander. They may have
information that is more up to date than the record available to the US Navy on shore. This reality does not
make a NIAPS server on-board authoritative beyond the ship. Those servers are subject to bandwidth and
operational restrictions that make their updates unpredictable. The US Navy views the NIAPS servers on
shore as the authoritative source and storage point since they are the most timely and accurate with respect
to all ships using NIAPS.
4.3 How Competency Enables Complex Systems
At TLA adoption level 3, the presence of a Competency Management System represents a paradigm shift
within an organization and its learning ecosystem. The use of competency frameworks allows an
organization to accept non-structured data into its estimation of human performance. To do this, business
processes, instructional design, systems acquisition and, most importantly, culture must change. Instructional
design within the DoD tends to follow the processes identified in MIL-HDBK-29612-2A, Instructional
Systems Development/Systems Approach to Training. This handbook enables procuring and maintaining
Shareable Content Object Reference Model (SCORM) compliant distributed learning. Across DoD, the
ability to capture learner data equally across all systems does not exist. At level 3, ISDs within the DoD
will require new standards, processes, and guidance to align their content within more complex systems.
ISDs will also require processes and tools to build frameworks and align activities within the work
environment that can generate assertions. The ISDs must then be backed by policy that will allow them to
ensure these assertions are met within the engineered systems. The current Systems Engineering Technical
Review (SETR)8 processes exist to meet functional requirements for training systems and operational
systems. SETR must include additional processes and support to look at driving a competency framework
through human interactions with systems. Functional Requirements for Competency is a new concept to
SETR but must be addressed. xAPI can capture the activities of Subject Matter Experts (SMEs) and
learners, but systems will need to be constructed that can leverage that data during their total life-cycle.
The current Defense acquisition process fails to populate all the data available regarding human
performance into a common data layer within its networks. Weapon systems process tremendous amounts
of data and depend on the interaction of dozens of personnel. The potential magnitude of evidence that
work systems can generate can drive competency frameworks to become true models of human
performance. To move DoD communities to a TLA adoption level 4, analysis is required to identify the
barriers blocking the instrumenting of all systems to generate performance data.
7. NIAPS: https://www.public.navy.mil/spawar/NAVY311/Pages/NIAPS_Info_Center_FAQs-Basics.html#Q1
8. SETR Process Handbook: https://www.dau.mil/cop/se/DAU%20Sponsored%20Documents/SETR%20Process%20Handbook%20v1.0%2020150206.pdf
4.4 Moving from Level 2 to Level 4 of the TLA
As envisioned, each school, simulator system, or other collection of learning activities will establish its own
LRS. The xAPI specification enables federated LRSs, which results in managed authoritative data about
each learner. Each unique LRS, through permissions, enables a trusted relationship that hold statements
about an individual’s performance that can roll up into an aggregated learner profile as shown in Figure 7.
The transactional layer represents all xAPI data. This xAPI data either promulgates to an authoritative LRS
or remains within local systems. The concept of internal vs external statements is centered on how critical
the statements are to capturing training records for credentials. Some LRSs and systems may be allowed to
issue assertions that provide credentials to users.
Figure 7. TLA Adoption Level 2: Federation of LRS Supporting a Learner Profile - xAPI statements are captured
in an LRS that can be associated with any learning activity, or collection of activities. These are federated data sources
that can be rolled up into different levels of granularity.
Each of the connected LRSs in Figure 7 collects data at varying levels of granularity. A flight simulator
may collect fine-grained statements about the learner inside a simulated scenario, while an LRS associated
with a school may track the student at a much higher granularity (e.g., formative/summative evaluations,
transcripts). In this capacity, all learning records are preserved in the system in which they are created.
For a competency management service to be instantiated within the TLA, another LRS must exist that
collects statements rolled up from the transactional layer. This is due to the Kafka streaming architecture
set up for the 2019 TLA reference implementation. As shown in Figure 8, each LRS at the learning
transaction layer will stream statements into a common LRS. The competency management service does
not care about every statement a learning system makes about the learner. Instead, the CMS is more
interested in when key events happen that assert something important about the learner. Assertion
statements are captured in a different federation of LRSs that preserve competency data. Ideally, this
arrangement would allow legacy systems that do not align to competency frameworks (legacy LMS,
simulators, and observed human performance) to change through attrition.
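A minimal sketch of that routing, assuming the kafka-python client and invented topic names, is shown below. The verb filter stands in for whatever business rules decide which statements are key events; none of the identifiers are taken from the actual 2019 reference implementation.

```python
# Sketch: forward only assertion-relevant xAPI statements from the
# transactional stream to the topic consumed by the competency LRS/CMS.
import json
from kafka import KafkaConsumer, KafkaProducer  # assumes the kafka-python package

ASSERTION_VERBS = {
    "http://adlnet.gov/expapi/verbs/passed",
    "http://adlnet.gov/expapi/verbs/mastered",
}

consumer = KafkaConsumer(
    "transactional-xapi",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    stmt = message.value
    if stmt.get("verb", {}).get("id") in ASSERTION_VERBS:
        producer.send("competency-xapi", stmt)  # hypothetical topic name
```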
In the future, all systems used to provide evidence of competency, both T&E and operational, may have an
LRS, business logic, and authoritative data sources identified. The learner profile service will draw on
multiple authoritative sources to present a snapshot of what an individual knows at any given point in time.
It can also act as a source for other systems to base decisions on, such as simple adaptive systems and
scheduling systems. It can act as the authoritative source for multiple authoritative storage points under this
model. This would afford the DoD a trusted source but allow the services to maintain authority over their
own data.
Figure 8. Adding Competency Management - The addition of a competency management system enables assertions
against specific competencies held inside any number of competency frameworks. Assertions are made from the
federated LRS data that collect performance information from T&E and operational systems.
Figure 8 shows the connection from the transactional LRSs sending statements through the Kafka stream
to an LRS that aggregates statements from multiple learning activities and feeds them to a CMS. The CMS,
in turn, infers competence and writes the mastery estimates to the learner profile. In the learning ecosystem,
legacy authoritative LRSs would not be accessed by the learner profile once this level of maturity is reached.
The CMS can still capture and trust appropriate legacy asserted xAPI statements but would also scan its
other contents and compare this to trusted frameworks. At scale, it is important to preserve interoperability
between LRS federations, competency frameworks, and learner profiles.
This level of TLA adoption reaches a maturity at which complex systems can provide the basis for
applying machine learning to human performance. The T&E and operational systems that have been
instrumented to report performance can generate a large amount of data that can be analyzed to issue
credentials, track assertions that indicate skill decay, and capture when managers revoke credentials due to
behavior. The LRS Service, which captures all authoritative data from activity providers, is the point where
other TLA components can view evidence. The competency manager leverages that evidence, aligns it to
a competency framework, and generates trusted assertions in the form of xAPI. That trusted xAPI data is
then stored in the Trusted LRS. The Trusted LRS provides portability of competency data since it is now
in a standard format (xAPI) and allows TLAs to change out competency systems as necessary without
losing their historic records.
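As an illustration of what such a trusted assertion might look like in xAPI form, consider the sketch below. The verb, activity-type, and extension IRIs are placeholders (no published profile is implied), and carrying the mastery estimate in result.score.scaled is just one possible convention.

```python
# Hypothetical assertion statement the competency manager could write to the
# Trusted LRS. All IRIs below are placeholders, not published vocabulary.
assertion = {
    "actor": {
        "account": {"homePage": "https://profiles.example.mil", "name": "learner-profile-1234"}
    },
    "verb": {
        "id": "https://example.mil/xapi/verbs/asserted",
        "display": {"en-US": "asserted"},
    },
    "object": {
        "id": "https://frameworks.example.mil/maintenance/hydraulic-systems",
        "objectType": "Activity",
        "definition": {"type": "https://example.mil/xapi/activity-types/competency"},
    },
    "result": {
        "score": {"scaled": 0.78},  # current mastery estimate for this competency
        "extensions": {"https://example.mil/xapi/extensions/confidence": 0.9},
    },
    "timestamp": "2018-09-30T00:00:00Z",
}
```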
Figure 9. TLA Architecture with Two LRS Federations - Loosely based on the 2018 TLA reference
implementation, this graphic shows the inclusion of a recommender that has access to both the learner profile and the
transactional LRS data store.
Figure 9 presents a mature version of the future learning ecosystem. This is based on the 2018 TLA
reference implementation. There is a high likelihood that some systems (e.g., intelligent tutoring systems,
recommenders, simulators) will need direct access to the transactional layer instead of only accessing the
authoritative data stored in a learner profile. This instantiation can aggregate learner data from all types of
systems in a way that allows a recommender to provide just-in-time feedback to an individual based on
their actual performance.
The evolution towards this level of TLA adoption requires change across all the institutional barriers within
the DoD. Changes will be required in the policies and processes we use to acquire systems such that they
can be instrumented to publish key performance indicators that can be aligned to competencies. Lastly, the
massive undertaking of creating frameworks, aligning evidence, and gaining access to that evidence is not
trivial.
5. DIRECTED ACYCLIC GRAPHS AS THE EXPRESSION OF DIGITAL COMPETENCIES
The Institute of Electrical and Electronics Engineers (IEEE) Learning Technology Standards Committee
(LTSC) has defined a Data Model for Reusable Competency Definitions (RCD) under its 1484.20.1-2007
standard. This is a critical data point for discussion of competency within the TLA standards-oriented
ecosystem. The IEEE defines an RCD as “any aspect of competence, such as knowledge, skill, attitude,
ability, or learning objective.”
Using this as a guide, any formal learning opportunities in use today would preserve the Enabling Learning
Objectives/Terminal Learning Objectives (ELO/TLO) structure already in place by mapping those ELOs/TLOs to
one or many RCDs within many frameworks. As shown in Figure 10 below, the nodes within a competency
framework can have many relationships that are bi-directional in nature. From this perspective, competency
is not strictly hierarchical, and a single job may align with multiple competencies. Experience has shown that
many frameworks are structured both in hierarchy and in time. Time represents two variables within CBL:
the linear progression of how traditional education is scheduled and the rate that competency degrades in
an individual.
Mathematically, the RCD is expected to behave as a Directed Acyclic Graph (DAG). In graph theory, a
DAG structure consists of nodes that are connected to each other with edges. Each node in the graph is
analogous to a competency object and the edges define the relationship between them. Each node may have
numerous edges connected to it and the relationships are directed in that they have a direction. A→B is not
the same as B→A. Acyclic means that the relationships are non-circular: when moving from node to node
by following the edges, you will never encounter the same node a second time.
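A small sketch of this property, using invented competency names and Kahn's algorithm for the acyclicity check, is given below; it is illustrative only and not drawn from the IEEE RCD data model.

```python
# Sketch: represent a competency framework as a directed graph of
# (parent -> supporting competency) edges and verify that it is acyclic.
from collections import defaultdict

def is_acyclic(edges):
    """Return True if the directed graph has no cycles (Kahn's algorithm)."""
    indegree, adjacency, nodes = defaultdict(int), defaultdict(list), set()
    for parent, child in edges:
        adjacency[parent].append(child)
        indegree[child] += 1
        nodes.update((parent, child))
    queue = [n for n in nodes if indegree[n] == 0]
    visited = 0
    while queue:
        node = queue.pop()
        visited += 1
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return visited == len(nodes)  # every node reached => no cycle

edges = [
    ("operate-aircraft", "preflight-inspection"),   # A -> B: A depends on B
    ("preflight-inspection", "read-checklist"),
    ("operate-aircraft", "radio-communication"),
]
print(is_acyclic(edges))  # True
```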
Figure 10. Relating a Framework to Learning - IEEE 1484.20.1 standard for Reusable Competency Definitions
provides the foundation for a competency framework that can be used to align learning content, formal and informal
training opportunities, and work experience.
Attaining mastery is the goal of individuals being managed within a competency framework. RCDs can be
affected by a variety of assertions under a given set of standards and conditions. Two individuals may
experience different working conditions for their given role, but the related framework must track mastery
without bias. Most competency frameworks in industry and academia start with a single content
object and extend to other learning activities that academically support its pursuit. Content objects usually
include media, courseware, and scheduled on-the-job training.
This approach accelerates the attainment of a credential but does not allow informal learning and work
experiences to be included in the calculation. Skill decay mapping is also decoupled when the focus is on
content since the updating of content is highly dependent on funding cycles in the DoD. Content is part of
logistics9 and its updates are driven by two factors: schedule or workplace problems. Competency assertions
offer the opportunity to drive updates to available formal and informal learning based on actual human
performance.
The concept of utilizing performance data from operational systems to predict competency is relatively
new. This is substantially different from traditional educational pipelines. In those pipelines, assertions are
scaled to fulfilling a credential, which typically includes numerous competencies. Competencies are
independent of time and may be completed piecemeal due to the experiences of individuals. The varying
experiences of individuals will initially require a predetermined structure of their “weight” in terms of the
overall framework. Weighting for a single competency should reflect its complexity in relation to
supporting competencies and is highly dependent on the context of how a learner applies this competency
to their job. The alignment of evidence to competencies is highly contextual as well and will require
weighting to quantify the different assertions being made.
A person lacking competency in basic concepts will struggle with more complex tasks. Different learning
activities infer different levels of competence at different times within the continuum of learning. Thus, the
weighting becomes a multivariate equation that involves the contextual weighting between related
competencies, the weighting and currency of evidence, and the weighting of assertions that increase or
decrease over time based on the context of the evidence. This would also afford competencies with multiple
weights that reflect their context in a job or task.
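One way to make that multivariate weighting concrete is sketched below: each piece of evidence carries a score, a trust weight, and a timestamp, and older evidence is discounted exponentially to model skill decay. The half-life value and the overall formulation are assumptions for illustration, not a prescribed TLA algorithm.

```python
# Sketch: a weighted mastery estimate in which evidence weight reflects trust
# and context, and an exponential decay term models skill/currency loss.
import math
from datetime import datetime

def mastery_estimate(evidence, now, half_life_days=180.0):
    """Weighted average of evidence scores, discounted by evidence age."""
    numerator, denominator = 0.0, 0.0
    for item in evidence:  # item: {"score": 0..1, "weight": trust, "timestamp": datetime}
        age_days = (now - item["timestamp"]).days
        decay = math.exp(-math.log(2.0) * age_days / half_life_days)
        numerator += item["score"] * item["weight"] * decay
        denominator += item["weight"] * decay
    return numerator / denominator if denominator else 0.0

evidence = [
    {"score": 0.95, "weight": 2.0, "timestamp": datetime(2018, 3, 1)},   # formal assessment
    {"score": 0.70, "weight": 1.0, "timestamp": datetime(2018, 8, 15)},  # on-the-job observation
]
print(round(mastery_estimate(evidence, datetime(2018, 10, 1)), 2))
```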
6. THE WAY AHEAD
A 2016 Institute for Defense Analyses (IDA) report10 identified six key research areas related to
competency in practice within an organization. The report’s conclusions are repeated below:
Credentialing. Credentialing is one accepted means of verifying personnel qualifications that correlate
with successful employment. In addition, credentialing must address employers’ changing requirements
and individual workers’ changing skill levels over time. There are numerous credentials, both digital and
paper-based, that need to be aggregated and accounted for. This is more of an engineering challenge than a
research challenge.
Automation. The development and use of competency systems is resource intensive. More efficiency in
implementing a competency-based system of personnel management is essential to facilitate widespread
use. Automation is needed to support the generation of metadata, calculate the weighting of evidence,
measure its context, calculate the roll-up of different competency relationships based on the context of
the job or role to which they are applied, and generate competency frameworks from existing and
disparate types of text-based documentation already available within the DoD.
9. NAVFAC Logistics guidance, see section 4.7. https://www.navfac.navy.mil/content/dam/navfac/Expeditionary/PDFs/NAVFACINST4081-1A-Encl1.pdf
10. Belanich, J., et al. (2016). Review and Assessment of Personnel Competencies and Job Description Models and Methods. IDA Document D-5803.
Taxonomy. Diverse structures and categorization taxonomies are used across organizations and
occupational databases to describe competencies. This leads to difficulties in comparing people or roles.
Additional research is needed to understand how controlled vocabularies might be used to describe different
attributes of a competency.
Granularity. Inconsistency is common in the level of detail used to describe a specific position, role, or
broader occupation within competency systems. Such variations complicate decisions about which
competencies to use. Further, different learning activities can generate different amounts of meaningful
data.
Validity. The relevance of competencies to roles and performance is not well substantiated. Content
validity is the most common form of assessment but is less robust than assessment that depends on
quantitative performance evidence.
Tailoring. The process of adapting general competency structures and databases to suit organizational
missions and functions can take extensive effort and present many difficulties. Organizations can study
what others have done or extract from generic databases of competencies to leverage existing resources.
However, even such initial steps are resource intensive.
In the two years since this report was delivered to the ADL Initiative, much of its content remains true. There
are numerous competency systems in existence, but many barriers to connecting them. The work across
DoD and civilian sectors to afford interoperability among these data silos is critical. A competency system
is dependent on the validity of the frameworks, data flows, and evidence in the surrounding environment.
The system should align and preserve all the frameworks it services, rather than translating
them into a specific language and discarding their previous data. This facilitates traceability over time and
across frameworks thus enabling records portability. This data also needs to be immutable, validated,
preserved, and ultimately shared with other learning systems that have a need for it. From an organizational
perspective, additional research needs to be performed around the intersection of individual and team
competencies.
CBL will also benefit from organizational vocabularies and classifications of competencies that can be
attached as metadata to each competency object. This allows for a controlled vocabulary and descriptors
that can be aligned across organizations. Metadata vocabularies might include descriptors that inform
whether a competency includes psycho-motor skills, fine motor skills, cognitive skills, skill decay, or
relevant environmental factors that affect or inform the description of a competency. This information is
already collected in a typical Job Duty Task Analysis (JDTA) but is often discarded as instructional content
is developed.
6.1 Next Steps
In FY2019, the ADL Initiative will pursue multiple activities related to CBL and maturing the overall TLA.
1.) The ADL Initiative has partnered with Air Education and Training Command (AETC) and
USALearning to conduct a research effort (Air Force Learning Services Ecosystem (AFLSE)/TLA
Baseline project). This project supports the inclusion of CBL and other TLA technologies into the
AFLSE. A competency framework is being created for an Air Force Specialty Code (AFSC), TLA
components will be integrated into AFLSE systems, and evidence will be collected from
authoritative sources over a 6-month period.
2.) The ADL Initiative has prioritized the FY19 topic areas for the Broad Agency Announcement
(BAA) to include a focus on defining the technical underpinnings of competencies, establishing a
robust metadata strategy, and understanding the interplay between learner profile data, authoritative
sources, and services.
3.) The ADL Initiative staff is continuing work on the 2019 TLA reference implementation and its
migration to a data streaming architecture that continues research into federated LRSs and
associated business rules, the implementation of a competency DAG, and the establishment of a
modern learner profile.
4.) The Competency and Skills System (CaSS) is being hardened and stabilized for use in the
AFLSE/TLA baseline project and potential transition to the USALearning/ADL Initiative learning
warehouse. Authoring tools are being matured to better evaluate the associated level of effort for
defining competencies, creating frameworks, and aligning evidence repositories.
5.) The ADL Initiative and USALearning are working through the Combatant Commanders Exercise
Engagement and Training Transformation (CE2T2) program to take an enterprise look at the
numerous DoD policies that need to change to support CBL within the DoD. This project is also
collecting requirements across the services to enable a universal DoD learner profile and is
exploring the concept of federated governance for authoring and maintaining competency
frameworks across the numerous domains associated with the DoD.
6.2 Conclusions and Recommendations
CBL is the bridge to having a trusted and informed prediction of how service members can perform in their
jobs. This data affords a new view of the warfighter in general and enables the DoD to better prepare its
service members for any potential mission. It has the potential to align the different organizations across
the DoD responsible for training and education, human resources/personnel, and the operational work
environment. This alignment brings the potential for increased readiness and long-term cost savings by
optimizing the way our workforce trains for, practices, and executes its jobs.
The migration to CBL provides strategic insights into overall human performance within a JCS. As
individual competencies are evaluated, trends can be identified that expose faults in organizational
workflow, system usability, and training & education. A big challenge with current systems is the
disconnected pools of data stored across a learner’s lifetime and across institutional boundaries. Those
disconnected resources hold great potential but are currently barriers to realizing the insights that CBL can
provide to workforce planning and talent management, mission effectiveness, and force readiness.
While the technology challenges associated with the migration to CBL are not trivial, the change to culture,
policies, and budgets is huge. The underlying concepts of CBL are complex, difficult to describe, and
require time to manifest a return on investment. Access and permissions need to be granted to pull
evidentiary data from operational systems, and acquisition strategies need to be revised to align with an
overall DoD data strategy that links performance outcomes with mission readiness to enable future planning
and decision making.
With the different learning modernization programs going on across the DoD, we are afforded the
opportunity to lay the groundwork for intelligent CBL systems, federated data governance, and data
portability that is required to maintain our global security posture.
APPENDIX A - CONCEPTS AND DEFINITIONS
Assertions: This is a foundational concept in Competency Based Learning. Assertions are defined by
Webster’s Dictionary as “statements of fact or belief.” In terms of competencies, an assertion is the
expression of a trusted fact or belief about a learner’s proficiency in specific KSAOs associated with a
competency. Assertions are typically proffered with corresponding evidence that implies a specific
competency for a specific learner. Ideally, assertion evidence is immutable and persistent so that it cannot
be repudiated. This evidence should be interoperable between learning ecosystems to allow different levels
of sharing and mutual recognition of competency data across domains (e.g., military and civilian). This is
exemplified in the relationship between the Defense Health Agency (DHA) and the Medical Departments
within each service. This is also demonstrated in the relationships between the different elements of the
Intelligence Community (IC) where teams frequently come together to collaborate.
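One lightweight way to approximate the immutability and non-repudiation described above is to anchor a content hash of each assertion in an append-only store, which is essentially what the Ethereum proof of concept mentioned earlier does with a ledger. The sketch below is an assumption-level illustration, not the ADL Initiative's implementation, and the record structure is invented.

```python
# Sketch: compute a tamper-evident digest of an assertion record. Storing the
# digest in an append-only ledger lets later modification of the evidence be
# detected; the assertion structure here is purely illustrative.
import hashlib
import json

def assertion_digest(assertion: dict) -> str:
    """SHA-256 over a canonical JSON serialization of the assertion."""
    canonical = json.dumps(assertion, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

record = {
    "learner": "learner-profile-1234",
    "competency": "https://frameworks.example.mil/maintenance/hydraulic-systems",
    "evidence": ["https://lrs.example.mil/statements/abc123"],
    "asserted_on": "2018-09-30",
}
print(assertion_digest(record))
```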
Assessment: Assessment is the collection and evaluation of evidence that can be used to make assertions
of competency. Many learning activities (e.g., reading a book) contribute to an individual’s proficiency in
a specific competency, but formal assessments are considered to have a higher confidence value in terms
of evidence. In the T&E world, these may come from capstone training events or any number of formative
and summative testing events. In the operational world, these come in the form of job reviews, after-action
reviews, or other trusted, authoritative sources. Thus, assessments are the driver for the evidence behind an
individual’s proficiency within a given competency and in a given context.
The context is critical for translating relevance from the different perspectives between a pure learning
environment and an actual work environment. Context helps define proficiency levels for different models
of skill acquisition (e.g., Dreyfus and Dreyfus11). Adjudication of the assessment data within the overall
competency framework is maintained by a competency management system and is captured in an assertion.
In the current paradigm, institutions provide a credential (e.g., degree) that rolls up all assessments, but
loses details of learners’ experiences. Competency Management Systems could take in evidence from
workplace efforts of individuals to maintain proficiency ratings according to accreditation standards.
Competency Based Learning (CBL): Competency Based Learning is focused on the application of skills
and knowledge rather than on recollections of abstract concepts12. Mastery is demonstrated through
performance where learners apply the required KSAOs they need to do their job. The current model of
education within the DoD places trust in the formal training each service member receives with little data
collected about learning that occurs on the job. Competencies must be defined for all the different jobs and
roles performed. These must be tied into a framework that accommodates technical skills, as well as
professional, behavioral, functional, or even organizational skills. Key performance indicators for each
competency should be identified and tied to mission outcomes and the systems/components used to achieve
those outcomes. For example, aviation maintenance records often identify what tasks are being performed
and by whom. This information provides a great deal of evidence about an aviation mechanic’s level of
technical competency in the form of a ledger of the tasks they have performed.
11. Dreyfus, Stuart E.; Dreyfus, Hubert L. (February 1980). "A Five-Stage Model of the Mental Activities Involved in Directed Skill Acquisition".
12. Gervais, J. (2016). "The operational definition of competency-based education". The Journal of Competency-Based Education, 1: 98–106. https://onlinelibrary.wiley.com/doi/full/10.1002/cbe2.1011
Competencies are defined as networks of KSAOs that collectively define a set of related performance
elements, along with any collected evidence that the learner has achieved mastery of these elements at some
gradated level of performance (journeyman, master, etc.). A competency framework embodies this network
and establishes the relationships between those elements. In traditional education, competencies are pursued
by each learner according to a prescribed curriculum. This is both culturally driven and necessary from a
practical, resource-management perspective. As seen in planning for its Ready Relevant Learning (RRL)
program13, the US Navy is migrating a Sailor’s initial training phases into a shortened format, then
developing new instructional-design techniques to shift the remaining learning requirements to on-the-job
training with performance support aids.
Competency: The DoD defines a competency as “an observable measurable pattern of knowledge, abilities,
skills, and other characteristics that individuals need to successfully perform their work.”14 This conceptual
definition does not specify the supporting characteristics that make competencies actionable across a
learning ecosystem, like an implementation of TLA. To that end, the ADL Initiative proposes an expansion
on the previous definition: competencies are trusted, discrete, and encapsulated representations of KSAOs
and other characteristics at the individual level necessary for performing a role within a specific context.
Competencies can be assessed and are severable under a specific context. Severability varies depending on
how the KSAO is trusted and demonstrated within a network of competencies. Competencies must be
functional on a global scale to evolve with learners over time. For example, soft skills, like time management, are critical during individuals' development and subsequent careers, but are rarely a
component of their formal records. A model of how competencies relate as a network is a requirement for
maximizing the return of a competency service within a TLA implementation.
Competency Object: An umbrella term used to describe any abstract statement of learning or performance
expectations, and information related to the statement. Statements can be learning outcomes, competencies
per se, learning objectives, professional roles, topics, classifications/collections, etc. A competency object
might include the following:
• A unique identifier, competency name, and detailed information
• Descriptions of required activities and behaviors (pre-requisites, physical requirements, reading levels, ethics, etc.)
• Associated Knowledge, Skills, Attitudes, and Other facets (KSAOs)
• Enabling Learning Objectives and Terminal Learning Objectives (ELOs/TLOs)
• Tasks, conditions, and standards
• Assessments, certifications, qualifications
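To make these elements concrete, the following sketch models a competency object as a simple Python data structure. The field names mirror the bullets above; they are illustrative only and are not drawn from any particular specification.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CompetencyObject:
    """Illustrative competency object; fields mirror the list above."""
    identifier: str                      # unique identifier
    name: str                            # competency name
    description: str                     # detailed information
    required_behaviors: List[str] = field(default_factory=list)   # pre-requisites, reading levels, etc.
    ksaos: List[str] = field(default_factory=list)                # associated KSAOs
    learning_objectives: List[str] = field(default_factory=list)  # ELOs/TLOs
    tasks_conditions_standards: List[str] = field(default_factory=list)
    assessments: List[str] = field(default_factory=list)          # assessments, certifications, qualifications

obj = CompetencyObject(
    identifier="https://example.org/competencies/0001",
    name="Hydraulic system troubleshooting",
    description="Diagnose and isolate faults in rotary-wing hydraulic systems.",
    ksaos=["K: hydraulic theory", "S: pressure testing", "A: attention to detail"],
    learning_objectives=["TLO-1: isolate a simulated hydraulic fault"],
    assessments=["capstone practical exam"],
)
print(obj.name, len(obj.ksaos))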
Competency Framework: A competency framework is a collection of competencies that provide
clarification about successful performance in different roles at different levels throughout the organization.
Competency frameworks are a network of KSAOs (or other characteristics) that can have many non-exclusive relationships. These frameworks can be associated with other constructs outside of a required
learning outcome. A competency framework organizes the KSAOs needed to perform successfully in a
work setting, such as a role, occupation, or industry, or within a pure learning setting such as an academic
or vocational-training institution. The competency framework can be used as a resource for developing
curriculum and selecting training materials, identifying licensure and certification requirements, writing
role descriptions, recruiting and hiring workers, and evaluating employee performance.
13 "Ready Relevant Learning (RRL) Official Website." U.S. Navy Hosting. Accessed May 14, 2019. https://www.public.navy.mil/usff/rrl/Pages/default.aspx.
14 DoD Competency Definition, DoDI 1400.25, June 2017. http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/140025/140025_vol250.pdf
The creation of competency frameworks requires extensive planning and investment. There is no
universally consistent way of creating a competency framework. Many existing frameworks are
represented by text scanned into Portable Document Format (PDF) files. They tend not to be easily
searched, are not syntactically understandable to most computers, and have no capacity for
automated configuration management as the frameworks evolve over time. A complete framework should
include relationships to potential or allowable evidence (in the form of experiences or assessments from the
work environment) that correlate to a demonstration of achievement for each competency by providing
trusted evidence of mastery.
Competency Management System: A competency management system provides the foundation for
strategic talent-management practices such as workforce planning, acquiring top talent, and developing
employees to optimize their strengths. The ability to preserve granular data on KSAOs and maintain
traceability between captured evidence and the state of an individual offers significant improvement over
the current credential-oriented practices in talent management. Effective and automated competency
management can create a real-time and predictive inventory of the capability of any workforce. Competency
management is expected to provide these key capabilities:
• Enriched understanding of expected behaviors and performance. The quickest path to improving performance starts by knowing the target performance.
• Improved talent management. The mastery estimates about competence inform leadership about current and future capability. Data and analytics about employees' skills and knowledge are used by the competency management system as evidence to predict or assert confidence about a specific competency object.
• Optimized talent development. Enables the process of establishing training goals and plans that link to individual goal attainment, career planning, and succession planning. It also facilitates a personalized learning plan by aligning content with competencies specific to an individual's role within the organization.
• Enhanced talent pipeline. Competency management enables on-demand information about an individual's competency mastery and readiness to move into next-level or other critical roles. An individual's Career Trajectory is derived from their prior experience, education, and expertise and is influenced by numerous external and internal components.
• Improved operational efficiency. Competency management facilitates business-driven learning and development, eliminates non-value-added training, highlights strengths to be further developed, flags critical skill gaps for mitigation, and generates higher levels of employee and leader satisfaction with their overall experience with the organization.
• Integrated broker. There are numerous approaches to building competency frameworks. Competencies are encapsulated differently across industries, geographic regions, training domains, and even across our military services.
A Competency Management System preserves all KSAO data within a framework and aligns competencies
with organizational goals, drives experience-building opportunities for individuals within the organization,
and acts as a linkage between competencies and other talent-development processes. By preserving data
throughout a learner's career, this system can act as a brokerage service built on a layer of trusted data that
encapsulates the relationship between human performance and mission outcomes. Figure 11 depicts the
roll-up of KSAO data.
Figure 11. KSAO Roll-Up Competency - The evidence used to infer competency is derived from authoritative
sources aligned with the KSAOs in any number of competency frameworks and preserved for further analysis and/or
proof.
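A minimal sketch of the roll-up depicted in Figure 11 follows. The confidence weights, identifiers, and the simple weighted average are illustrative assumptions, not the TLA's prescribed adjudication algorithm; the point is only that assertions from authoritative sources are preserved and then aggregated per competency.

# Each assertion preserves its source and a confidence weight (illustrative values).
assertions = [
    {"ksao": "K: hydraulic theory", "competency": "comp:hydraulics", "score": 0.9, "confidence": 1.0, "source": "schoolhouse exam"},
    {"ksao": "S: pressure testing", "competency": "comp:hydraulics", "score": 0.7, "confidence": 0.6, "source": "on-the-job observation"},
    {"ksao": "S: pressure testing", "competency": "comp:hydraulics", "score": 0.8, "confidence": 1.0, "source": "capstone event"},
]

def roll_up(assertions, competency):
    """Confidence-weighted average of all assertion scores mapped to a competency."""
    relevant = [a for a in assertions if a["competency"] == competency]
    total_weight = sum(a["confidence"] for a in relevant)
    if total_weight == 0:
        return None
    return sum(a["score"] * a["confidence"] for a in relevant) / total_weight

print(round(roll_up(assertions, "comp:hydraulics"), 3))  # 0.815 (weighted proficiency estimate)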
Competency vs. Credential: A network of competencies typically has varying mastery levels as part of its
credentialing model. Previous levels contribute to the next level of mastery, and competency elements
within the various levels may atrophy over time from disuse. Competencies are valued by the trust
associated with them. Legacy credentialing models are dependent on the social value of groups of people
trusting each other. The future of digital competencies derives trust from immutable data associated with
discrete evidence that cannot be repudiated. These data can build from legacy constructs like certificates
and degrees but preserving the raw elements of individual experiences for persistent consideration changes
the possibilities for future education and training.
Credentials: A trusted exchange format that encapsulates a single competency or a set of competencies
into an artifact from an accrediting organization. A credential is issued by an entity with authoritative power
and provides proof of an individual’s qualification or competence in a subject. These artifacts range widely
from a college degree to a professional certificate to a badge or a micro-credential. Possessing a credential
not only helps one to prove competency and capability within a field, it also serves as verification that the
individual is properly trained and equipped to carry out their duties within their specific vocations or
disciplines.
In the current paradigm, credentials don’t typically include data about the curricula or experience associated
with the award of the credential. This is seen in degrees, where many years of abstract study and practical
demonstration are distilled into a degree, associated with a transcript, and tied to a grade point average. For
example, the domain of electrical engineering, governed by Accreditation Board for Engineering and
Technology (ABET), Inc.,15 has an extensive requirement list for what an institution must do to have its
bachelor’s degree program accredited. At the point of conferring a degree, the student becomes a
professional, and most of the performance data collected in obtaining the degree is lost. The degree,
transcript, and GPA are trusted, but they do not provide much detail on the performance capabilities of the
graduates. In theory, credentials can be issued at any level of granularity in relation to a competency and
may “roll up” in many cases. Examples include assignments “rolling up” to transcript-recorded grades,
stacking professional certifications, or apprenticeships. Types of credentials include:
• Badges: Represent a way of acknowledging achievements or skill acquisition in a granular fashion. A badge is designed to motivate behavior, recognize achievement, and establish credibility. Badges capture knowledge, skills, and accomplishments that help transfer learning across different communities. Digital badges have emerged to document knowledge about a subject, professional development, leadership, or other accomplishments. The Open Badge16 initiative and other standards are built on the concept of capturing accomplishments, relating them to a graphical depiction, and then providing a validation process for assigning them. They do not necessarily need to represent attainment of a unique competency but have value in terms of social learning and motivation.
• Certification: A formal, but typically voluntary, process that recognizes and validates an individual's qualifications in a certain subject. Certification is earned by individuals to assure they are qualified to perform a role or task through the acknowledgement of developmental achievement. Typically, a non-governmental entity grants a time-limited recognition to an individual after verifying that they have met a pre-determined criterion. Certificates verify that a professional has achieved a specific level of competence in a specific subject area and assure employers that the individual can handle the challenges associated with the role. Certifications are typically earned from a professional society or a commercial entity, like Microsoft or Cisco, and must be renewed periodically, generally through continuing education units and/or a reassessment of qualifications/capabilities. Examples include a Certified Project Manager from the Project Management Institute17, or a Microsoft Certified Solutions Expert (MCSE)18. Unlike licensure, individuals do not typically need to be certified to engage in given occupations. Sometimes, however, certifications become so important that they are quasi-mandatory.
• Licensure: A mandatory process by which a governmental agency grants time-limited permission to an individual to engage in a given occupation after verifying that he or she has met pre-determined and standardized criteria. Examples include registered nurses19, real estate agents20, or professional engineers21. The goal of licensure is to ensure that licensees possess sufficient competency to ensure that public health, safety, and welfare are reasonably well-protected.
• Micro-Learning and Micro-Credentials: Like badges, micro-credentials are performance-based assessments that allow learners to showcase their growing skills. Micro-credential is a term used to reference a subset of competencies significant enough to report (such as clearance to perform a medical procedure with a new device); a micro-credential can be unique or build towards a more substantial credential. Micro-credentials are typically on-demand, shareable, and personalized. In some professional communities, individuals collect micro-credentials to document continuing education units that apply towards re-certification. To put micro-credentials in a broader context, learners typically have a choice in what credentials they want to pursue and can create their own developmental plans.
15 ABET Engineering program requirements http://www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2017-2018/
16 https://www.imsglobal.org/sites/default/files/Badges/OBv2p0Final/index.html#intro-example
17 www.pmi.org
18 https://www.microsoft.com/en-us/learning/mcse-certification.aspx
Credential Management System: A credential management system captures performance of individuals
and publishes a certificate, degree, or qualification that is trusted in an organization. In the civilian domain,
independent accrediting bodies play a critical role. In the military, independent accrediting bodies affect
some domains but are not required across all areas. Credentials as the key measurement artifact tend to
generalize people into designations that obfuscate granular metrics on their performance. This section
describes how measurements become obscured in credentialing systems, and establishes granularity and
traceability related to individual knowledge and skill as key concerns addressed by CBL.
Figure 12. Credential Roll-Up - The services all have varying methods to prove proficiency upon being assigned a role in an operational environment. However, these methods are often based on assumptions about typically non-expiring credentials. Because granular data and specific grading are removed and no traceable data set is provided, leadership at the working level must build its own internal model of what subordinates can do from limited data.
19 https://www.ncsbn.org/nclex.htm
20 https://www.kapre.com/resources/real-estate/requirements-to-get-a-real-estate-license/
21 https://ncees.org/engineering/pe/
The Armed Services Vocational Aptitude Battery (ASVAB) combined with a High School Diploma or
equivalent degree provides the baseline measurement for recruitment. After recruitment these few
measurements represent the total measure of an individual entering the accession training. Compared to the
individual’s previous data set captured in high school, college, work environments, and other sources, the
granularity of records has dropped to a scale that obscures what the individual is capable of beyond anecdotal
discussion. Specifically, demonstrating knowledge or skill upon entering the military requires self-identification by the individual and proof of ability in the workplace. Traceability is also sacrificed.
For example, the aptitude for complex math problems identified by the ASVAB does not indicate
what formal mathematics education a service member obtained prior to enlistment. Many military
schoolhouses pursue additional tutoring services for service members who enter with difficulty reading or
writing. If those records were preserved, overall learning could be tracked, and the DoD could better model
how to educate moving forward at the individual level.
Upon completion of training, records of individual performance are captured within a credential. Individual
performance, areas of strength or weakness, and soft-skill application are all lost, even though staff at
training commands take note of these factors either formally or informally. As a service member enters the
career phase, work experiences, as well as training and education experiences foster knowledge and skills
to enable the capture of credentials. Post-accession, service members are represented at the talent-management level by their credentials, rank, and current medical condition. As shown in Figure 12, "Fit and Fill" is a common euphemism in the DoD for finding the right combination of rank and Navy Enlisted Classification/Military Occupational Specialty (NEC/MOS) credentials and then placing the individual in a role.
Learner Profile: Learner profiles have the potential to empower personalized learning through the
accumulation of better data that can inform learning in new and meaningful ways. A learner profile typically
includes a broad range of data, including demographic data, data about student interests, learning
preferences, inter- and intra-personal skills, existing competencies, and additional competencies that still
need to be developed (in the personal, social-emotional, academic and career arenas). It might also include
information on credentials or other student-learning strengths, needs, and types of support that have been
successful in the past.
A learner profile might contain explicit goals about desired learner outcomes. When coupled with behaviors
and experiences of other students, however, inferred data may inform valuable feedback sessions addressing
fragile or missing skills, enable decisions about student placement, or highlight specific interventions
needed to support success. Some elements of learner profiles afford learners a great deal of control. Just as
learner profiles evolve over time to support personalization at the learner level, it is anticipated that the learner
will be able to obtain, share, and interact with these artifacts, as well as control access to them by others,
including other people, organizations, applications, and even other TLA components.
Safeguarding learner data to preserve privacy is an important legal and ethical consideration. This is
especially true in the future learning ecosystem, where data-rich learner profiles and student achievement
data are collected, stored and shared. The evolution from learner profile to student-owned pathways is key.
In education, a transcript is a copy of a student’s permanent academic record, typically including all courses
taken, all grades received, all honors received, and all degrees conferred to a student. A learner profile needs
to consider this formal information while also being dynamic enough to document other achievements as
individual learners are constantly changing and will become competent in new areas over time. This
information about learners should be used – with learners’ full cooperation – to determine their unique
paths to achieving proficiency in all required and desired competencies throughout life.
Experience Application Program Interface (xAPI): xAPI22 is a specification currently going through the
Institute of Electrical and Electronics Engineers (IEEE) - Learning Technology Standards Committee
(LTSC) standards-development process (https://www.tagxapi.org/). It facilitates the collection of data
about the wide range of experiences a person has (both online and offline). xAPI captures data in a
consistent format about a person’s or group’s activities from many technologies. Different systems can
securely communicate by capturing and sharing a stream of activities using xAPI's simple vocabulary. xAPI
specifies a structure to describe learning experiences and defines how these descriptions can be exchanged
electronically. The main components of xAPI are the data structure called statements and the data
storage/retrieval capability called the LRS. The xAPI specification has stringent requirements on the
structure of this data and the capabilities of the LRS. Statements employ an actor, a verb, and an object to
describe any experience. Each statement also includes timestamps and unique and traceable identifiers.
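A minimal xAPI statement, expressed as a Python dictionary, is sketched below. The actor, verb, and activity values are hypothetical examples, the LRS endpoint and credentials are placeholders, and the verb IRI shown is one of the commonly used ADL verbs; this is an illustration, not a normative example from the specification.

import json
from datetime import datetime, timezone

# A minimal actor-verb-object statement (illustrative values).
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "A. Smith",
        "mbox": "mailto:a.smith@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/activities/hydraulics-capstone",
        "definition": {"name": {"en-US": "Hydraulics capstone event"}},
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))

# Posting to an LRS would look roughly like the following (endpoint and
# credentials are placeholders; the version header is required by the spec):
#
#   import requests
#   requests.post(
#       "https://lrs.example.org/xapi/statements",
#       json=statement,
#       headers={"X-Experience-API-Version": "1.0.3"},
#       auth=("username", "password"),
#   )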
xAPI Profile: The “Companion Specification for xAPI Vocabularies”23 released in March 2016, describes
the general approach needed to strengthen the semantic interoperability of xAPI. The “xAPI Profiles
Specification”24 refines and extends the vocabulary specification to offer a common way to express
controlled vocabularies, provide instructions on the formation of xAPI statements, and describe patterns of
xAPI usage that provide additional context to a domain, device, or system. The xAPI Profiles Specification
also adds tools to support authoring, management, discovery and/or adoption, including additional data
elements and properties needed to support numerous use cases. A benefit of xAPI Profiles is that they can
standardize xAPI implementations across the “Internet of Learning Things (IoLT).”
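The skeleton below sketches the general shape of an xAPI Profile document as we read the public specification. The profile IRI, labels, dates, and the single verb concept are hypothetical, and readers should consult the xAPI Profiles Specification for the authoritative set of required properties.

import json

profile = {
    "@context": "https://w3id.org/xapi/profiles/context",
    "id": "https://example.org/profiles/aviation-maintenance",   # hypothetical profile IRI
    "type": "Profile",
    "conformsTo": "https://w3id.org/xapi/profiles#1.0",
    "prefLabel": {"en": "Aviation Maintenance Profile"},
    "definition": {"en": "Controlled vocabulary and statement patterns for maintenance training (illustrative)."},
    "versions": [{"id": "https://example.org/profiles/aviation-maintenance/v1.0",
                  "generatedAtTime": "2018-09-01T00:00:00Z"}],
    "author": {"type": "Organization", "name": "Example Training Command"},
    "concepts": [
        {   # one controlled-vocabulary verb (illustrative)
            "id": "https://example.org/profiles/aviation-maintenance/verbs/calibrated",
            "type": "Verb",
            "inScheme": "https://example.org/profiles/aviation-maintenance/v1.0",
            "prefLabel": {"en": "calibrated"},
            "definition": {"en": "Indicates the actor calibrated a piece of equipment."},
        }
    ],
}

print(json.dumps(profile, indent=2))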
22 xAPI Specification https://github.com/adlnet/xAPI-Spec
23 https://adl.gitbooks.io/companion-specification-for-xapi-vocabularies/content/
24 https://github.com/adlnet/xapi-profiles
APPENDIX B – TECHNICAL STANDARDS AND SPECIFICATIONS
Achievement Standards Network™ (ASN™): ASN™ provides access to machine-readable
representations of learning objectives and curriculum standards. It provides a Resource Description
Framework (RDF)-based framework based on the Dublin Core Metadata Initiative's (DCMI) syntax-independent abstract information model (DCAM). The ASN™ framework is made up of Standards
Documents and Statements. The Standards Document represents the overall competency framework, while
Statements represent the individual achievements within the overall framework that can be asserted. A set
of core properties define the relationships between the two in terms of an Entity-Relationship model.
Structural relationships replicate the relationships between different components of the Standards
Document, and semantic relationships define the relationships of meaning between statements (e.g.,
assertion of equivalence).
ASN™ is designed for interoperability and open access to learning objectives. It has seen wide adoption in
the K-12 community. The ASN Description Framework (ASN-DF) provides the means to create ASN™
profiles through inclusion of additional properties/classes from other namespaces. ASN-DF provides a
small vocabulary of classes and properties and a set of mechanisms for tailoring an ASN™ Profile based
on the Dublin Core's conceptualization of application profiles and description templates.
Competencies and Academic Standards Exchange™ specification (CASE™): IMS Global created the CASE™ specification to define how systems exchange and manage information
about learning standards and/or competencies in a consistent and digitally-referenceable way. CASE™
connects standards and competencies to performance criteria and provides a way to transmit rubrics
between various platforms. The CASE™ Service specification is the definition of how systems achieve the
exchange of information about learning standards and/or competencies. Within CASE™, the underlying
structure of both a competency and an academic standard are represented using the same data model. The
data model is composed of three core constructs:
• The Root Definition Document – the top-level structure that collects the set of statements that define the individual competencies/academic standards. Within CASE, this structure is identified as the CFDocument.
• The Set of Composition Statements – the set of statements into which the top-level competency/academic standards have been decomposed. Within CASE, this structure is identified as the CFItem.
• The Rubric – the detailed definition of how it can be determined that mastery of the associated competency/standard has been achieved. This requires the definition of specific criteria used for each of the scores that can be awarded during an assessment. Within CASE, this structure is identified as the CFRubric.
The CASE™ specification defines internal relationships between statements like Parent/Child, Precedes, Is
Related To, Exemplar, and Is Part Of. All competency frameworks published in CASE™ format can be
linked together in a network of equivalent or aligned competencies. Having universal identifiers for each
competency makes it possible for any tools or applications to share information between systems. The
composition and logical structure of the data model for this hierarchical structure uses a linked list approach.
Competencies must be related to programs, courses and modules, assessments, course materials, and
learners’ records of achievement. The CASE™ rubric is also suited for establishing the criteria and
performance levels for earning credentials.
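The sketch below illustrates the CASE™ constructs named above as Python dictionaries. Only a handful of fields are shown; the identifiers and titles are hypothetical, the field names follow our reading of the CASE data model and may differ in detail, and the association uses an isChildOf type corresponding to the Parent/Child relationship listed above.

import json

# Top-level framework document (CFDocument) - illustrative fields only.
cf_document = {
    "identifier": "doc-0001",
    "title": "Aviation Maintenance Competency Framework (example)",
    "uri": "https://example.org/case/documents/doc-0001",
}

# Individual competency statements (CFItems) decomposed from the document.
cf_items = [
    {"identifier": "item-0001", "CFDocumentURI": cf_document["uri"],
     "fullStatement": "Troubleshoot rotary-wing hydraulic systems."},
    {"identifier": "item-0002", "CFDocumentURI": cf_document["uri"],
     "fullStatement": "Perform pressure testing with standard tooling."},
]

# Relationships between statements, using one of the association types named above.
cf_associations = [
    {"originNodeURI": "item-0002", "associationType": "isChildOf", "destinationNodeURI": "item-0001"},
]

print(json.dumps({"CFDocument": cf_document, "CFItems": cf_items,
                  "CFAssociations": cf_associations}, indent=2))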
Occupational Information Network (O*Net): Valid data is essential to understanding the rapidly
changing nature of work and how it affects the workforce. O*NET was developed under the sponsorship
of the U.S. Department of Labor/Employment and Training Administration (USDOL/ETA) to facilitate and
maintain a skilled workforce. Central to the project is the O*NET database, containing hundreds of
standardized and occupation-specific descriptors on almost 1,000 occupations covering the entire U.S.
economy.
Every occupation requires a different mix of knowledge, skills, and abilities and is performed using a variety
of activities and tasks. As shown below in Figure 13, the O*NET database identifies, defines, describes,
and classifies occupations through Experience Requirements (Training, Licensing), Work Requirements
(Basic and Functional Skills), Occupation Requirements (Activities, Context), Worker Characteristics,
Occupation Specific Information, and Occupational Characteristics. The O*NET Database includes a
Military Transition search that connects occupations in the O*NET database to other classification systems
within the Military. This is accomplished through data from the Military Occupational Classification
(MOC) system at the Defense Manpower Data Center (DMDC). This capability is also available via web
services25. Other resources include “Careers in the Military” and links to Army, Navy, Marine Corps, and
Air Force COOL projects. Google26 recently released its own career recommender that leverages
O*NET and military classification data.
Figure 13. O*NET Content Model – This model provides the framework that identifies and organizes the
distinguishing characteristics of an occupation. The model defines the key features of an occupation as a standardized,
measurable set of variables called descriptors. This hierarchical model starts with six domains that expand to 277
descriptors.
25 https://www.onetcenter.org/crosswalks.html
26 Google Veteran Job Finder https://qz.com/work/1371331/a-simple-tweak-to-google-search-now-helps-veterans-to-find-jobs/
MedBiquitous Competency Framework (MEDBIQ CF): The use of outcome and competency
frameworks is a growing part of healthcare education and maintenance of certification. The MedBiquitous
Competency Framework, ANSI /MEDBIQ CF.10.1-2012, is a technical standard for representing
competency frameworks in eXtensible Markup Language (XML). MedBiquitous is accredited by the
American National Standards Institute (ANSI). Organizations that publish competency frameworks can do
so in this standard format, making it easier to integrate competency frameworks into educational
technologies like curriculum management systems.
The data model establishes relationships between competency objects that narrow or broaden the definition
of overall competency. The MedBiquitous Competency Object specification provides a consistent format
and data structure for defining a competency object, an abstract statement of learning or performance
expectations, and information related to the statement. Statements can be learning outcomes, competencies,
learning objectives, professional roles, topics, classifications/collections, etc. The Competency Object may
include additional data to expand on or support the statement. This specification is meant to be used in
concert with complementary specifications, including the MedBiquitous Competency Framework
specification.
Reusable Competency Definition (RCD): The IEEE Standard for Learning Technology—Data Model for
Reusable Competency Definitions (RCD) WG 20 will revise the 2008 Reusable Competency Definition
(RCD) (1484.20.1) standard. The current standard defines a data model for describing, referencing, and
sharing competency definitions, primarily in the context of online and distributed learning. The standard
provides a way to represent formally the key characteristics of a competency, independent of its use in any
specific context. It enables interoperability among learning systems that deal with competency information
by providing a means for them to refer to common definitions with common meanings. WG 20 intends to
take the Credential Ecosystem Mapping Project's mapping of competency metadata
and update RCD to represent the most common elements that are found in multiple standards addressing
competencies and competency frameworks. This effort started in September 2018 and will be monitored
for progress.
Credential Transparency Description Language
(CTDL)27: CTDL is a vocabulary comprised of terms that
are useful in making assertions about a credential and its
relationships to other entities. CTDL defines terms for
Properties, Classes, Concept Schemes, and/or Data Types.
Like activity streams, the CTDL dictionary is comprised of
nouns (classes) and verbs (properties) that allow us to make
simple statements that enable a rich description of
credential-related resources. This includes credentialing
organizations and specific subclasses of credentials, such as
Degrees, Certificates, and Digital Badges.
CTDL is modeled as a directed graph using the W3C’s
Resource Description Framework (RDF) for describing
data on the web. Like an activity stream, the “triple” is the
basic grammatical construct in making CTDL assertions about things and is comprised of three simple components: a subject, a predicate, and an object. The comprehensiveness and scope of this specification make it ideal for defining TLA Credentials. The CTDL is comprised of a small set of primary classes identifying major entities in the credentialing ecosystem.
Figure 14. CTDL Class Structure
27 http://credreg.net/ctdl/handbook
As shown in Figure 14, CTDL provides clear guidance about the structure and terms used to describe
attributes and assert relationships between all classes for each credential. Two super classes called Agent
and Credential define families or sets of subclasses used throughout the CTDL. The primary classes also
include the ConditionProfile used to define sets of constraints on the credential described by an
Assessment Profile, Learning Opportunity Profile, or Competency Framework. These are used to express
learning goals and outcomes in terms of knowledge, skills, and abilities across a diverse set of profiles.
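A small sketch of a CTDL-style description follows, written as JSON-LD in a Python dictionary. The credential, organization, and condition values are hypothetical, and the class and property names (ceterms:Certification, ceterms:ownedBy, ceterms:ConditionProfile, etc.) reflect our reading of the CTDL handbook; they should be checked against the current vocabulary rather than treated as normative.

import json

credential = {
    "@context": "https://credreg.net/ctdl/schema/context/json",
    "@type": "ceterms:Certification",                             # a Credential subclass
    "@id": "https://example.org/credentials/hydraulics-cert",     # hypothetical identifier
    "ceterms:name": {"en": "Hydraulic Systems Maintainer Certification (example)"},
    "ceterms:description": {"en": "Illustrative credential description."},
    "ceterms:ownedBy": [{"@type": "ceterms:CredentialOrganization",
                         "ceterms:name": {"en": "Example Training Command"}}],
    # A ConditionProfile constraining how the credential is earned (illustrative).
    "ceterms:requires": [{"@type": "ceterms:ConditionProfile",
                          "ceterms:description": {"en": "Pass the capstone assessment."}}],
}

print(json.dumps(credential, indent=2))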
Credential Registry: The Credential Registry28 is both a repository of information regarding credentials
and a set of services that make it easier to use that information. Together with CTDL, it provides for the
capturing, connecting, archiving, and sharing of information about credentials (including degrees,
certificates, certifications, licenses, apprenticeships, badges, micro-credentials, and more). It also registers
credentialing organizations that issue these credentials to individual credential holders, including
universities, colleges, schools, industry and professional associations, certification organizations, military,
and more.
Credential Registry also allows users to see what various credentials represent in terms of competencies,
transfer value, assessment rigor, third-party approval status, and much more. The Credential Registry works
with CTDL to allow the storage of credentials that have been expressed using that specification. The
Credential Engine project’s developers are using the Dublin Core Application Profiles process to create
systems that communicate virtually all aspects of credentials. The Technical Advisory Committee (TAC)
promotes collaboration across, and harmonization of, standardization initiatives that are developing data
models, vocabularies, and schemas for credentials and competency frameworks, as well as related
competency information such as criticality ratings and assessment data typically captured with a wide
variety of systems. The Credential Registry uses technology and CTDL to capture, link, update, and share
up-to-date information about credentials so it can be organized and centralized within the Registry, and
then made searchable by customized applications, and linkable from anywhere on the open Web.
The Credential Registry serves as a complement to the CTDL specification and meets required TLA registry
capabilities for defining credentials. It is important to note that this is only a registry and, as such, only
stores the description of a credential in the abstract sense. From this perspective, the registry can be used to
house credentials created with other specifications (e.g., CASETM). The Credential Registry is not a store
of personal information or of personally-obtained credentials. That information will be stored in the TLA
Learner Profile.
IMS Global Open Badges: Open Badges29 are information-rich visual representations of verifiable
achievements earned by recipients. Open Badges is a technical specification and set of associated open-source software designed to enable the creation of verifiable credentials across a broad spectrum of learning
experiences. The Open Badges standard describes a method for packaging information about
accomplishments, embedding it into portable image files as digital badges, and establishing resources for
its validation and verification. Open Badges contain detailed metadata about achievements, such as who
earned a badge, who issued it, and what it means. The Open Badges 2.0 specification expands on this
capability to include versioning, endorsements, and full use of JavaScript Object Notation Linked Data
(JSON-LD)30.
28 https://www.credentialengine.org/credentialregistry
29 https://www.imsglobal.org/activity/digital-credentials-and-badges
The Open Badges Vocabulary defines several data classes used to express achievements that are
understandable within software and services that implement Open Badges. There are three core data classes:
Assertions, Badge Classes, and Profiles. Each data class is a collection of properties and values, and each
defines which are mandatory and optional, as well as the restrictions on the values those properties may
take. Each Badge Object may have additional properties in the form of an Open Badges Extension, a
structure that follows a standard format so that any issuer, earner, or consumer can understand the
information added to badges. Assertions are representations of an awarded badge, packaged for
transmission as JSON objects with a set of mandatory and optional properties.
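The sketch below shows the general shape of an Open Badges 2.0 Assertion as a Python dictionary. The assertion URL, recipient, badge URL, and evidence values are hypothetical; the property names reflect our reading of the Open Badges 2.0 vocabulary.

import json

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.org/assertions/123",          # hypothetical assertion URL
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "a.smith@example.org",
    },
    "badge": "https://example.org/badges/hydraulics",    # URL of the BadgeClass being awarded
    "issuedOn": "2018-09-01T00:00:00Z",
    "verification": {"type": "hosted"},                  # verified by fetching the hosted assertion
    "evidence": "https://example.org/evidence/capstone-2018",
}

print(json.dumps(assertion, indent=2))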
Open Badges are expressed as linked data so that badge resources can be connected and referenced as a
permanent fixture in a TLA Learner Profile. The attributes defined by the Badge Class and Issuer Profile
are clear, and the rules surrounding the specification drive interoperability. Finally, the use of URLs for
Badges and many of their properties (Evidence, Criteria, Image, Issuer, etc.) enables them to be easily
accessed on the Web.
W3C Verifiable Claims: The mission of the Verifiable Claims Working Group (VCWG) is to make
expressing and exchanging credentials that have been verified by a third party easier and more secure on
the Web. Driver’s licenses are used to claim that we can operate a motor vehicle, university degrees can be
used to claim our education status, and government-issued passports enable holders to travel between
countries. This specification provides a standard way to express these sorts of claims on the Web in a way
that is cryptographically secure, privacy respecting, and automatically verifiable. A verifiable claim is a
claim that is effectively tamper-proof, and whose authorship can be cryptographically verified. Multiple
claims may be bundled together into a set of claims.
The basic components of a set of verifiable claims include a Subject Identifier, Claims about the Subject,
Claim Set Metadata, and a Digital Signature. Both the Entity Profile Model and Entity Credential Model
consist of a collection of name-value pairs that are referred to as properties in the data model. The link
between the two is in the ID property. The Entity Profile Model defines a subject identifier in the id
property, while the claims section of the Entity Credential Model uses the id property to refer to that subject
identifier. Unlike the properties in the Entity Profile Model, the properties in the claim section of the Entity
Credential Model are claims made by an entity about the subject defined in an entity profile.
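To illustrate the components listed above (a subject identifier, claims about the subject, claim-set metadata, and a digital signature), the sketch below assembles a claim set as a Python dictionary. The structure and field names are purely illustrative, not the working group's normative serialization, and the signature value is a placeholder.

import json

claim_set = {
    # Claim Set Metadata (illustrative)
    "issuer": "https://example.org/issuers/training-command",
    "issued": "2018-09-01T00:00:00Z",
    # Claims about the Subject, keyed by the subject identifier
    "claims": {
        "id": "did:example:a-smith",          # subject identifier (hypothetical)
        "holdsCredential": "Hydraulic Systems Maintainer Certification",
        "proficiencyLevel": "journeyman",
    },
    # Digital Signature (placeholder value; a real claim set would carry a
    # cryptographic proof that makes the claims tamper-evident)
    "signature": {"type": "ExampleSignature2018", "signatureValue": "..."},
}

print(json.dumps(claim_set, indent=2))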
Additional research is required to better evaluate its applicability to the TLA. However, the defined data
model and structure of this specification imply that it could connect to the TLA both semantically and
syntactically. The data associated with verifiable claims are largely susceptible to privacy violations when
shared with other TLA components, so many of the Verifiable Claims – Use Cases should be examined to
inform how Personal Identifying Information (PII) connected to credentials can be protected within the
security boundaries of the TLA.
30 JSON-LD https://json-ld.org/
Comprehensive Learner Record (CLR): IMS Global led the development of the CLR (https://www.imsglobal.org/activity/comprehensive-learner-record), formerly known as the Extended Transcript. It is designed to support traditional academic programs as well as co-curricular and competency-based education in order to capture and communicate a learner's achievements in verifiable digital form. The
vision for the learner record is transformative in its potential, beyond providing relevant student
competencies and skills. Closely following the developing registrar guidance of American Association of
Collegiate Registrars and Admissions Officers (AACRAO), the vision for CLR is a secure, student-centered
digital record for the 21st century.
The CLR standard is a new generation of secure, verifiable digital records for learners, containing all manner of learning experiences and achievements, including courses, competencies, skills, co-curricular achievements, prior learning, internships, and experiential learning. The variety of credentials targeted by this standard could make it viable as a TLA Credential standard. The CLR appears to be versioned according to the Extended Transcript standard. At the time of this report, the level of support for these efforts and for the emerging IMS Pathways specification is unknown.