
Project No.

NCHRP 20-07/ Task 422

User Review of the AASHTO ‘Guide for the Local Calibration of the
Mechanistic-Empirical Pavement Design Guide’

FINAL REPORT

Prepared for
Transportation Research Board

of

The National Academies of Sciences, Engineering, and Medicine

Georgene M. Geary, P.E.


GGfGA Engineering, LLC
Stockbridge, Georgia
June 2018

Permission to use any unoriginal material has been


obtained from all copyright holders as needed.

ACKNOWLEDGEMENT OF SPONSORSHIP

This work was sponsored by the American Association of State Highway and Transportation
Officials, in cooperation with the Federal Highway Administration, and was conducted in the
National Cooperative Highway Research Program, which is administered by the Transportation
Research Board of the National Academies of Sciences, Engineering, and Medicine.

DISCLAIMER

The opinions and conclusions expressed or implied herein are those of the contractor. They are not
necessarily those of the Transportation Research Board, the Academies, or the program sponsors.


User Review of the AASHTO ‘Guide for the Local Calibration of the
Mechanistic-Empirical Pavement Design Guide’

TABLE OF CONTENTS

LIST OF FIGURES AND TABLES........................................................................................ iv

SUMMARY/ABSTRACT ....................................................................................................... v

1. BACKGROUND AND INTRODUCTION ........................................................................ 1

1.1. Local Calibration History ........................................................................................... 2


1.2. Survey Results Summary and Discussion ................................................................... 6
1.3. Global and Local Calibration.................................................................................... 10

2. CURRENT CONTENTS OF THE LOCAL CALIBRATION GUIDE ............................. 22

2.1. Categorized Contents of the Local Calibration Guide ................................................ 22


2.2. Minimum Information Necessary to Perform Local Calibration................................. 26
2.3. General/Summary Comments on Current Local Calibration Guide ............................ 26

3. PROPOSED SPECIFIC REVISIONS TO THE LOCAL CALIBRATION GUIDE .......... 29

3.1. General Revisions .................................................................................................... 29


3.2. Detailed Description of New Local Calibration Guide .............................................. 30

REFERENCES ...................................................................................................................... 41
a. References included in Task 422 Report ......................................................... 41
b. State Local Calibration Research Reports ....................................................... 41
c. National/Other Reports related to Local Calibration ........................................ 46
d. Thesis Reports related to Local Calibration ..................................................... 46

APPENDIX A Global Calibration Factors and Equations ................................................... A-1


APPENDIX B Survey Form and Detailed Survey Results ................................................. B-1
(Excel document with complete survey results available on request)
APPENDIX C Examples for Section 7 and 8 of New Guide ............................................... C-1


LIST OF FIGURES AND TABLES

FIGURES

Figure 1- June 2004 MEPDG Review Software CD .................................................................... 4


Figure 2- Do you currently use AASHTO Pavement ME either for your State/Provinces
pavement designs or for comparison designs (parallel effort)?..................................................... 7
Figure 3- Have you ever performed (or contracted) a local calibration of AASHTO PMED? ...... 8
Figure 4 – Indicate any of the conditions you encountered in performing the latest local
calibration (Check all that apply) ................................................................................................ 9
Figure 5- Hypothesis testing...................................................................................................... 30
Figure 6- Example of Sampling Template ................................................................................. 32
Figure 7- Global Calibration Factors From PMED Local Calibration Webinar 3 ....................... 35
Figure 8- Potential format for Step 7.2 results (From PMED Local Calibration Webinar 3)....... 36
Figure 9- Step #8 clarification ................................................................................................... 36
Figure 10- JPCP Calibration factors from PMED Webinar 3 ..................................................... 37
Figure 11- Identifying Trends in Data (From LCGuide, Figure A2-17) ..................................... 36
Figure 12- Computation of standard deviation of the distress (from PMED webinar 3).............. 38
Figure 13 – Table of SEE (Se) values ........................................................................................ 38

TABLES

Table 1- NCHRP Projects Related to AASHTO Pavement ME Design ....................................... 2


Table 2- MEPDG to DARWin-ME to AASHTO Pavement ME Design Chronology................... 4
Table 3- Responses to “Have you performed (or contracted) specialty studies for PMED?” .......... 8
Table 4- Global Calibration Factors and Local Factors by State ................................................ 17
Table 5- Flexible Pavements, recent State reports calibration factors ......................................... 18
Table 6- Rigid Pavements, recent State reports calibration factors ............................................. 20
Table 7- CRCP, recent State reports calibration factors ............................................................. 21
Table 8 - Minimum information necessary to perform Local Calibration ................................. 26


SUMMARY/ABSTRACT

This project, “User Review of the AASHTO Guide for Local Calibration of the MEPDG,” prepared a critical
review of the Local Calibration Guide from the viewpoint of a general pavement practitioner. The results
show that local calibration is at the same time very straightforward and yet more complex than anticipated.
The equations used can be straightforward and the statistical tests are relatively standard, but this review
makes evident the myriad of factors involved, including the need for practitioner input and expertise in
local calibration. The first two sections of this report provide background and support for the changes
advocated in Section 3. A detailed description of the changes recommended to improve the Local Calibration
Guide was developed from this effort and is included as Section 3. Local calibrations performed with an
improved Local Calibration Guide developed from these recommendations are expected to be easier to perform
and easier to compare with other local calibration efforts. Previous local calibration efforts have led to
improvements in the models in the PMED software; this is expected to continue, and it is therefore also
recommended that a process be developed to gather and disseminate the latest information on calibration to
all users.


1. BACKGROUND AND INTRODUCTION

Pavements are the most ubiquitous and complex engineered systems in our environment.
Pavements connect every large city and every small city. Pavements connect to your house,
school, work, parks and other recreation areas. Millions of miles of pavement are walked, biked
or driven on every day by millions of people. How can something that is everywhere be so
complex? Pavements are man-made engineered systems constructed mainly from local materials,
shaped by local conditions and local history, and affected by local environmental conditions and
local practices. A whole area of study (geotechnical engineering) is devoted to geological
differences and soil engineering, the foundation of all pavements. Pavements are composed
of different materials (aggregates, binders, fillers, etc.) that are most cost-effective when procured
locally. Pavements are exposed to very different climates, ranging from the relatively constant 75
degrees F of South Florida to the below-freezing temperatures of Northern Ontario. Pavements
are exposed to different levels of loading, from a pavement on a major shipping route to a rural
pavement connecting two remote cities. Human nature and personal preference, along with the
previously mentioned local conditions and history, have all had some influence on the different ways
pavements are structured, designed and constructed in different locations. Is it any surprise that a
mechanistic model describing the complex, diverse engineered system that we collectively call
‘pavements’ requires local calibration?

The mechanistic design model most widely used for pavements is AASHTOWare Pavement ME
Design (PMED). PMED takes the inputs of the different materials, different layer configurations,
different climates and different loadings and converts the resulting stresses into distresses while also
considering the incremental effects of environment and fatigue. PMED has been nationally
calibrated with Long Term Pavement Performance (LTPP) data to address size effects and other
circumstances that prevent going straight from small-sample test results to full-size, in-place
field conditions without some adjustment. The PMED software is licensed worldwide.
Implementation of the software is an ongoing effort, and local calibration is an important part of
implementation. Local calibration is also a necessary part of validation after major software
updates. The Pavement ME National Users Group recently identified local calibration and
verification as the most common difficult or challenging part of implementation. In that same
meeting report, 14 of the 25 responding States noted that they had performed local calibration.
Based on the survey performed for this project, 28 States have identified that they have performed
local calibration, with 3 others currently in the process of performing one. Several States have also
performed local calibration more than once. Local calibration will be an ongoing process used by
States as improvements are made to the PMED software, so methods that make it less challenging
in practice, and especially in the minds of the users, are key. More users are performing local
calibration, and now is an opportune time to improve and streamline the process. Based on the
volume of users and interest in this area, and the knowledge that has been gained in performing
the numerous local calibrations, it is clear that changes are needed to the AASHTO Guide for the
Local Calibration of the Mechanistic-Empirical Pavement Design Guide. It is also clear that a more
automated method to perform calibrations and recalibrations is a current need of the AASHTO
Pavement ME Design software user community. As of the development of this document, that
effort has already started as part of a separate AASHTOWare initiative.

This report consists of the following sections: (1) a history of AASHTO Pavement ME software
versions and the changes that would affect local calibration, including a summary of the Survey on
Local Calibration performed in April 2018; (2) a review of the current contents of the Local
Calibration Guide; and (3) specific recommendations on revisions to the Local Calibration Guide.
The Reference section includes the standard references cited in the report as well as a listing, by
State, of the local calibration reports identified, the thesis reports identified, and the national or
other reports related to local calibration. The Appendices contain (A) a listing of all current and
previously documented global calibration factors and equations, (B) details from the 2018 Local
Calibration survey and its results (summarized in Section 1), and (C) examples of what is expected in
the next version of the Local Calibration Guide (as described in Section 3).

1.1 Local Calibration History

AASHTO Pavement ME Design started as NCHRP Project 1-37, Development of the 2002
Guide for the Design of New and Rehabilitated Pavement Structures, in 1996. That project led to
the much larger NCHRP 1-37A, Development of the 2002 Guide for the Design of New and
Rehabilitated Pavement Structures, Phase II. Table 1 lists the past and current NCHRP
projects in the pavements area most related to local calibration. Other NCHRP projects have
also been completed in the materials area (NCHRP 4-36 and 9-30A) but are not included here.

Table 1- NCHRP Projects Related to AASHTO Pavement ME Design


1-37A      Development of the 2002 Guide for the Design of New and Rehabilitated Pavement Structures
1-40       Facilitating the Implementation of the Guide for the Design of New and Rehabilitated Pavement Structures
1-40(A)    Independent Review of the Recommended Mechanistic-Empirical Design Guide and Software
1-40(D)    User Manual and Local Calibration Guide for the Mechanistic-Empirical Pavement Design Guide and Software
1-40(D)01  Technical Assistance to NCHRP and NCHRP Project 1-40A: Versions 0.9 and 1.0 of the M-E Pavement Design Software (Flexible)
1-40(D)02  Technical Assistance to NCHRP and NCHRP Project 1-40A: Versions 0.9 and 1.0 of the M-E Pavement Design Software (Rigid)
1-41       Models for Predicting Reflection Cracking of Hot-Mix Asphalt Overlays
1-42       Top-Down Fatigue Cracking of Hot-Mix Asphalt Layers – Phase 1
1-42A      Models for Predicting Top-Down Cracking of Hot-Mix Asphalt Layers
1-47       Sensitivity Evaluation of MEPDG Performance Prediction
1-51       A Model for Incorporating Slab/Underlying Layer Interaction into the MEPDG Concrete Pavement Analysis Procedures (Completed 12/2016)
1-52       A Mechanistic-Empirical Model for Top-Down Cracking of Asphalt Pavement Layers (Active)
1-53       Proposed Enhancements to Pavement ME Design: Improved Consideration of the Influence of Subgrade and Unbound Layers on Pavement Performance (Active)
1-59       Proposed Enhancements to Pavement ME Design: Improved Consideration of the Influence of Subgrade Soils Susceptible to Shrink/Swell and/or Frost Heave on Pavement Performance (Pending)
1-61       Evaluation of Bonded Concrete Overlays on Asphalt Pavements (Active)

As noted above, a number of follow-on NCHRP projects have been completed and continue to this
day as this most complex, mechanistic-based pavement design software is continuously improved. A
number of these NCHRP projects have been incorporated into upgrades of the software, but projects
recently completed or still in progress will also lead to changes that will impact local calibration.
AASHTOWare has started a more regular schedule for future software updates. The AASHTO
website notes that none of the FY 2017 enhancements “have any impact on the transfer function
calibration coefficients”. It also notes that, based on a technical audit of the software, a global
recalibration of the flexible and semi-rigid pavements is planned in FY 2018. As the ongoing
projects in Table 1 emphasize, local calibration should not be considered a one-time event, but part
of the evolution and improvement of the pavement design software. As the calibration factors
increase in number and complexity, it is increasingly important that a relatively standard method be
developed and computerized to assist in recalibrating the software.


1.1.1 Chronology of AASHTO Pavement ME Design Software Changes related to Local Calibration

In the span of less than two decades, AASHTO Pavement ME Design has evolved from the original
MEPDG Design Guide software CD (shown in Figure 1), based on the original NCHRP 1-37A project,
to the current AASHTOWare Pavement ME Design (PMED) software, Version 2.4.

Figure 1- June 2004 MEPDG Review Software CD

A timeline of these changes is noted in Table 2. Note that the term MEPDG has generically been,
and continues to be, used to describe the software and the design process collectively, even though
the software name changed to DARWin-ME in 2011 and to Pavement ME Design (PMED) in 2013.
PMED has also been used to refer collectively to DARWin-ME and PMED. In literature published
after 2013, DARWin-ME was often noted as AASHTO Pavement ME Design 1.0, whereas literature
published before 2013, such as the FHWA TechBrief on the AASHTOWare website and research
reports completed before 2013, notes the 2011 version as DARWin-ME, which was its name before the
2013 rebranding to PMED. Because of the number of references that use PMED Version 1.0 to describe
DARWin-ME, PMED Version 1.0 is also noted in Table 2. It should also be noted that the 2013 AASHTO
Pavement ME Version 1 software is actually the same as the previous DARWin-ME; the software
itself did not change in 2013, only the name.

Table 2- MEPDG to DARWin-ME to AASHTO Pavement ME Design Chronology


2004  NCHRP 1-37A completed (original MEPDG); Review software provided to States (June 2004)
2007  NCHRP 1-40D Recalibration of MEPDG (MEPDG Version 1.0)
2007  AASHTO approves MEPDG as Interim AASHTO Guide
2008  NCHRP 1-40B Local Calibration and Manual of Practice project
      AASHTO MEPDG Manual of Practice, July 2008 (Interim Edition)
      Guide for Local Calibration of the MEPDG
2009  MEPDG Version 1.10 (September 2009)
2010  AASHTO adopts Local Calibration Guide – November 2010 (current version)
2011  AASHTO DARWin-ME released (also later called Pavement ME Design Version 1.0)
      NCHRP 20-07/Task 288 complete (rigid recalibration for CTE)
2012  NCHRP 20-07/Task 317: Update of MEPDG Manual of Practice started
2013  AASHTO Pavement ME Design (PMED) Version 1 (Build 1.3.29, dated 3/26/2013)
      AASHTO PMED Educational Version released (1.5.08)
2014  January – AASHTO PMED Version 2 (Build 2.0.19) (Citrix capability and layer-by-layer asphalt rutting added)
      Changes affecting calibration: asphalt rutting model
      April/May – FHWA MEPDG Local Calibration Webinars presented
        - Introduction to Local Calibration
        - Preparing for Local Calibration
        - Determining the Local Calibration Coefficients
        http://me-design.com/MEDesign/Webinars.html
      April 2014 – NCHRP 20-07/Task 327 results presented to Joint Technical Committee on Pavements (rigid pavement recalibration)
      July 2014 – PMED Version 2 (2.1.22) (backcalculation added, subgrade moduli added to sensitivity analysis)
2015  AASHTO MEPDG Manual of Practice, Second Edition (current version) approved by AASHTO; based on PMED Version 1
      PMED Version 2 (Build 2.2)
      Changes affecting calibration: reflection cracking model added, semi-rigid options added, CTE changes from NCHRP 20-07/Task 327 fully integrated
2016  PMED Version 2 (Build 2.3.0)
      Changes affecting calibration: added SJPCP analysis model based on BCOA-ME
2017  PMED Version 2.4 (Build 2.3.1)* – added Backcalculation Tool (BcT) Version 1.0
      *The latest User Group report noted that V2.4 was the formal designation for this version, but the AASHTOWare website and the software itself note it as V2.3.1

The chronology presented in Table 2 was developed using a combination of sources: Release Notes
from the AASHTOWare website for the most recent changes (2013 to current) and literature
reviews from different State local calibration studies (Iowa, Georgia, Washington State and
Wyoming) for the 2004 to 2013 changes. Some of the research reports also reference MEPDG V
0.6, V 0.9, V 1.003 or V 1.1, but no dates for those changes were found in the literature. Now that
the Pavement ME Design software is under the umbrella of AASHTOWare, any future changes
should be more controlled and accurately recorded.
The version of the software used to perform a local calibration is needed in order to compare
calibration values across States or within a region; comparing precision and bias between different
versions of the software can be misleading if the global calibration values changed.


It should be noted that changes in the PMED software do not automatically require
recalibration, even if a model in the software changes. If an Agency is not using the pavement
type being modeled (e.g., semi-rigid bases in PMED Build 2.2), then there is no need to
recalibrate if that is the only change in the software.

1.1.2 MEPDG Users Groups


The MEPDG User Groups started with a peer exchange initiated by Wisconsin DOT in 2013
involving the Mid America AASHTO Region (MAASTO). FHWA and AASHTO expanded this
in 2014 to all four AASHTO Regions (NASTO, SASHTO, MAASTO and WASHTO).
An outgrowth of this was the development of National User Group meetings. Two National
meetings have been held: (1) December 14-15, 2016 in Indianapolis, Indiana, and (2) October 11-
12, 2017 in Denver, Colorado. The third annual National Users Group meeting is scheduled for
November 7-8, 2018 in Nashville, Tennessee. Detailed reports and appendices for both of these
meetings are available on the TPF-5(305) pooled-fund website. The TPF-5(305) transportation
pooled-fund study is actively supporting the MEPDG User Groups. TPF-5(305), Regional and
National Implementation and Coordination of ME Design, is led by FHWA and currently includes
20 State DOTs and 2 Canadian Provinces (Manitoba Transportation, AL, AZ, CA, CO, FHWA,
FL, GDOT, IA, IL, KS, KY, MDOT SHA, MI, MO, NC, ND, NV, Ontario MOT, PA, SC, VA,
WI ). The study site is located at: http://www.pooledfund.org/Details/Study/549.

1.2 Survey Results Summary and Discussion


A survey of the members of the AASHTO Committee on Materials and Pavements related to local
calibration was conducted in April 2018. A copy of the survey and a summary of the results are
found in Appendix B. An Excel document with the complete results is also available. This section
provides the highlights from the short survey. Based on responses from 46 State DOTs and Ontario
(47 total responses), 40% of the respondents have already implemented AASHTO PMED in some
form, and another 35% are considering implementation, as shown in Figure 2. The specific States
that responded to this question are as follows:
Asphalt and Concrete (15): Arizona, Colorado, Georgia, Indiana, Kentucky, Missouri, Nevada,
New Jersey, New Mexico, Pennsylvania, South Carolina, Utah, Virginia, Wyoming, and
Ontario
Asphalt Only (1): Maine
Concrete Only (3): California, Florida, North Dakota
No, not yet (16): Alabama, Connecticut, Delaware, Iowa, Massachusetts, Mississippi, Nebraska,
New York, North Carolina, Ohio, Rhode Island, South Dakota, Tennessee, Texas, Vermont,
Washington
No plans (4): Alaska, Illinois, Minnesota, New Hampshire
No, never (1): Montana
Other (6): Arkansas*, Kansas*, Maryland, Oklahoma*, Oregon, Wisconsin *currently calibrating


Figure 2- Do you currently use AASHTO Pavement ME either for your State/Provinces pavement
designs or for comparison designs (parallel effort)?

Twenty-nine States and Ontario noted that they have performed a local calibration, as shown in
Figure 3. Of these, twenty different research reports have been located that reflect local calibration
efforts for different States. The individual efforts may not have calibrated all of the models, and in
a few cases they used only Level 3 inputs, but all appear to have been genuine efforts at local
calibration. Another three States, noted as Other in the map (Maine, Nebraska and South Carolina),
are in the process of calibration. Fourteen States noted they have not performed a local
calibration. As shown in the map, many States in the Northeast noted they had not calibrated, while
some others (Maine) have been identified as performing calibration in-house. This could be an
opportunity for a potential Transportation Pooled-Fund research project for local calibration of
neighboring States or the region as a whole, depending upon how similar their pavement practices
are.


Figure 3- Have you ever performed (or contracted) a local calibration of AASHTO PMED?

Twenty-four States noted that they had performed local calibration after 2010, when the Local
Calibration Guide was first published. Twenty of these States noted that the Local Calibration
Guide was used in the research. Review of the available research reports showed that all of the
post-2010 reports recognized the Local Calibration Guide as a reference, and many specifically
referenced following the process in the Local Calibration Guide.

A question on specific studies related to PMED elicited the responses noted in Table 3. A
number of States reported efforts in materials and traffic studies in the latest National Users
Group meeting report [Pierce, 2018], and these also showed up as the most frequently performed
specialty studies in this survey.

Table 3 - Responses to “Have you performed (or contracted) specialty studies for PMED?”
Specialty Studies Performed? Number of
responses
Traffic Data Analysis 26
Materials Catalog/Categorization 25
Sensitivity 20
Training 18
Climate Specific Study 11
None 6


Sensitivity studies were also identified by a high number of respondents, and discussion and use of
sensitivity studies were also found in the individual research reports on local calibration. Local
calibration is clearly recognized as just one part of implementing PMED.

Another question related to the conditions or issues encountered as part of local
calibration; the responses are noted in Figure 4. Sample size was identified by most respondents. Both
examples used in the current Local Calibration Guide noted sample size issues. Sample size issues
also appear in the National User Group report, especially as related to JPC pavements. Other
potential responses dealt with pavement management system (PMS) data. At least one State noted
that it used PMS data both directly and with conversions. This combination was seen in the local
calibration research reports and was also shown in the examples in the Local Calibration Guide.

Figure 4 – Indicate any of the conditions you encountered in performing the latest local calibration
(Check all that apply)

States checking “unexpected results” included North Carolina, whose local calibration
research effort identified issues with the unbound aggregate base that led to additional
research on characterizing aggregate bases [Chow et al., 2013].

Other issues noted:


• Lack of Material inputs for older projects
• Needed new testing equipment (CTE- coefficient of thermal expansion)
• Needed to do additional materials testing


• Lack of WIM (Weigh-in-motion) data

The last question in the survey was open-ended and asked: Did you encounter any other special
situations in local calibration you can share? A portion of the responses is noted below:
• “it was found that the model in the Manual of Practice (MOP) is questionable for both the
alligator and Longitudinal cracking, the standard error increases as the amount of
alligator or longitudinal cracking increases.”
• “cementitious stabilized material base layers were never calibrated at the national level.”
• “Climatic model (soil PI, passing #200, etc.) is very sensitive to the IRI. The critical
pavement performance criteria for pavement design in PavementME is always the IRI,
never the fatigue cracks. To local calibrate the soil in the EICM in relation to IRI is close
to impossible. “
• “"borrowed" a couple of LTPP sites from our neighboring states “
• “New materials used recently will require different calibration “
• “the major distress for new HMA pavement is top-down alligator cracking which is not
modeled in MEPDG. “
• “JPC-longitudinal cracking and rutting are not modeled in MEPDG. “
• “There was either little to no cracking, or a lot of cracking in our calibration sections. “
• “major distress for”… “new (doweled) JPCP is multi-cracking (including longitudinal
and transverse cracking) and rutting due to studded tire wear, but longitudinal cracking
and rutting are not modeled in MEPDG.”
• “Concrete cracking quantities made calibration difficult. There was either little to no
cracking, or a lot of cracking in our calibration sections. During the most recent
recalibration of the concrete models, a model to estimate the curl/warp input was
developed. This was done to induce more cracking in our design to better match our
experience with concrete.”
• “Did not have any field validation of rutting per layer and our pavement management
data does not distinguish between top-down and bottom-up cracking.”

The comment about borrowing LTPP sites points to another area where pooled-fund research may be
beneficial: States could identify pavement sections that use similar materials and conduct a pooled-fund
study to monitor those pavements as a region, like a regional LTPP program. The lack of distress, or of a
variety of distresses, has also been identified in the MEPDG User Group meetings.
Other comments relate to conditions that are not modeled in the software or to specific concerns
with the results of the local calibration.

1.3 Global and Local Calibration

Calibration of the AASHTO Pavement ME Design software involves correlating the measured
distresses to the distresses predicted by the software. A perfect correlation would have measured
values equal to predicted values. In the real world there are differences between the measured and
predicted values. The standard error of the estimate (SEE or Se) is a measure of these differences,
and therefore a measure of the accuracy of the model. A smaller SEE value typically, but not always,
indicates a more accurate model.
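For illustration only (this is a minimal sketch, not code from the LCGuide or the PMED software), the bias and SEE for a set of calibration sections can be computed from measured and predicted distress values along the following lines; the function name and the n-1 degrees of freedom are assumptions of this sketch:

```python
import numpy as np

def bias_and_see(measured, predicted):
    """Bias = mean prediction error; SEE (Se) = standard error of the estimate.
    Illustrative sketch; agencies may define degrees of freedom differently."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = predicted - measured
    bias = errors.mean()
    see = np.sqrt(np.sum(errors ** 2) / (len(errors) - 1))  # assumed n-1 degrees of freedom
    return bias, see

# Example with hypothetical rutting values (inches)
bias, see = bias_and_see(measured=[0.20, 0.35, 0.15, 0.40],
                         predicted=[0.25, 0.30, 0.22, 0.33])
print(f"bias = {bias:.3f} in, SEE = {see:.3f} in")
```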


1.3.1 Global Calibration and the Manual of Practice (MOP) 2008 and 2015 Versions
Global calibration was performed for the original NCHRP 1-37A project, which developed the
original MEPDG software, and then again for the NCHRP 1-40 projects, which also developed the
Local Calibration Guide and the original Mechanistic-Empirical Pavement Design Guide, A
Manual of Practice (MOP). AASHTO adopted the MOP officially in 2008. The AASHTO version
(AASHTO, 2008) notes that the calibration factors included in that document are based on the
1-40D project, but the rigid calibration factors are the same as reported in the original 1-37A project.
The current edition of the MOP (AASHTO, 2015) does not specify the basis for the calibration
factors noted for the flexible or rigid models, although they have changed from the 2008 MOP,
especially for rigid pavements (see Appendix A). It is assumed, based on documentation from the
NCHRP 20-07/Task 317 project (the project that was the basis for the 2015 MOP), that some of
the changes from the 2008 to the 2015 version were based on documented technical errors found
in the original 2008 MOP, changes to align with the then-current (when the project started in 2011)
AASHTO DARWin-ME software, and changes to the rigid calibration factors due
to the NCHRP 20-07/Task 288 project. But as shown in Appendix A, the Task 288 values are
different from what is in the 2015 MOP. Some time elapsed between the completion of Task 317
and the publication of the 2015 MOP, and those changes most likely occurred during this
time span.

The 2008 and 2015 versions of the MOP both note that they include the following prediction
models:

HMA- Surfaced Pavements and HMA Overlays


o Total Rutting: HMA, unbound aggregate base and subgrade rutting
o Non-Load Related Transverse Cracking
o Load Related Alligator Cracking, Bottom-Up
o Load Related Longitudinal Cracking, Top-Down
o Reflective Cracking
o Smoothness

PCC Pavements and PCC Overlays


o JPC: Faulting, Load Transfer efficiency (LTE)
o JPC Transverse Cracking
o JPC joint spalling
o CRCP – LTE
o CRCP Punchouts
o Smoothness for JPC and CRCP

The 2015 MOP also identified that it changed calibration coefficients in the following models:
o HMA Rutting model (Unbound materials), ks1
o HMA Fatigue Cracking, kf2 and kf3


o HMA Thermal cracking, kt (Levels 1, 2 and 3)


o HMA Rutting model, k2r and k3r
o JPC Faulting model, C1, C2, C3, C4 and C7
o CRC punchout model, APO, alpha PO and beta PO

The effect of these changes for the Flexible and Rigid models is discussed below.

FLEXIBLE:
Based on the calibration coefficients in the current PMED software (Version 2.4,
Build 2.3.0), the flexible (HMA) calibration coefficients in the software are the same as those noted in
the 2008 and 2015 versions of the MOP except as described below. These three changes seem
minor but could have a major impact on local calibration since they affect several different models
(cracking, rutting and IRI). Because of this, it is questionable to compare local calibrations (i.e., SEE
(Se) values) that were performed using different versions of these values.

The HMA fatigue/load-related cracking coefficients (kf2 and kf3) went from negative values to
positive values. These coefficients are both non-zero exponents that relate the allowable
loads to the strain (kf2) and to the dynamic modulus of the asphalt (kf3), respectively, as noted in the
equation below. A local calibration performed using these values as negative instead of positive
would not be comparable, since it would totally change the equation that is being calibrated. Two
recent local calibration research reports noted these values as negative, as shown in Table 5 in
Section 1.3.2.
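For reference, the bottom-up fatigue transfer function generally takes the following form (a simplified sketch of the MOP equation; the exact notation and constants should be taken from the MOP edition being used), where the kf terms are the global coefficients and the βf terms are the local calibration factors:

$$ N_f = 0.00432\,C\,\beta_{f1}k_{f1}\left(\frac{1}{\varepsilon_t}\right)^{\beta_{f2}k_{f2}}\left(\frac{1}{E}\right)^{\beta_{f3}k_{f3}} $$

Here N_f is the allowable number of load repetitions, ε_t is the tensile strain at the critical location, E is the HMA dynamic modulus, and C is a function of the effective binder content and air voids. Written with the reciprocals of strain and modulus, kf2 and kf3 are positive; the algebraically equivalent form with ε_t and E in the numerator carries negative exponents, which is one likely source of the sign difference noted above.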

The HMA rutting coefficients (k2r and k3r) were switched: the 2008 MOP k2r became the 2015
MOP k3r and vice versa. These calibration coefficients are also non-zero exponents, k2r for
loading and k3r for pavement temperature, as shown in the equation below. Since these were not
simply linear changes, comparing different local calibrations that used these factors unswitched
would also totally change the equation that is being calibrated. Two recent local calibration
research reports identified the switched values, as shown in Table 5 in Section 1.3.2.
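Similarly, the HMA rutting transfer function generally takes the following form (again a simplified sketch, with notation following the description above: n is the number of load repetitions, T is the pavement temperature, kz is a depth/confinement term, and the βr terms are the local calibration factors):

$$ \frac{\varepsilon_p}{\varepsilon_r} = k_z\,\beta_{1r}\,10^{k_{1r}}\,n^{\beta_{2r}k_{2r}}\,T^{\beta_{3r}k_{3r}} $$

where ε_p is the accumulated plastic strain and ε_r is the resilient strain in the HMA layer. Because the loading and temperature terms are raised to different powers, swapping k2r and k3r changes the shape of the predicted rutting curve rather than simply rescaling it.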

The HMA IRI equation includes a Site Factor (SF) component. The method to compute the SF changed
from the 2008 MOP to the 2015 MOP, as shown below. Because the comparison of
measured and predicted IRI in Figure 5-6 of the 2008 MOP is exactly the same as Figure 5-6 of the
2015 MOP, the change appears to have had no effect on the global calibration. But if different
SF equations were used in local calibration, a difference in calibration would be expected. Some of the
reports noted in Section 1.3 used the 2008 MOP equation, and some used the 2015 MOP equation.


Michigan identified this difference and used the 2015 MOP equation. Iowa identified an SF equation
different from either of the ones noted in the MOP versions.

2008 MOP:
SF = AGE[0.02003(PI + 1) + 0.007947(Precip + 1) + 0.000636(FI + 1)]

2015 MOP:
(revised SF equation; not reproduced here)

IOWA:
SF = AGE(1 + 0.5556 FI)(1 + p200) x 10^-6
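To make the comparison concrete, the two SF forms quoted above can be evaluated as follows (a minimal sketch; the function names and example inputs are hypothetical, not PMED identifiers, and the Iowa form follows the reconstruction shown above):

```python
# Illustrative only: the two site factor (SF) forms quoted above.
def sf_2008_mop(age_years, pi, precip_in, fi):
    """SF per the 2008 MOP equation quoted above."""
    return age_years * (0.02003 * (pi + 1) + 0.007947 * (precip_in + 1)
                        + 0.000636 * (fi + 1))

def sf_iowa(age_years, fi, p200):
    """SF per the Iowa report's equation as reconstructed above."""
    return age_years * (1 + 0.5556 * fi) * (1 + p200) * 1e-6

# Example: a 10-year-old section with PI = 12, 35 in. precipitation,
# freezing index 400, and 60 percent passing the No. 200 sieve
print(sf_2008_mop(10, pi=12, precip_in=35, fi=400))
print(sf_iowa(10, fi=400, p200=60))
```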

RIGID:
A comparison of the calibration coefficients in the current PMED software (Version 2.4,
Build 2.3.0) with the 2008 and 2015 MOPs, as shown in Appendix A, shows that the rigid (JPCP and CRCP)
models have been globally calibrated a number of times, and the global calibration
coefficients have therefore changed numerous times.
Throughout the different global recalibrations, the C1 and C2 values for the transverse cracking
model and the IRI calibration coefficients (C1-C4) are the only values that stayed constant.
Realization of an issue with the method of testing the concrete coefficient of thermal expansion
(CTE) drove the first separate recalibration, NCHRP 20-07/Task 288, which was completed in the
2011/2012 timeframe. Unusual results using the Task 288 recalibration values, including
significantly thinner pavements due solely to the different calibration factors, drove the second
recalibration, NCHRP 20-07/Task 327, which started in 2012/2013. The concrete recalibration project, Task
327, was completed in April 2014. The recommended values from the Task 327 project are the
values now in the current PMED software, Version 2.3. Neither the Task 288 nor the Task 327
values are in the 2015 MOP, but both have apparently been used by States performing
local calibration. Two different versions of the Task 288 values were found and are noted in Appendix
A, one from the Task 327 report and the other from a journal paper that described local calibration
of JPC (Mu et al., 2016). Michigan DOT also recently identified significant slab thickness
decreases between PMED Versions 2.0 and 2.2. Michigan DOT has recently recalibrated its
rigid pavements again, this time to PMED Version 2.3.
The Task 327 report notes that the use of different software versions and different
calibration data sets led to the concerns that prompted the need for the Task 327 project in the first
place. Based on the many changes in the software, it is not recommended to compare local
calibrations to global calibrations (i.e., SEE (Se) values) that were performed using different
definitions of the global coefficients.


1.3.2 Local Calibration


As part of this project, 20 different State DOT research reports have been identified and collected
that involve local calibration of MEPDG/ PMED. To prepare this document, these reports were
reviewed, along with other documentation (NCHRP Reports, MEPDG User Group reports, TRB
papers, and thesis documents) related to local calibration.
Based on a review of the current literature and the changes in calibration coefficients and
models, many of the local calibrations that have been performed previously have increased our
overall understanding of the models used in the PMED, but trying to compare these reports can be
complicated. Reasons for this include the items discussed in Section 1.3.1 and the following:

- The LCGuide left out many of the details that were in other documents (the MEPDG 1-37A reports
and appendices, and the MOP); therefore, the different calibration efforts, even though they
followed the LCGuide, did not perform the calibration the same way.
- The LCGuide did not explain the meaning and use of the calibration coefficients, apparently
relying on the MOP definitions and the discussions in the MEPDG appendices; therefore, some
calibration efforts may have calibrated coefficients incorrectly (e.g., changing
mechanistic model coefficients using only statistical methods and no laboratory testing).
- The LCGuide did not provide a clear means of consistently documenting the results of a
local calibration effort; therefore, it is not easy to compare different calibration efforts.
Some provided SEE (Se) values for split samples separately, and some did not clearly
note what global calibration factors were used.
- Because the global calibration coefficients have changed a number of times,
and in some cases the mechanistic models themselves have changed, it is difficult to
compare the different SEE (Se) values, since they are based on different global
calibration factors.

The LCGuide was developed in 2007/2008 and has not been updated in the decade since, but the
software has changed numerous times and much knowledge has been gained from the different
local calibration efforts:
- Issues in the models have been identified and have led to new research to improve the models.
- Outside factors have also complicated some of the models (e.g., the CTE testing issue).

Tables 4 through 7 show the results of local calibration efforts from some of the most recent State-
sponsored research reports. Each of these studies was performed by a different research team.
Table 4 identifies the number of pavement sections that were used in each calibration and how the
results were validated. As noted, the software used in the reports varied from DARWin-ME 3.1
to the latest version of PMED (Version 2.3). Although the different SEE (Se) values are
included in the table, comparing these values is not recommended, for the reasons discussed in
Section 1.3.1. Many State reports identified at least one distress that was not prevalent in their
state, and the SEE (Se) reflects the distress values used, so if the distress is small the SEE
(Se) will be small. This is evident in the case of Kansas in Table 4: Kansas has an extremely low
SEE (Se) value for faulting, but it also has extremely low faulting. The States using more
than the minimum number of sections typically used their PMS data, and the ones with the
smaller number of sections did field testing to confirm some of the inputs. Some States did not
calibrate all of the models. Even when comparing specific models using the most recent reports, the
global calibration values that were used were not the same, and other considerations include:
• If PMS data was used, the State had to convert its method of collecting cracking data into
the method used by PMED
• The validation procedures used were not always clearly defined
• Some provided different options for local calibration factors (noted as value1|value2 in
the cell)
• Different methods were used for calibration in some cases
• Some research reports identified potential errors in the previous (2008) or current MOP
(2015)

Some specific facts from the latest research reports are noted below:

Iowa noted that both the FAULTMAX equation and the IRI equation for JPC in the 2008 MOP
did not match the software they were using (PMED 2.1.24). (The equations are also the same in
the 2015 MOP.) Iowa also defined and used additional statistical values beyond R2 and SEE (Se)
to evaluate regression models: LOE (line of equality) R2 and MAPE (mean absolute percentage
error). Although they calibrated to V2.1.24, they reported on preliminary studies comparing the
results of V2.2 with their local calibration factors developed using V2.1.24. They noted
differences in IRI prediction between PMED V2.1.24 and V2.2, which they attributed to changes
in the Freezing Index factor and in the EICM (Enhanced Integrated Climatic Model). Iowa
also noted that their flexible pavement distress was mainly reflective cracking, so they will need
to fully recalibrate to PMED Version 2.2, which now includes the new reflection cracking
model. Iowa used two approaches to IRI calibration, one using the local calibration values for
the distresses and one using the national values for the distresses.
Kansas used PMS data for the distresses in this calibration effort and Level 3
traffic, climate and materials inputs. They found differences in their HMA sections related to
subgrade modulus, so they calibrated the asphalt models based on two levels of modulus. They did not
identify any bottom-up asphalt cracking in their PMS, so they did not calibrate the asphalt
bottom-up fatigue cracking model. They identified very small faulting values, mainly less than
0.01 inch, in their data (the accuracy of the faulting measurement itself is +/- 1/32 inch, or about
0.03 inch). They did not include calibration of the JPC transverse cracking model in the report,
but they noted that they did calibrate the JPC IRI model.
Louisiana used a new process for fatigue and rutting, setting one of the fatigue (bf2) and
rutting (br2) factors to 1 and changing only the other two; they noted that a description of the
process is available as a published paper [Wu et al., 2015]. They also used a finite element
analysis to determine a new C1 factor for JPC cracking based on their typical 20-foot joint
spacing instead of 15 ft. They checked the reasonableness of their local calibration results by
running 15 projects with both the 1993 design procedure and the calibrated PMED and comparing
the results. They did not include soil-cement projects in their calibration due to concern with the
reflective cracking model.
Michigan compared results for PMED V2.0, 2.2 and 2.3 in their latest report (2017).
That report identified changes from V2.0 to V2.3 that would require recalibration of the rigid
pavement models but not the flexible pavement models. They also had a concern (similar to Iowa)
that the freezing index was different between PMED V2.0 and V2.2 and that it was one of the factors
that affected their results. An earlier (2015) Michigan report shared a method to compute the flexible
pavement rutting contribution using a transverse profile.
Virginia used PMS data in their local calibration. They did not have enough JPC sections
to perform a calibration. They did not calibrate flexible top-down cracking, thermal cracking or
chemically stabilized layer fatigue, due either to pending revisions in the software or to a lack of
sensitivity to Virginia’s conditions. They noted a lack of initial IRI values for their projects and
were concerned that this may have been why they had difficulty calibrating the flexible
IRI model. They used residual plots to compare the rutting and fatigue errors and, using this method,
identified an overprediction of rutting at higher AADTT. Some example designs were
performed for different AADTT levels as a final check.

As the next version of the PMED software is expected to incorporate many improvements that will
require recalibration, many States will benefit from a new LCGuide that incorporates the needs
identified here. It is also appropriate to have software that assists in local calibration, to provide some
consistency in the specific methods and procedures that represent best practice for local
calibration.
As such, a new Local Calibration Guide is necessary that clearly and concisely describes the
intent of the calibration coefficients and also provides an outline for documenting the local calibration
coefficients that are developed, providing the most benefit to other States and to the calibrating State
itself for future calibrations. The new LCGuide should clearly note:
• Calibration coefficients that are mechanistic in nature and should be changed only based on
new laboratory studies
• Calibration coefficients that should/can be locally calibrated statistically
• Calibration coefficients that can be adjusted by statistically finding a “best fit” to the field data
• Calibration coefficients that can only be adjusted by trial and error based on PMED
software runs


Table 4- Global Calibration Factors and Local Factors by State


Local Cal Guide Arizona Iowa Kansas Louisiana Michigan Virginia
recommendation DarWIN PMED PMED 1.3 PMED PMED 2.0 PMED 1.3
ME 3.1 2.1.24 2.0.19 and
Recalibration
PMED 2.3
Min 30 for load Min 18 Not noted Min 18 Not noted Min 16 to 83 Not noted
Required N computed related cracking HMA & 21 specifically HMA & 21 specifically HMA and 11 specifically
JPC JPC to 101 JPC
58 35 New 28 71 New 25 selected 52
HMA Sites 60 HMA 33 Overlays from original
o/JPC 100
48 35 32 43 20 New JPC noted as
JPC Sites 8 Rehab too few to
calibrate
80/20 split or 90/10 split 70/30 split Split 80/20 split Bootstrapped Split sample
Validation
jackknife sample samples & jackknife
National &Local SEE - 7% L/14.8% N/0.55 N= N/A N=2.91 **N=7.64 N=3.1
HMA fatigue cracking V/0.55 L=N/A L=2.83 L=6.69 L=3.34
National &Local SEE - 0.10 inch L/0.11 inch N/0.09 N=.03 to .05 N=0.18 N=0.3425 N=0.183
HMA rutting V/0.09 L=0.02 L=0.07 L=0.0865 L=0.076
National & Local SEE - 18.9 in/mile L/8.7 in/mile N/15.21 N=3.4 to 6.5 N=15.20 N=14.7738 N=23.99
HMA IRI HMA over 9.6 in/mile V/15.09 L= 0.02 to L=13.15 L=13.9428 L=27.51
JPC 7.29 in/mi
National & Local SEE - 7% L/7.25% N/28.02% Did not do N=29.77 N=6.07 -
JPC cracking V/8.23% cracking L=8.73 L=4.93
National & Local SEE - 0.05 inch L/0.0225 N/0.24 <0.01 inch N=0.034 N=0.059 -
JPC faulting inch V/0.22 L=0.044 L=0.024
17.1 in/mile L/9.85 N/32.10 N=8.87 N=31.36 N=29.89 -
National & Local SEE -
in/mile V/4.97 L=9.68 L=23.58 L=9.83
JPC IRI
in/mile
V= Local calibration(Validation set) L= Local calibration, N= National calibration
** The Michigan report included 4 options that used different pavement sections (new, overlays and combinations), Option 1(new) is shown.
Kansas separated the analysis by subgrade modulus for flexible pavements, so 2 values are shown


Table 5- Flexible Pavements, recent State reports calibration factors


Distress Current Arizona Iowa Kansas Louisiana Michigan Virginia
Transfer
Software DarWIN PMED 2.1.24 PMED 1.3 PMED PMED 2.0 PMED 1.3
Function
PMED V 2.4 ME 3.1 in 2.0.19
Coefficient
(Build 2.3.1) 2012
Fatigue kf1 0.007566 “ “ “
Cracking kf2 +3.9492 “ -3.9492 -3.9492
kf3 +1.281 “ -1.281 -1.281
bf1 1.0 249.0087232 “ .01 “ 42.87
bf2 1.0 “ “ “ “ “
bf3 1.0 1.233411397 “ “ 1.05 “
C1bottom 1.00 “ “ “ 0.892 0.50 | 0.67 0.3190
C2bottom 1.00 4.5 “ “ 0.892 0.56 | 0.56 0.3190
C3bottom/C4 6000 “ “ “ “
C1top 7.00 2.32 0.438 | 4.5 - 3.32 | 2.97 -
C2top 3.5 0.47 “ - 1.25 | 1.2 -
C4top 1000 “ 36,000 - -
C3top 0 0 - -
AC Rutting k1r -3.35412 “ “ 0.1
k2r 1.5606 “ 0.4791 0.4791
k3r 0.4791 “ 1.5606 1.5606
B1r 1.0 0.69 “ 0.9 0.8 0.9453 0.687
B2r 1.0 “ 1.1 “ 1.3 “
B3r 1.0 “ “ “ 0.85 0.7 “
Unbound Coarse- 2.03 “
Rutting Grained, ks1
Coarse- 1 0.14 0.001 “ 0.0985 0.153
Grained, Bs1
C: ks1 * Bs1** 2.03 0.2842 0.00203
Fine- 1.35 “ “
Grained, ks1
Fine- 1 0.37 0.001 0.1281 | 0.40 0.0367 0.153
Grained, Bs1 0.3251
F: ks1 * Bs1** 1.35 0.4995 0.00135 0.54

Thermal L1Kt= 1.5 K1= 1.5| 1.5 - K1 = 0.75
Transverse Bt1 L2Kt= 0.5 K2= 0.5| 0.5 -
Cracking kt L3Kt= 1.5 K3=1.5 K3 =120| 3.6 - K3 = 4

AC IRI C1 40.0 1.2281 5 | 25 270 | 95 “ 50.372|21.43 “


C2 0.400 0.1175 “ 0.04 | 0.04 “ 0.4102|0.1600 “
C3 0.008 “ “ 0.001 | 0.001 “ 0.0066|0.0049 “
C4 0.015 0.0280 0.026 | 0.019 “ 0.0068|0.0271 “
Flex o/PCC 40.8 - 0.13 | 25 - - - -
C1
Flex o/PCC 0.575 - “ - - - -
C2
Flex o/PCC 0.0014 - “ - - - -
C3
Flex o/PCC 0.00825 - 0.02432 / - - - -
C4 0.019
IRI Site See section MOP 2015 See section MOP 2015 MOP 2008 MOP 2015 Not
Factor (SF) 1.3.1 1.3.1 identified
*Since the k and b factors simply multiply each other in the transfer equation for unbound rutting, the real factor ends up being k times b.
Some fields are blank; in those cases the report did not identify the factors used.


Table 6-Rigid Pavements, recent State reports calibration factors

Distress Current Software Arizona Iowa Kansas Louisiana Michigan Virginia


Transfer
PMED V 2.4 (Build DarWIN PMED PMED 1.3 PMED 2.0.19 PMED 2.3 PMED
Function
2.3.1) ME 3.1 in 2.1.24 1.3
Coefficient
2012
Transverse C1 2 “ 2.25 - 2.75 “ -
Cracking C2 1.22 “ 1.4 - “ “ -
C4 0.52 0.19 4.06 - 1.16 0.16 | 0.7 -
C5 -2.17 -2.067 -0.44 - -1.73 -2.81| -1.34 -
3.522*POW(CRACK, POW(9.87* - - POW(5.3116* - -
Standard
0.3415) + 0.75 CRACK, CRACK,0.3903)
Deviation
0.4012) + 0.5 + 2.99
Faulting C1 .595 0.0355 0.85 *N(M2015) 1.5276 0.4 |0.4 -
C2 1.636 0.1147 1.39 N(M2015) N(M2015) N(M2015) -
C3 .00217 0.004436 0.002 0.00164 0.00262 N(M2015) -
C4 .00444 1.1 E-07 0.274 N(M2015) N(M2015) N(M2015) -
C5 250 20000 250.8 “ “ “ -
C6 0.47 2.0389 0.4 0.15 N(M2015) N(M2015) -
C7 7.3 0.1890 1.45 0.01 N(M2015) N(M2015) -
C8 400 “ “ “ “ “ -
0.07162*POW(FAULT, POW().037 * - - POW(0.0097* - -
Standard 0.368) + 0.00806 FAULT, FAULT,0.5178)
Deviation 0.6532) + 0.014
+0.001
IRI C1 0.8203 0.6 0.11/ 0.03 “ “ 0.951 |0.42 -
3.48 “ “ 2.902 | -
C2 0.4417
9.39
1.22 0.04 / 9.38 “ 1.211 | 0.7 -
C3 1.4929
0.01
45.20 11.32 / 70 “ 47.056| -
C4 25.24
15.12 33.92
Initial Std 5.4 “ - - - - -
Dev
*N(M2015) denotes the national values in the 2015 MOP, see Appendix A
Kansas did not calibrate transverse cracking and Virginia did not calibrate JPC


Table 7- CRCP, recent State reports calibration factors

Transfer Current Software Arizona Virginia


Function
Coefficient
# CRCP sites 2 17
CRCP C1 2.0 “ N(M2015)
Punchouts C2 1.22 “ N(M2015)
C3 107.73 85 114.76
C4 2.475 1.4149 N(M2015)
C5 -0.785 -0.8061 N(M2015)
Standard 2.208*POW(PO, 1.5+ 2.9622*
Deviation 0.5316) POW(PO,0.4356)
IRI, CRCP C1 3.15 “ N(M2015)
C2 28.35 “ N(M2015)
Standard 5.4 “
Error


2. CURRENT CONTENTS OF THE LOCAL CALIBRATION GUIDE

As part of this project, the existing “Guide for the Local Calibration of the Mechanistic-
Empirical Pavement Design Guide” was categorized by content, purpose and understandability.
The results of this effort are presented in this section. General/summary comments are provided
at the end of this Chapter. Nomenclature used:
– The ‘MOP’ is the 2015 edition of the AASHTO Manual of Practice
– MEPDG is the previous term for the AASHTO PMED software
– SEE is the term used here to describe the standard error of the estimate (also noted
as Se in the Local Calibration Guide). It is a factor that relates to the difference
between the measured and predicted values in calibration.

2.1 Categorized Contents of the Local Calibration Guide


• The heading for each section of the Local Calibration Guide is noted below in BOLD.
• The review of purpose and content is provided under the bolded headings in italics.

Guide for the Local Calibration of the Mechanistic-Empirical Pavement Design Guide

1.0 INTRODUCTION
The introduction very generally familiarizes the reader with the MEPDG design procedure by
presenting the conceptual flow chart for the MEPDG design process. Distress prediction
models (transfer functions), calibration and validation are also discussed in a very generic
manner. The Introduction also notes that the performance models in the MEPDG were
calibrated on a global level.

2.0 TERMINOLOGY AND DEFINITION OF TERMS


2.1 Statistical Terms
Basic statistical terms are introduced alphabetically and defined without the use of any graphs,
examples or relationships to other terms. One term, Verification, is defined even though it is
noted as not being included in the Local Calibration Guide.

2.2 MEPDG Calibration Terms


This section is also a list of definitions. Five terms are noted, but two simply reference a term in
Section 2.1. The terms are alphabetical, and no relationship between them is clearly stated. The
MOP is referenced in relation to two of the definitions (Calibration Factors and Reliability).

2.3 Hierarchical Input Level Terms


This section is the same as what is in the current MOP as Chapter 4.2. It defines the Input
Levels 1, 2 and 3.


2.4 Distress or Performance Indicator Terms


This section is very similar to what is in the current MOP as Chapter 4.5 and 4.6. It defines the
different distresses (alligator cracking, longitudinal cracking, etc.). It also provides a
reasonable SEE [standard error of the estimate (se)] for some of the distresses (which is not in
the MOP 4.5 & 4.6).

3.0 SIGNIFICANCE AND USE


The importance of the SEE [standard error of the estimate (Se)] is discussed in this section. It
also notes that only the transfer functions or statistical models are calibrated; the mathematical
models are assumed to be accurate.

4.0 DEFINING ACCURACY OF MEPDG PREDICTION MODELS


4.1 Calibration
This section describes bias and precision in a model. It identifies methods to evaluate goodness-
of-fit (method of least squares using multiple regression, stepwise regression, principal
component analysis and principal component regression analysis) but does not describe them. It
references a NCHRP report that describes jackknifing. It notes that two approaches are
warranted: one for models that directly calculate magnitude of distress (i.e. rutting) and one for
incremental damage models (i.e. HMA fatigue cracking). It does not provide an example of how
these would be handled differently.
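For orientation, a minimal sketch of one common goodness-of-fit measure (the R² of measured versus predicted distress); this is an illustration only, not a reproduction of the regression procedures listed in the LCGuide:

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination of measured vs. predicted distress (illustrative only)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((measured - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical cracking percentages for four sections
print(r_squared(measured=[10, 20, 35, 50], predicted=[12, 18, 30, 55]))
```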

4.2 Validation
This section describes the basics of validation: an additional and independent set of data to test
the calibration. It mentions using chi-square or t-tests on the SEE (Se) to provide a check on
validation.
It notes that the null hypothesis (predicted values are not statistically different from measured values)
should be checked for each performance indicator. It also suggests an 80/20 split sample for
calibration/validation.
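A minimal sketch of the kind of bias check described here, assuming a paired t-test on the prediction errors of the validation subset (the chi-square check on the SEE mentioned in the LCGuide is not shown):

```python
import numpy as np
from scipy import stats

def check_bias(measured, predicted, alpha=0.05):
    """Null hypothesis: predicted values are not statistically different from measured values.
    Illustrative paired t-test sketch; agencies may use other tests per the LCGuide."""
    errors = np.asarray(predicted, dtype=float) - np.asarray(measured, dtype=float)
    t_stat, p_value = stats.ttest_1samp(errors, popmean=0.0)
    return p_value, (p_value >= alpha)  # True -> no significant bias detected

# Hypothetical 20% validation subset of IRI values, in/mile
p, ok = check_bias(measured=[95, 110, 130, 88, 142],
                   predicted=[101, 104, 138, 92, 150])
print(f"p = {p:.3f}, no significant bias: {ok}")
```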

4.3 General Approach to Local Calibration-Validation


4.3.1 Traditional Approach—Split-Sample
4.3.2 Jack-knife Testing—An Experimental Approach to Refine Model Validation
These sections define standard split-sample and jackknife testing separately and then go on
to recommend a split-sample jackknifing procedure that is not fully described, only referenced
as part of NCHRP Project 9-30.

5.0 COMPONENTS OF THE STANDARD ERROR OF THE ESTIMATE


This section introduces four sources of model error: measurement error, input error, model or lack-of-fit error, and pure error. The relationship between the individual error terms and their dependencies (distress/IRI, input level, and prediction model) is noted. The errors are further defined in the next sections. This section does not clearly describe how these errors are related to calibration.

5.1 Distress/IRI Measurement Error


This error is distress/IRI dependent and differs by distress type based on how the distress is measured (can the measurement be repeated and reproduced, and how variable is the distress). Example: the mean rut depth is an estimate of the true mean value.

5.2 Estimated Input Error


This is the error from estimating a value, such as using two HMA samples to represent a true mean value. The error is described as being composed of three parts: testing error, sampling error, and inherent variation of the material properties.

5.3 Model or Lack-of-Fit Error


Error due to the model (transfer function or mathematical model), caused by inappropriate assumptions, model simplicity, or an inadequate form. Example: the assumption of uniform tire pressure does not reflect reality.

5.4 Pure Error


Normal variation between distress values, dependent on the input level, distress type, and prediction equation. Replicate sections are needed for comparison in order to identify pure error.

6.0 STEP-BY-STEP PROCEDURE FOR LOCAL CALIBRATION


The local calibration flowchart includes 11 steps, which are described in this section. Only the headings are provided here; a more detailed examination of these sections is included in Section 3.2 of this document.
Step 1 – Define Input Levels
Step 2 – Develop Sampling Template
Step 3 – Estimate Sample Size
Step 4 – Select Roadway Projects
Step 5 – Extract Distress Data
Step 6 – Conduct Field and Forensic Investigations
Step 7 – Assess Local Bias of Global Calibration Factors
Step 8 – Eliminate Local Bias
Step 9 – Assess the SEE (Se)
Step 10 – Reduce SEE (Se)
Step 11 – Interpretation of Results

7.0 REFERENCED DOCUMENTS AND STANDARDS


7.1 Referenced Documents
7.2 Test Protocols and Standards
A list of 7 referenced documents, mainly NCHRP reports, and 5 AASHTO test standards on distress measurements (e.g., faulting, IRI).

APPENDIX: EXAMPLES AND DEMONSTRATIONS FOR LOCAL CALIBRATION


A1: Background
• This section notes that the MEPDG was globally calibrated but also provides reasons why local calibration would be beneficial. Data for the examples are from the Kansas DOT for the flexible example and from the Missouri DOT for the JPC example. Specific notes for the flexible and JPC examples are provided under each heading; comments related to both examples are noted here:
• Both examples are based on an ‘expedited time frame’ and do not have enough samples.
• Each shows a ‘simplified’ sampling template, not a recommended template.
• The factorials are unbalanced, and neither example includes replication.
• The estimated number of segments needed is noted for both examples, but how it was computed is not shown.
• The use of 50% reliability, as noted in Step 7, is not clearly stated.
• Wording in the examples repeats wording found in the Local Calibration Guide without adding additional context.
• No examples are provided of how the statistical parameters were computed. The sections chosen had very little distress in some cases.

A2: New Flexible Pavements and Rehabilitation of Flexible Pavements


A2.1 Demonstration 1 -PMS Data and Local Calibration
A2.2 Demonstration 2 – LTPP Data and Local Calibration
A2.3 Summary for Local /Regional Calibration Values

• The example separates PMS data and LTPP data.
• A value for Se/Sy of 0.5 was selected without justification; it was simply noted as low.
• A range for observation timing was noted as acceptable, but an unacceptable value was not identified.
• It appears that the sample size was too small to reject the null hypothesis (A-14); that should have been noted.
• Color coding of the graphs would assist in seeing patterns.
• ANOVA is noted as being used in Step 8, but the Local Calibration Guide does not mention ANOVA.
• Bf1, Bf2, and Bf3, noted on A-24 and A-25, are not calibration coefficients noted in Table 6-1 of the Local Calibration Guide, but they are in the MOP.
• The demonstrations used different input levels and did not discuss the reasons for that.
• A-40 suggests using WINJULEA or an elastic layer program to calculate unbound aggregate stresses; no discussion of this is in the Local Calibration Guide.
• A-48 provides suggestions on how to adjust the bs1 term for rutting; this is not in the body of the Guide.
• Table A2-15 shows calibration factors for out-of-spec material; is that realistic?
• A-53 describes how to adjust bf1, bf2, bf3, and c2 for fatigue cracking; this is not in the body of the Guide.
• A-68 notes to use the local calibration values for fatigue but the global SEE, and does not explain why.

A3: New Rigid Pavements- Jointed Plain Concrete Pavements


A3.1 Demonstration 3 -LTPP and PMS Data and Local Calibration

• The equations noted under Table A3-5 use different nomenclature from the equations in Step 3 of the Guide.
• The discussion of the faulting coefficients (pages A-111 and A-114) and what they affect would be more valuable in the body of the report.
• A-117 recommends that a comprehensive sensitivity analysis be performed but provides no references or other recommendations.

2.2 Minimum Information Necessary to Perform Local Calibration

The minimum information necessary to perform local calibration and whether it is in the current
Local Calibration Guide (LC Guide) is noted in Table 8:

Table 8 - Minimum information necessary to perform Local Calibration

In LC Guide:
• Local Calibration Process Flow Chart (current LC Guide Chapter 6)
• Distress definitions used in PMED

Partially in LC Guide:
• Calibration factors & transfer equations
• Statistical methods and examples

Not in LC Guide:
• Basic description of what all the calibration factors affect/relate to
• National SEE terms to evaluate local calibration (does not now include the current ones)
• Examples of how best to use PMS data that is not exactly as defined
Not all of the calibration factors are included in the current Local Calibration Guide, and very few of the transfer equations are included. The Manual of Practice (MOP) Chapter 5 includes the transfer equations but does not always define the relationships between the calibration factors; in some cases the original NCHRP 1-37A documents and Appendices are required. The statistics that are included in the Guide could be explained better with figures and consistent nomenclature. Statistics that are mentioned but not shown should be presented with examples for clarity.

2.3 General/Summary Comments on Current Local Calibration Guide

The calibration steps described in the Local Calibration Guide have been used by many researchers, and although improvements have been incorporated by others, the basic steps have stood the test of time. With more entities performing local calibration, the connection between practitioner involvement in local calibration and the success of implementation has been acknowledged. Over the last decade, statistical methods and concepts have become more mainstream, so the terminology needs to be updated to address and engage the non-statistician. The Examples provided in the Guide were based on the best data available at the time, but the amount and quality of data used in the Examples over 10 years ago is now considered inadequate. Given these considerations, and since the body of the LCGuide (Sections 1-7) is only 33 pages long (only 16% of the 202-page document, including front matter and back cover) while the Examples make up the bulk of the Guide, it is appropriate to revise and update the Local Calibration Guide.

Some concerns with the current Guide are that it does not lay out the framework/procedure until almost the last section (Section 6) and that it does not contain enough information to perform a local calibration. The Manual of Practice (MOP) is necessary even to set up the equations to do any sort of analysis. The MOP is now included in the AASHTO Pavement ME software as part of the Help screens, but searching for equations and copying them from a help screen is not ideal. Some calibration factors can be adjusted outside the software and others must be adjusted by running the software, but this is not clearly delineated. Specific concerns:

• Written like a research report, not a Guide.
• Two of the sections are replicated in the MOP (Sections 2.3 and 2.4). [It actually appears that when the MOP was updated in 2015, information from these sections of the Local Calibration Guide was added to the MOP.]
• Inconsistent format and nomenclature in many places makes it confusing (‘Users Manual’ and ‘Manual of Practice’ are both used to describe the MOP on the same page; pavement distress prediction models, transfer functions, and performance models are all used interchangeably; the standard error of the estimate changes from Se in the body to SEE in the Appendix).
• Does not include all the calibration factors in the software (at the time of print and now).
• Does not include enough information on how best to change calibration factors or on the interrelation of the calibration coefficients and models (e.g., adjust IRI last).
• Does not include all the transfer function equations that are being calibrated.
• Neither example had enough samples to match what is recommended.
• Both examples note that they were performed in an “expedited manner”.

As such, the Local Calibration Guide is not a stand-alone document, since it relies on the Manual of Practice (MOP, specifically Chapter 5) and the Appendices to the NCHRP 1-37A report. Even though it does reference the MOP and NCHRP 1-37A, it does not clearly state that they are absolutely necessary for understanding and performing local calibration correctly.


The statistics noted in Sections 2, 4, and 5 are mainly traditional statistics, but they could be explained better with some simple graphs and consistent nomenclature. The Examples do not show exactly how the statistics are computed, and in some cases the nomenclature for the statistical terms differs between Section 6 and the Examples in the Appendix.


3. PROPOSED SPECIFIC REVISIONS TO THE LOCAL CALIBRATION GUIDE

The following provides recommended revisions to the Guide for the Local Calibration of the Mechanistic-Empirical Pavement Design Guide. First, general recommendations are presented; then each section of the proposed revised Local Calibration Guide is covered in detail. It is also recommended that the title of the Guide be revised to ‘Guide for the Local Calibration of the AASHTOWare Pavement ME Design Software’.

NOTE: the following are the standard conventions used in the recommendations below. Others can be used in the Guide, as long as they are defined and used consistently:
• MOP is used for the Manual of Practice
• LCGuide is used for the Local Calibration Guide
• LCPMED is used for the new, revised Guide for Local Calibration of
AASHTOWare Pavement ME Design Software

3.1 GENERAL REVISIONS


In an effort to improve the usefulness of the LCPMED, first it should be reorganized to focus on
the calibration steps:

• The Local Calibration Flow Chart (Existing LCGuide Fig 6-1 and 6-2) should be in the
Introduction. Each new Section will be one part of the Step-by-Step procedure (or:
current Section 6 of the LCGuide turns into new Sections 1-11 of the LCPMED)
• New Sections 1-11 (which describe the step-by-step procedure) should provide examples of generic tables and matrices and of statistical values, with computations shown or with a reference to the location of a worked example in an Appendix
• Statistical terms should be described for a more general audience (not statisticians), using graphics where possible and including an example of how to compute each one
• Statistical terms that are used repeatedly should be covered in an Appendix (i.e., the appendix would describe the model error terms, hypothesis testing, calibration, and validation and provide a general example of each, if one is not in the body of the document)
• The Appendices should include:
• A. General “Lessons Learned” in performing local calibration
• B. Statistical definitions and discussions

Existing LCGuide Sections 1-5 are removed. Some parts of these sections can be included in the
new Sections as noted specifically below in the Detailed Recommendations, or part of the new
Appendix B related to statistical terms. Others, including the Examples in the existing Appendix,
will not be included or used.


Each step in the procedure needs more guidance and some specific examples of what is reasonable or expected. Statistical explanations in the body and Appendix B of the document should include figures and graphs to better explain their meaning. The three PMED Local Calibration webinars provided some of these types of examples in both generic and State-specific cases. A very simple example from the PMED webinar, which is not in the body of the existing LCGuide, is shown in Figure 5.

Figure 5 - Hypothesis testing

Generic examples should be used as much as possible in the Guide, since State-specific examples may not always translate well to other conditions/States. Consistent nomenclature is imperative for ease of use and understanding. Using specific terms and calibration constants consistent with what is in the PMED software would provide a needed link to the software. Additional quality control should be performed to assure that the equations and default calibration factors are accurate, as problems with local calibration due to errors in the MOP have been noted in the literature.
Specific State reports are noted in the Detailed Recommendations section below to provide potential examples to be used or considered in the revision. The State is noted in italics and corresponds to the list of State Calibration Reports in the Appendix of the final Review report; these reports will be placed on the AASHTOWare website for ease of access. Additional guidance should be mined from the three PMED local calibration webinars located on the me-design.com website.
It is also recommended that the rewrite of the Guide be performed only by an entity that has actually completed a local calibration effort. The intricacies and interrelationships of the steps and statistical methods can be underestimated and not fully understood by someone who has not been fully involved in a local calibration. The final rewrite should be reviewed by a layperson who is not as familiar with the statistics or the local calibration process.

3.2 DETAILED DESCRIPTION OF NEW LOCAL CALIBRATION GUIDE

The following specific revisions (by proposed new Section) are recommended for the new
LCPMED:

New Title:
Guide for the Local Calibration of the AASHTOWare Pavement ME Design
Software


Introduction
The current description in paragraph 1 needs to be updated, but overall it is appropriate. The other paragraphs need to be revised; the content is too repetitive, too technical/statistical, and in places unclear. This section needs to include the Local Calibration Flowchart. The Introduction should include a description of how Steps 1-6 relate to Steps 7-11 and how setting up Steps 1-6 correctly and judiciously could assist in the inevitable redoing of Steps 7-11 when needed (e.g., after major software changes). A caution on introducing bias into a recalibration through improper site selection is also warranted. A description of how the LCPMED is arranged should be provided.

Section 1 – Select Hierarchical Input Level for Each Input Parameter


The existing description in the LCGuide is acceptable. The input level terms can be described here as in the current Section 2.3 of the LCGuide, or preferably MOP Chapter 4.2 should simply be referenced, since the exact same information is located there.
ADD: Sensitivity analysis should also be considered and described here as it relates to a State-specific calibration. Local typical pavement sections, the range of traffic levels found in the State, and any special climatic or materials conditions should all be identified as part of this step, even though they are not used until Step 5. (Louisiana (Figures 5 and 7) provides good examples of identifying typical pavement sections.) National sensitivity studies such as the NCHRP 1-47 Final Report (or Research Results Digest 372) should be referenced and applicable results noted in this step, especially as related to factors that are sensitive but not as easy to collect (e.g., HMA E* master curve data was found to be sensitive but requires laboratory testing, while JPC slab width was found to be sensitive but is relatively easy to get). It should also be noted that NCHRP 1-47 was based on MEPDG Version 1.1 (2009 release) and that the models in the latest version of AASHTO Pavement ME Design are different; therefore, if more recent sensitivity studies are available, or if a State has a local sensitivity study, those should be consulted and referenced. Michigan noted performing a State-specific sensitivity study in their calibration report (Michigan, Section 2.4.1).
A generic Input Level Table should be presented to provide a consistent method to identify all the factors that should be considered for input levels. The Table should also indicate the values that are largely set (e.g., tire pressure, tire spacing, and traffic wander are almost always going to be input level 3). It should be specifically stated that it is acceptable to mix input levels, since combining LTPP data and PMS data makes that necessary. Reports from Colorado (Table 21), Michigan (Table 3-26), the MOP (Table 5-1), and NCHRP Synthesis 457 (Table 9) should be reviewed for examples of tables to use, but a generic Input Level Table should be provided in the LCPMED.

Section 2- Develop Local Experimental Plan and Sampling Template


The existing description in the LCGuide is acceptable, except for the last two paragraphs (Paragraph 3, which starts with “The sampling template…”, and Paragraph/sentence 4, which starts with “Most cells..”). These paragraphs need to be rewritten with less “Design of Experiments” verbiage, or left in as a note to statisticians with a more general description also provided. Blocking (or blocked) needs to be defined in a more general manner (i.e., grouping conditions that are expected to behave similarly). Sample templates can be used to better describe the idea of a fractional factorial matrix and blocking. An example from the PMED webinar is shown below in Figure 6. Other specific recommendations from the webinar include: try to keep it simple, identify a primary tier, and identify potential unusual factors by looking at your data. Thickness and climate typically lead as primary tiers.

Figure 6- Example of Sampling Template

Replicate projects should be described as helping to minimize the error of the model, using the explanation of replicates from Note 2 of the existing LCGuide.
ADD: A generic Sampling Template should be presented for both flexible and rigid pavements to provide a relatively consistent method to address sampling design. Reports from states such as Missouri (Table 1), Arizona (Tables 11 & 12), Virginia (Tables 4 & 5), and Louisiana (Table 5) should be reviewed for examples of templates to use, but a generic Sampling Template or Templates should be provided in the LCPMED. Kansas (Tables 3.13 and 3.14) had an unusual sampling template based on subgrade Mr; it would also be good to include an example of an unusual template like this and the reasons behind it.

Section 3-Estimate Sample Size for Specific Distress Prediction Models


The existing description in the LCGuide, paragraph 1, is acceptable. The remainder needs to be rewritten with consistent terminology and an example of how to actually compute the sample size (using theoretical data). The formula and guidance in the Guide, shown below, should also be reviewed and revised if necessary based on the following comments.

Equation (6-1) from Local Calibration Guide

The equation noted appears to be related to the classic Cochran sample size formula (Cochran, William G. (1977). Sampling Techniques, 3rd Edition, John Wiley and Sons, New York, NY), which is based on the proportions needed for a representative sample given the estimated proportions expected to be encountered. Cochran's sample size is computed by multiplying a Z² value by the variability (p*q), where the maximum variability could be construed as a threshold, and dividing by the square of the desired precision (or tolerable bias), as shown below.

Cochran's sample size:  n0 = Z² * p * q / e²

It is not clear whether this value recognizes or is affected by the method of validation (i.e., split sample vs. jackknifing). It was shown in one report (Michigan, Table 3-3) that if the same value (e.g., 90%) is used for both the confidence interval and the reliability (which many State reports did), the equation in 6-1 simplifies to the square of the distress threshold divided by the standard deviation of the distress, as noted below.

N ≈ (distress threshold / standard deviation of the distress)²,  if C.I. = reliability     (1)
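A short computed example would make both forms concrete. The following minimal Python sketch uses hypothetical threshold, variability, and precision values; they are for illustration only and are not taken from the LCGuide or any State report.

import math
from scipy.stats import norm

def cochran_sample_size(confidence, variability, precision):
    """Cochran-style sample size: n0 = Z^2 * (p*q) / e^2."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided standard normal deviate
    return math.ceil(z ** 2 * variability / precision ** 2)

def simplified_sample_size(threshold, std_dev):
    """Simplified form when the confidence interval equals the reliability:
    N ~ (distress threshold / standard deviation of the distress)^2."""
    return math.ceil((threshold / std_dev) ** 2)

# Hypothetical values for illustration only
print(cochran_sample_size(confidence=0.90, variability=0.25, precision=0.10))
print(simplified_sample_size(threshold=0.40, std_dev=0.11))   # e.g., total rutting, inches

Either function could be shown in the LCPMED alongside the worked derivation so that a user can reproduce the required sample size for their own thresholds and standard deviations.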

The values included in the current LCGuide example (Table A2-5) do not appear to follow this equation; the N values noted are only the threshold values divided by the Se, not the square of that ratio, which makes the N values much lower. In all of the State reports reviewed, the N computed for flexible pavement IRI is much higher (3 to 4 times) than that needed for the individual distresses of cracking and rutting, but in most cases the IRI value is then noted as being neglected based on the argument that if the distresses are accurately modeled then the IRI will be accurate. This argument needs to be revisited and specifically addressed for reasonableness.
It should also be noted that, using equation (6-1) from the LCGuide, if a State has a lower distress threshold than the nationally calibrated model, the required N for that distress for that State will be lower. At the same time, it has been documented that it is harder to calibrate correctly with low levels of distress, so this result does not appear appropriate; it may require the use of a different equation, or the use of a minimum sample size in all cases, like the “standard minimum” of 30 samples used in most statistics textbooks.
ADD: Many State local calibration reports did reference the minimum values shown on page 6-5 of the current LCGuide. The report that is referenced in the existing LCGuide for these values (Rada, 1999) does not include an example of how they were computed. Since the minimum values given in the LCGuide apparently are the ones being used the most, they should also be re-evaluated, and any stipulations for using them instead of computing a value should be emphasized. The actual distress values and SEE (Se) values that were used in calculating the minimum values should also be included in the LCPMED to provide the ability to judge whether they are applicable to different situations.

Section 4- Select Roadway Segments


The existing description in the LCGuide needs to be updated to reflect the wording used in new Section 2; it should also refer to the new Section 2 template. Since only one report (Guo, 2013) has been found using accelerated pavement testing in local calibration, the value of including the descriptions of experiment test sections 2 and 3 on page 6-6 is questioned, and their removal is recommended. The factors included in the three bullets at the bottom of page 6-6 and the top of page 6-7 were referenced by many reports; although they should be reviewed and updated as necessary, the information should remain part of this section.
ADD: Suggestions or examples of how to select roadway segments should be included. Some State reports identified sections geographically by selecting a number of sites from each District (Virginia), some convened a group of experts (Missouri), one did it through a separate research project that looked at data in the existing PMS (Georgia), and some did not describe how it was done. Geographic Information Systems (GIS) should also be referenced in this section as a tool to assist in project selection. Important factors should be addressed, such as having high enough distress levels and variability of distress, the availability of input values to match the desired input levels, and how State procedures and policies can affect these factors.


Section 5- Extract and Evaluate Distress and Project Data


The existing description in the LCGuide needs to be updated. The Local Calibration of the
MEPDG using Pavement Management Systems (FHWA HIF-11-026) document should be
referenced. The three steps (5.1, 5.2, 5.3) are still applicable but the descriptions need to be
updated also.

Step 5.1: The AASHTO references need to be reviewed and updated to the current standards. The two options under Step 5.1 need to be updated: with the new HPMS requirements, States have already had to identify some method to convert some of their data to LTPP definitions (especially cracking), or they are collecting it specifically in LTPP format for the HPMS sections; either way, this section needs to be updated in light of that information.

Step 5.2: The existing description in the LCGuide appears appropriate, but it does not address
what to do if the maximum distress values are lower than the threshold values. This was
identified in many of the local calibration research reports. Do you reexamine your non-selected
pavements to try to identify pavements with higher distress, at a potential cost of more variability
of input data? Do you specifically identify pavements that can be left unmaintained to gather
additional distress data for the future?
ADD: A recommendation or suggestions should be provided.

Step 5.3: The existing description in the LCGuide is appropriate but does need to be updated, and the last three paragraphs, which are heavy on statistical terms, need to be written in a more general manner. References to sections that are not included in the new LCPMED (e.g., Section 5) should be revised.
ADD: Chapter 9 of the MOP should be referenced for the field test portion. An additional paragraph should be added recommending that the researcher label any time series graphs used for analysis with the location (road number, county), year, and age of the pavement section, and not just a number label and age. DOT personnel may be able to assist in the analysis of outliers if they have these details easily accessible when viewing the graphs. This also warrants a discussion of the need for the modelers and data owners to review and discuss any anomalies in the data together.

Section 6- Conduct Field and Forensic Investigations


Step 6.1: The existing description in the LCGuide is acceptable.

Step 6.2: The existing description in the LCGuide is, in general, appropriate. Taking cores to determine where HMA cracking initiated was identified in the research reports. An issue that should be noted is that it is difficult to confirm bottom-up cracking, whereas top-down cracking can be confirmed. Trenching was rarely identified in the research reports reviewed, potentially due to issues with the rutting model and the changes to the rut model that only recently allowed rutting to be characterized by layer, or due to the cost and inconvenience of trenching. An exception was Colorado: Colorado (Table 29) trenched 3 sites and identified 50-70% of the rutting in the asphalt layers, 5-20% in the aggregate base, and close to 25% in the top 12 inches of the subgrade.


Step 6.3: The existing description in the LCGuide is acceptable but needs to be updated.

ADD: A new Step 6.4 should be added. This step should recommend that the documentation of the sites, the PMED software project designs that are developed, the distress values from Section/Step 5, and any other data related to the local calibration be organized, and that a method be put in place to maintain and update the performance measurements for these sections in preparation for future recalibrations. It should be recognized that, depending on future software modeling changes, the PMED projects themselves may need to be redone for future calibrations (that is the case with some projects developed with PMED version 1.1), but having all the information readily available will make the entire process much easier.

Section 7- Assess Local Bias of Global Calibration Factors


This section needs to be greatly enhanced and revised; Sections 7 and 8 should be the core of the new LCPMED. An example of what this section should contain is in Appendix C.
ADD: The section should start with a new table listing the distresses and transfer functions that are used for each pavement type. Michigan (Table 4-1) provides a good example. Nationally calibrated Se (SEE, or standard error of the estimate) values for each distress should also be included in the new table or in a separate table, as shown in Figure 7 below for flexible pavements.

Figure 7- Global Calibration Factors (From PMED Local Calibration Webinar 3)

The sentence related to using 50% reliability should be more prominent. A general discussion of bias and errors is warranted here, like what is in the existing Section 4, but the details of the statistical components (i.e., existing Sections 2 and 5) should be referenced in an Appendix. Parts of the existing Section 4 (which covers calibration and validation) would be appropriate to lead off this section, including a discussion of the different approaches to calibration (direct vs. incremental damage), but they need to be rewritten for a more general audience.

Step 7.1 should cover each distress model separately, matching the order of the new ‘Distresses and Transfer Functions’ table. The definitions noted in the existing LCGuide Section 2.4 should be reviewed/updated and moved to this section, but included with each distress discussion, not all in one place. See the example for the JPC transverse cracking distress in Appendix C.
ADD: An example of how to compute the residual errors, bias, and SEE (Se) should be included, with graphs to support the examples.
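As an illustration of the kind of worked example the LCPMED could include, the following minimal Python sketch computes the residual errors, bias, and one common form of the SEE (Se) from hypothetical measured and predicted distress values; the numbers are invented for illustration only.

import numpy as np

# Hypothetical measured and predicted distress values (e.g., total rutting in inches)
measured  = np.array([0.18, 0.25, 0.31, 0.22, 0.40, 0.15, 0.28, 0.35])
predicted = np.array([0.21, 0.24, 0.27, 0.30, 0.33, 0.19, 0.26, 0.29])

residuals = measured - predicted          # residual error for each section
bias = residuals.mean()                   # average residual (systematic over/under-prediction)
see = np.sqrt((residuals ** 2).sum() / (len(residuals) - 1))   # one common form of the SEE (Se)

print(f"bias = {bias:.3f} in., SEE (Se) = {see:.3f} in.")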

Step 7.2 should provide an example of each hypothesis test (Difference = 0, Intercept = 0, Slope = 1) for one of the distresses noted in Step 7.1, and it should provide an example of a paired t-test for IRI. The meaning of the p-values should be addressed. A format to document the results should also be provided, like that shown in Figure 8.

Figure 8 - Potential format for Step 7.2 results (From PMED Local Calibration Webinar 3)
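A minimal sketch of the Step 7.2 hypothesis tests is shown below (Python, reusing the hypothetical arrays from the Step 7.1 sketch). The paired t-test checks whether the mean difference is zero, and the regression of measured on predicted supports the intercept = 0 and slope = 1 checks; note that the slope must be tested against 1, not 0.

import numpy as np
from scipy import stats

# Hypothetical measured and predicted values (same arrays as in the Step 7.1 sketch)
measured  = np.array([0.18, 0.25, 0.31, 0.22, 0.40, 0.15, 0.28, 0.35])
predicted = np.array([0.21, 0.24, 0.27, 0.30, 0.33, 0.19, 0.26, 0.29])

# Paired t-test: null hypothesis is that the mean (measured - predicted) difference equals zero
t_stat, p_diff = stats.ttest_rel(measured, predicted)

# Regress measured on predicted; for an unbiased model the intercept should be ~0 and the slope ~1
slope, intercept, r_value, p_slope_vs_zero, std_err = stats.linregress(predicted, measured)
# Note: linregress's p-value tests the slope against 0; here the slope is tested against 1 instead
t_slope = (slope - 1.0) / std_err
p_slope_vs_one = 2 * stats.t.sf(abs(t_slope), df=len(measured) - 2)

print(f"paired t-test p-value = {p_diff:.3f}")
print(f"intercept = {intercept:.3f}, slope = {slope:.3f}, p(slope = 1) = {p_slope_vs_one:.3f}")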

Section 8- Eliminate Local Bias


The existing description in the LCGuide should be revised to be clearer. Graphs showing the three different possibilities (1. biased errors, 2. high standard error (low precision), and 3. trends in the errors) would be beneficial. This section should provide an example of how to adjust the calibration factors for each possibility. The current discussion, which notes changing the “local calibration coefficient” for case 1 and the “coefficient of the prediction equation” for case 2, is unclear. It appears to relate to the two approaches noted in the LCGuide webinar (see Figure 9). An example of what this section should contain for JPC transverse cracking is in Appendix C.

Figure 9 - Step #8 clarification

ADD: Clarify the two approaches noted and when each is appropriate. PMED Webinar 3 did this well for JPC transverse cracking, where the basis of C1 and C2 was defined and it was recommended to leave those values and only change C4 and C5. (See Figure 10)

Figure 10 - JPCP Calibration factors from PMED Webinar 3

This section should also include a discussion of using graphing software to look for trends in the data, to assist in identifying cases where potentially different behaviors can be seen graphically, like the one shown in Figure 11 (Figure A2-17 of the existing LCGuide).
seen graphically like the one shown in Figure 11 (Figure A2-17 of the existing LCGuide).

Figure 11 – Identifying Trends in Data

The new construction projects shown as diamonds in Figure 11 appear to have two distinct patterns: one that overpredicts and one that underpredicts rutting. If the individual projects can be identified graphically by different factors (e.g., thickness, construction date, mix type), or if the individual projects can be identified and discussed with the DOT personnel involved in their maintenance or construction, the underlying cause of the differences may become clear.
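A short plotting sketch like the following could illustrate this kind of trend check in the LCPMED (Python/matplotlib, with hypothetical residuals grouped by an assumed project attribute such as mix type):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical predicted rutting, residuals (measured - predicted), and a grouping attribute
predicted = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.12, 0.18, 0.24, 0.28, 0.33])
residual  = np.array([0.05, 0.06, 0.08, 0.07, 0.09, -0.04, -0.05, -0.06, -0.07, -0.08])
mix_type  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Color the residuals by mix type to see whether two distinct trends are present
for group, color in [("A", "tab:blue"), ("B", "tab:orange")]:
    mask = mix_type == group
    plt.scatter(predicted[mask], residual[mask], color=color, label=f"Mix type {group}")

plt.axhline(0.0, color="gray", linestyle="--")
plt.xlabel("Predicted rutting (in.)")
plt.ylabel("Residual, measured - predicted (in.)")
plt.legend()
plt.show()

If the two groups separate cleanly above and below the zero line, the grouping factor is a good candidate for discussion with the DOT or for separate treatment in the calibration.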
This section also needs to cover each distress model separately, in the same order as shown in Section 7, but this time concentrating on the calibration factors that should be adjusted for a given outcome. Table 6-1 of the existing LCGuide provides some guidance, but additional guidance is needed, like that found in the existing LCGuide on page A-48 for the rutting coefficients, page A-53 for the fatigue cracking coefficients, pages A-111 and A-114 for the faulting coefficients, and page A-57 for the fact that IRI needs to be adjusted after adjusting the other distresses. Also, the existing Table 6-1 does not include all the calibration factors, and the difference between coefficients that are used in local calibration (Option 1 in Figure 9) and coefficients that should only be changed due to material-related properties based on laboratory testing (Option 2 in Figure 9) needs to be clearly discussed and differentiated.
Computing new standard deviation values for the distresses is not discussed in the current LCGuide. It was discussed in PMED Webinar 3, and the method and reasoning should be added to the LCPMED. Figure 12 is from that portion of Webinar 3.


Figure 12 - Computation of standard deviation of the distress (from PMED Webinar 3)
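The exact computation used in Webinar 3 is not reproduced here, but one way the LCPMED could demonstrate refitting a standard deviation model is sketched below: a power-law form matching the style documented for several distresses (a*POW(RUT,b) + c) is fit to hypothetical absolute residuals from the locally calibrated model (Python; all values and the use of absolute residuals as the dependent variable are assumptions for illustration).

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical predicted rutting values and the absolute residuals of the locally calibrated model
predicted = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
abs_resid = np.array([0.02, 0.03, 0.05, 0.05, 0.07, 0.08, 0.08, 0.10])

# Power-law form used for several distress standard deviations, e.g. a*POW(RUT,b) + c
def std_model(x, a, b, c):
    return a * np.power(x, b) + c

params, _ = curve_fit(std_model, predicted, abs_resid, p0=[0.2, 0.8, 0.001])
a, b, c = params
print(f"local standard deviation model: {a:.4f}*POW(RUT,{b:.4f}) + {c:.4f}")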

Section 9- Assess the Standard Error, SEE (Se)


The existing description in the LCGuide needs to be revised to match the verbiage changes made in Section 7. Reference should be made to the new table in Section 7 that includes the SEE (Se) values. This section should provide an example of a hypothesis test that would be performed. A format to document the results should also be provided, similar to, but not exactly like, that shown in Figure 13. The table should include the national/global values as a comparison.

Figure 13 – Table of SEE (Se) values
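One form the Section 9 example could take is a chi-square comparison of the locally calibrated SEE (Se) against the global value, sketched below with hypothetical numbers; whether this is the exact test intended by the LCGuide should be confirmed when the section is rewritten.

from scipy import stats

n = 25             # hypothetical number of calibration observations
local_se = 0.09    # SEE (Se) of the locally calibrated model (hypothetical, inches)
global_se = 0.107  # global SEE for the same distress (hypothetical, inches)

# Chi-square statistic comparing the local variance to the global variance
chi2_stat = (n - 1) * local_se ** 2 / global_se ** 2

# Lower-tail p-value: how unusual a ratio this small would be if the true variance were the global one
p_lower = stats.chi2.cdf(chi2_stat, df=n - 1)

print(f"chi-square = {chi2_stat:.2f}, lower-tail p-value = {p_lower:.3f}")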

Section 10- Reduce the Standard Error, SEE (Se)


The existing description in the LCGuide needs to be revised to match the changes made in Section 7. An example would potentially make this section clearer. Currently it appears to recommend looking more closely at each category in the sampling matrix and readjusting the calibration factors to see if the fit improves.

Step 10.1: This step needs to be rewritten and tied to the changes in Section 7.

Step 10.2: This step needs to be rewritten using wording from Section 2 (‘blocking’ is used here)
and tied to the changes in Section 7.


Step 10.3: The intent of Step 10.3 is unclear. Does it mean to go back to Step 8 (Section 8) and recalibrate? If so, it should be written that way.

Section 11- Interpretation of Results


The existing description in the LCGuide is unclear. How do you evaluate the SEE (Se) for each distress at different reliability levels? How do you determine the expected design life? It does not say to run a number of sections using the local calibration factors and compare the results to the nationally calibrated factors, but that is one way to assess the reasonableness of the results. Running parallel designs alongside the current design method for a period of time prior to implementation is another way. In the LCGuide Examples, this step appeared to be more of a practical analysis of reasonableness than a statistical analysis. Some clarity on the intent of this step and the methods to perform it is needed in the LCPMED.

Appendix A – Lessons Learned


Appendix A should contain some specific examples based on experiences or novel methods used
by States in their Local calibration, such as:

• How local calibration has driven changes in the software models, and therefore sharing
local calibration information is a necessary part of continuous improvement of the
software
• How State-collected cracking data for both flexible and rigid pavements have been adjusted to fit the PMED definitions (all States have had to address this, at least for flexible cracking)
• How using cut-off values or defaults in PMS can affect local calibration, such as
Louisiana and faulting (Louisiana, page 55-59)
• Different specific methods or procedures that have been used in performing local
calibration
o Purdue/Indiana’s method of using a Grid Search method to calibrate and
identify optimum sample size
o Iowa’s use of a sensitivity index (Iowa, page 189)
o Michigan’s use of transverse profiles to assist in rutting prediction
(Michigan, page 96-102)
o Louisiana’s simplification of the rutting model (Louisiana, page 65)

Appendix B – Statistical Reference


• Calibration and validation methods. Do not just use the existing Section 4; the pertinent information in the existing Section 4 should be here, but rewritten in a format that is consistent with new Section 7 and more appropriate for a general audience. This section should also provide a recommendation: it seems that jackknifing would be more appropriate for the small sample sizes associated with Level 1 and 2 inputs, and split sampling for the large sample sizes that would rely on PMS data and Level 2 and 3 inputs (see the sketch following this list), but someone with the necessary statistical knowledge should develop the recommendations provided here, and possibly even consider bootstrapping or the grid search method used by Indiana/Purdue


• Derivation/discussion of the Sample size based on new Section 3


• Hypothesis testing basic discussion
• Regression basics and standard error terms, including graphs of measured vs. predicted values and of residuals, with the graphs described in relation to new Section 8 (local bias). The NCAT (Figure A) and Michigan (Figure 4-1, page 92) reports both provide good examples of graphically describing error, precision, and bias.
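To make the calibration/validation recommendation concrete, Appendix B could include a short sketch like the following (Python, with hypothetical section IDs), contrasting an 80/20 split-sample with leave-one-out jackknifing; it illustrates only the sampling logic, not the full NCHRP 9-30 procedure.

import numpy as np

rng = np.random.default_rng(42)
sections = np.arange(40)          # hypothetical calibration section IDs

# 80/20 split-sample: calibrate on 80% of the sections and validate on the remaining 20%
shuffled = rng.permutation(sections)
n_cal = int(0.8 * len(sections))
calibration_set, validation_set = shuffled[:n_cal], shuffled[n_cal:]
print(len(calibration_set), "calibration sections,", len(validation_set), "validation sections")

# Leave-one-out jackknife: each section is withheld once while the coefficients are refit on the rest
for withheld in sections:
    refit_set = sections[sections != withheld]
    # ...refit the calibration coefficients on refit_set, then predict the withheld section...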


REFERENCES

a. References included in Task 422 Report (Alphabetical)

American Association of State Highway and Transportation Officials (AASHTO). 2008. Mechanistic-Empirical
Pavement Design Guide: A Manual of Practice, Interim Edition. Washington, DC.

American Association of State Highway and Transportation Officials (AASHTO). 2010. Guide for the Local
Calibration of the Mechanistic-Empirical Pavement Design Guide. Washington, DC.

Chow, L.C., Mishra, D. and Tutumluer, E. 2014. Aggregate Base Course Material Testing and Rutting Model
Development. FHWA/NC/3013-18. North Carolina DOT. Raleigh, N.C.

Mu, F., Mack, J.W. and Rodden, R.A. 2016. Review of National and State Level Calibrations of the
AASHTOWare Pavement ME Design for New Jointed Plain Concrete Pavement. International Journal of
Pavement Engineering.

Pierce, L.M. 2018. AASHTO Pavement ME National Users Group Meetings. Technical Report: Second Annual
Meeting-Denver, CO. FHWA. Washington, D.C.

Pierce, L.M., and McGovern, G. 2014. Implementation of the AASHTO Mechanistic-Empirical Pavement
Design Guide and Software: A Synthesis of Highway Practice. NCHRP Synthesis 457. Transportation Research
Board of the National Academies, Washington, D.C.

Von Quintus, H.L., Mallela, J., Sadasivam, S., and Darter, M. 2013. Literature Search and Synthesis—
Verification and Local Calibration/Validation of the MEPDG Performance Models for Use in Georgia.
GADOT-TO-01-Task 1. Georgia Department of Transportation, Forest Park, GA.

Wu, Z., Yang, X & Zhang, Z. 2013. Evaluation of MEPDG flexible pavement design using pavement
management system data: Louisiana experience, International Journal of Pavement Engineering, 14:7, 674-685,
DOI: 10.1080/10298436.2012.723709

b. State Local Calibration Research Reports (by State)

The following 36 states were identified as having performed a local calibration, being in the process of performing a local calibration, or having a local calibration project noted in the Transportation Research Board Research in Progress (RIP) database. Under each State name, the authors of the latest local calibration research report are noted. The date next to the State name is the published date of the latest report. The software version that was used for local calibration is identified if it was included in the report. The actual local calibration research reports have been located for the states marked REPORT (19 of the 36). These reports will be placed on the AASHTOWare site for future reference.

ARIZONA (2014) REPORT


ARA (Michael I. Darter, Leslie Titus-Glover, Harold Von Quintus, Biplab B. Bhattacharya, and
Jagannath Mallela). Calibration and Implementation of the AASHTO Mechanistic-Empirical


Pavement Design Guide in Arizona. [DARWin-ME, performed in 2012]

ARKANSAS (In TRB RIP database -Survey also noted that it was in process)
University of Arkansas (Kevin Hall) Local Calibration of the MEPDG (TRC-1003)

COLORADO (2013) REPORT


ARA (Jagannath Mallela, Leslie Titus-Glover, Suri Sadasivam, Biplab Bhattacharya, Michael
Darter, and Harold Von Quintus). Implementation of the AASHTO Mechanistic Empirical
Pavement Design Guide for Colorado. [AASHTO Pavement ME 1.0]

FLORIDA (2008) REPORT


TTI (Jeongho Oh and Emmanuel G. Fernando). Development of Thickness Design Tables Based
on the M-E PDG. Tallahassee, Florida. [MEPDG Version 0.8]

GEORGIA (2014) REPORT


ARA (Mr. Harold L. Von Quintus, P.E., Dr. Michael I. Darter, P.E., Dr. Biplab Bhattacharya,
P.E., and Mr. Leslie Titus-Glover). Calibration of the MEPDG Transfer Functions in Georgia,
Task Order 2 Report. Forest Park, GA. [AASHTO Pavement ME Version 1]

HAWAII (from TRB RIP)


University of Hawaii, Manoa (Archilla, Adrian) Updating of the State Pavement Management
System and Calibration of the 2002 Design Guide for Hawaiian conditions. (noted as starting in
2005)

IDAHO (from TRB RIP- in progress)


University of Idaho, Moscow (Bayomy, Fouad) Calibration of the MEPDG Performance Models
for Flexible Pavements in Idaho. RP235 (started 2015, in progress)
University of Idaho, Moscow (Bayomy, Fouad, Kassem, Emad and Muftah, Ahmed) Calibration
of the MEPDG Performance Models for PCC Pavement in Idaho RP268 (started 2017, in
progress)

INDIANA (from TRB RIP – in progress)


Calibration is noted in 2018 survey as being in NCHRP Report. TRB RIP notes two projects:
Purdue University/IDOT (Haddock, John) SPR-4212: Structural Evaluation of Full-Depth
Flexible Pavement Using APT.(started in 2018)
Purdue University/IDOT (McCullouch, Bob, Lee, Ju Sang, Nantung, Tommy and Chun,
Hyonho) SPR-3711: MEPDG Implementation (Validation/Model Calibration/Acceptable
Distress Target/IRI Failure Trigger/Thermal Selection/Binder Selection) and Climate Data
Generation (started in 2013, 2018 completion date)

IOWA (2015) REPORT


Iowa State University (Halil Ceylan, Sunghwan Kim, Orhan Kaya, and Kasthurirangan
Gopalakrishnan). Investigation of AASHTOWare Pavement ME Design/DARWin-ME
Performance Prediction Models for Iowa Pavement Analysis and Design. InTrans Project 14-
496. Ames, IA. [ AASHTO Pavement ME Version 2.1.24 - Report is an update to a previous
calibration that used DARWin-ME Version 1.1]


KANSAS (2015) REPORT


University of Kansas (Xiaohui Sun, Jie Han, Ph.D., P.E., Robert L. Parsons, Ph.D., P.E., Anil
Misra, Ph.D., P.E., Jitendra K. Thakur) Calibrating the Mechanistic-Empirical Pavement Design
Guide for Kansas. Report KS-14-17. Topeka, KS. [AASHTO Pavement ME Version 1.3]
Also the flexible example in the local calibration guide and 2009 Report KSU-04-4.

KENTUCKY (In TRB RIP database, survey noted they have not calibrated)
University of Kentucky (Graves, L). “Local Calibration and Strategic Plan for Implementation
of AASHTO MEPDG”, “AASHTO MEPDG Calibration Continuation“ and “MEPDG
Implementation” (three projects listed in RIP, latest starting date noted as 2009)

LOUISIANA (2016) REPORT


LTRC (Zhong Wu, Ph.D., P.E., and Danny X. Xiao, Ph.D., P.E.) Development of DARWin-ME
Design Guideline for Louisiana Pavement Design. Report FHWA/LA.11/551. Baton Rouge, LA.
[AASHTO Pavement ME 2.0]

MAINE
(Noted in Survey that they are currently calibrating in-house)

MICHIGAN (2017) REPORT


Michigan State University (Syed Waqar Haider, Gopikrishna Musunuru, M. Emin Kutay,
Michele Antonio Lanotte and Neeraj Buch). Analysis of Need for Recalibration of Concrete IRI
and HMA Thermal Cracking Models in Pavement ME Design. Report SPR-1668 (AASHTO
Pavement ME Versions 2.0, 2.2 and 2.3)
Michigan State University. 2015. (Syed Waqar Haider, Neeraj Buch, Wouter Brink, Karim
Chatti and Gilbert Baladi). Preparation for Implementation of the Mechanistic-Empirical
Pavement Design Guide in Michigan, Part 3: Local Calibration and Validation of Performance
Models. Report RC-1595. Lansing, MI. [Appears to be AASHTO Pavement ME Version 1.3]

MINNESOTA
Noted in survey only calibrated for simplified rigid pavements.

MISSISSIPPI
Survey noted that only a preliminary calibration was performed and the final report is still in
review and not yet available

MISSOURI (2009 and ongoing in TRB RIP) REPORT


ARA (Jagannath Mallela, Leslie Titus-Glover, Harold Von Quintus, Michael Darter, Mark
Stanley and Chetana Rao). Implementing the AASHTO Mechanistic Empirical Pavement Design
Guide in Missouri, Vol II: MEPDG Model Validation and Calibration. MODOT Study R104-
002. [Not defined in report but assumed to be original MEPDG 1.0 software due to date of
report] Also the PCC example in the local calibration guide.
Existing Project from RIP: RAO Research and Consulting (Rao, Chetana) MEPDG Local
Calibration (started in 2016)


MONTANA (2007) REPORT


ARA and Fugro (Harold Von Quintus and James Moulthrop). Mechanistic-Empirical Pavement
Design Guide Flexible Pavement Performance Prediction Models for Montana. Project HWY-
30604 DT. [Not defined in report but assumed to be original MEPDG 1.0 software due to date of
report]

NEBRASKA
ARA (Survey noted they are currently calibrating with ARA)

NEVADA
(Noted in the Survey that they had calibrated but that the reports are not available)

NEW MEXICO (2012) REPORT


University of New Mexico (Rafiqul A. Tarefder, Nasrin Sumee, Jose I. Rodriguez, Sriram
Abbina, and Karl Benedict). Development of a Flexible Pavement Database for Local
Calibration of MEPDG. Report NM08MSC-02. [MEPDG Version 1.0 (MEPDG 2010)]

NORTH CAROLINA (2007) REPORT


North Carolina State University (Y. Richard Kim, Fadi M. Jadoun, T. Hou, and N. Muthadi)
Local Calibration of the MEPDG for Flexible Pavement Design. Report FHWA\NC\2007-07.
[MEPDG Version 1.1]

NORTH DAKOTA
(Noted in the Survey that they had calibrated but did not provide any other information)

OHIO (2009) REPORT


ARA (Leslie Titus Glover and Jagannath Mallela) Guidelines for Implementing NCHRP 1-37A
M-E Design Procedures in Ohio: Volume 4—MEPDG Models Validation & Recalibration.
FHWA/OH-2009/9D.Columbus, OH. [MEPDG Version 1.0]

OKLAHOMA (2011)
University of Oklahoma (Hossain, Musharraf Zaman, Curtis Doiron, Steven Cross) Development
of flexible pavement database for local calibration of MEPDG final report. ODOT SPR No 2209.
Oklahoma City, OK. [AASHTO Pavement ME Version 1.1 based on 2015 TRB paper] {Do Not
have copy of report}

OREGON (2013) REPORT


Iowa State University (Dr R. Chris Williams and R. Shaidur) Mechanistic-Empirical Pavement
Design Guide Calibration for Pavement Rehabilitation. SPR 718. [DARWin-ME noted in report,
appears to have started in 2011]

PENNSYLVANIA REPORT to be available in June 2018


Noted in survey that they would have reports available.


SOUTH CAROLINA (in progress)


University of South Carolina (R. L. Baus, N. R. Stires) Mechanistic-Empirical Pavement Design
Guide Implementation FHWA-SC-10-01 [MEPDG Version 1.003 noted as used for rigid and
MEPDG 1.10 used for flexible] This report is a Sensitivity analysis and not a full calibration
(Noted in Survey that they are calibrating now)

TENNESSEE (2015)
University of Tennessee (Baoshan Huang and Xiang Shu) Summary for RES2013-33 and Thesis
located, no report. [Student’s Thesis noted that MEPDG Version 1.1 was used]

TEXAS
(Survey noted that the University of Texas/San Antonio and ARA are currently calibrating)

UTAH (2009) REPORT


ARA (Michael I. Darter, Leslie Titus-Glover, and Harold L. Von Quintus) Implementation Of
The Mechanistic-Empirical Pavement Design Guide In Utah: Validation, Calibration, And
Development Of The UDOT MEPDG User’s Guide. UT-09.11 [MEPDG Version 1.0]

VERMONT (from TRB RIP, survey noted a calibration was done with PMS data)
Vermont Agency of Transportation (In-house researcher Nick Meltzer) SPR 711: Correlating
ME-PDG with Vermont Conditions, Phase II. (start date is 2010)

VIRGINIA (2015) REPORT


VCTIR (Bryan Smith, P.E., and Harikrishnan Nair, Ph.D., P.E.) Development of Local
Calibration Factors and Design Criteria Values for Mechanistic-Empirical Pavement Design.
FHWA/VCTIR 16-R1. [AASHTO Pavement ME Version 1.3]

WASHINGTON STATE (2011) REPORT


University of Washington (Jianhua Li, Jeff S. Uhlmeyer, Joe P. Mahoney, Stephen T. Muench)
Use of the 1993 Guide, MEPDG and Historical Performance to Update the WSDOT Pavement
Design Catalog. Seattle, WA [Recalibrated to DARWin-ME Version 1.0 under this project.
Report notes that WSDOT calibrated rigid with MEPDG V 0.6 in 2005 and flexible with Version
1.0 in 2008]

WISCONSIN (2012) REPORT


Marquette University (James A. Crovetti & Kathleen T Hall) Local Calibration of the
Mechanistic Empirical Design Software for Wisconsin. Madison, WI. [MEPDG Version 1.1]

WYOMING (2015) REPORT


University of Wyoming (Taylor Kasperick and Khaled Ksaibati) Calibration of the Mechanistic-
Empirical Pavement Design Guide for Local Paved Roads in Wyoming. Mountain Plains
Consortium MPC 15-294. [DARWin-ME Version 1.1 Build 1.1.32 12/20/2011 identified in
software screen shots]


c. National/Other Reports related to Local Calibration (by Area and Date published)

AASHTO Pavement ME National User Group Meetings-2 Denver, Colorado 2017

AASHTO Pavement ME National User Group Meetings- 1 Indianapolis, Indiana 2016

FHWA-HIF-15.021. AASHTO MEPDG Regional Peer Exchange Meetings (Pierce, L. and


Smith, K. 2015)

FHWA-HIF-11-026. Local Calibration of the MEPDG Using Pavement Management Systems.


(2010)

NCHRP 20-07/Task 327 Report (2014) Developing Recalibrated Concrete Pavement


Performance Models for the Mechanistic-Empirical Pavement Design Guide. (Unofficial report)

NCHRP Synthesis 457 (2014) Implementation of the AASHTO Mechanistic-Empirical


Pavement Design Guide and Software (Pierce, L.M., and McGovern, G.)

NCHRP Report 719 (2012) Calibration of Rutting Models for Structural and Mix Design

NCHRP RRD 308 (2006) Changes to the Mechanistic-Empirical Pavement Design Guide
Software Through Version 0.900. NCHRP Project 1-40D.

NCAT 17-07, (2017) Summary of local calibration efforts for flexible pavements

NCAT 17-08, (2017) Impact of local calibration, foundation support, and design and reliability
thresholds

d. Thesis Reports Related to Local Calibration

Five student theses (3 PhD and 2 MS) related to local calibration were identified. Their references are provided below.

Abdullah, Ali Qays. (2015) Development Of A Simplified Flexible Pavement Design Protocol
For New York State Department Of Transportation Based On AASHTO ME Pavement Design
Guide. University of Texas at Arlington. PhD diss.


Guo, Xiaolong. (2013) Local Calibration of the MEPDG Using Test Track Data. Auburn
University. MS thesis.

Nabhan, Peter. (2015) Calibration of the AASHTO MEPDG for Flexible Pavements to Fit
Nevada’s Conditions. University of Nevada, Reno. MS thesis.

Rahman, Md Shaidur. (2014) Local calibration of the MEPDG prediction models for pavement
rehabilitation and evaluation of top-down cracking for Oregon Roadways. Iowa State University.
PhD diss. http://lib.dr.iastate.edu/etd/14295

Zhou, Changjun, (2013) Investigation into Key Pavement Materials and Local Calibration on
MEPDG. University of Tennessee. PhD diss. http://trace.tennessee.edu/utk_graddiss/2504


APPENDIX A
Global Calibration Factors (MOP 2008 and MOP 2015) and Equations
(MOP 2015)
The values for the global calibration factors over time, and the latest transfer equations affected by these factors, are included in this Appendix. Tables A-1 to A-5 show the documented flexible global calibration factor values in the two versions of the Manual of Practice (MOP), 2008 and 2015, and the values in the current software. Tables A-6 to A-8 show the documented JPC global calibration factor values for the MOP versions and also two sets of values identified for the Task 288 NCHRP recalibration project; the Task 327 values are the same as those in the current software. Tables A-9 and A-10 show the documented CRC global calibration factor values for the MOP versions and a version found in the Task 327 NCHRP recalibration report. As shown, the values have changed, and calibration values have been added. Changes in individual calibration factors are shown as shaded cells; a cell is only shaded the first time it changes (e.g., in Table A-2, the Coarse-Grained ks1 factor changed from the 2008 MOP to the 2015 MOP but is the same for the 2015 MOP and the software, so it is only shaded in the MOP 2015 column).
These calibration factor values are included in Chapter 5 of the MOP and are somewhat noted in the Examples in the Local Calibration Guide, but they are not summarized as they are here in either document. It is not even clear in the Local Calibration Guide which factors are important and what their influence is, since the equations that use the calibration factors are not included in the Local Calibration Guide document. The Examples in the Appendix of the Local Calibration Guide do provide some insight into the calibration factors and their influence, but this information is buried within the examples and not easily found.

FLEXIBLE PAVEMENTS

Table A1—HMA/AC Rutting

Transfer Function Coefficient | Global Value (MOP 2008) | Global Value (MOP 2015) | Current Software*
k1r | -3.35412 | -3.35412 | -3.35412
k2r | 0.4791 | 0.4791** | 1.5606
k3r | 1.5606 | 1.5606** | 0.4791
B1r | 1.0 | 1.0 | 1.0
B2r | 1.0 | 1.0 | 1.0
B3r | 1.0 | 1.0 | 1.0
*The current software has the ability to use different values for 3 different layers and notes a Std Dev of 0.24*POW(RUT,0.8062) + 0.001
**The MOP 2015 notes in the Preface (Page v1) that these values have changed to what is in the current software, but they are not changed in the body of the document (pg 39)


Table A2—Unbound Layer Rutting

Transfer Function Coefficient | Global Value (MOP 2008) | Global Value (MOP 2015) | Current Software*
Coarse-Grained, ks1 | 1.673 | 2.03 | 2.03
Coarse-Grained, Bs1 | | | 1
Fine-Grained, ks1 | 1.35 | 1.35 | 1.35
Fine-Grained, Bs1 | | | 1
*The current software also notes two Std Dev values:
Coarse: 0.1477*POW(BRUT,0.6711) + 0.001
Fine: 0.1235*POW(BRUT,0.5012) + 0.001

Table A3—HMA/AC Bottom-Up Alligator/Fatigue Cracking

Transfer Function Coefficient | Global Value (MOP 2008) | Global Value (MOP 2015) | Current Software*
kf1 | 0.007566 | 0.007566 | 0.007566
kf2 | -3.9492 | +3.9492 | +3.9492
kf3 | -1.281 | +1.281 | +1.281
bf1 | 1.0 | 1.0 | 1.0
bf2 | 1.0 | 1.0 | 1.0
bf3 | 1.0 | 1.0 | 1.0
C1bottom | 1.00 | 1.00 | 1.00
C2bottom | 1.00 | 1.00 | 1.00
C4bottom/C3 | 6000 | 6000 | 6000
C1top | 7.00 | 7.00 | 7.00
C2top | 3.5 | 3.5 | 3.5
C4top | 1000 | 1000 | 1000
C3top | | | 0
*The current software also notes two Std Dev values:
Top: 200 + 2300/(1 + exp(1.072 - 2.1654*LOG10(TOP + 0.0001)))
Bottom: 1.13 + 13/(1 + exp(7.57 - 15.5*LOG10(BOTTOM + 0.0001)))


Table A4—HMA/AC Thermal Transverse Cracking

Transfer Function Coefficient | Global Value (MOP 2008) | Global Value (MOP 2015) | Current Software
Bt1 | 400 | 400 |
kt | Level 1 = 5.0; Level 2 = 1.5; Level 3 = 3.0 | Level 1 = 1.5; Level 2 = 0.5; Level 3 = 1.5 |

Table A5—HMA IRI

Calibration Factor | Global Value (MOP 2008) | Global Value (MOP 2015) | Current Software
C1 (Rut) | 40.8 | 40.0 | 40.0
C2 (LCracking) | 0.575 | 0.400 | 0.400
C3 (TCracking) | 0.0014 | 0.008 | 0.008
C4 (SiteFactor)* | 0.00825 | 0.015 | 0.015
Flex o/PCC C1 | 40.8 | 40.8 |
Flex o/PCC C2 | 0.575 | 0.575 |
Flex o/PCC C3 | 0.0014 | 0.0014 |
Flex o/PCC C4 | 0.00825 | 0.00825 |
SEE | 18.9 in/mile | 18.9 in/mile |
SEE (Flex o/PCC) | 9.6 in/mile | 9.6 in/mile |
*Note: the SF (site factor) equation changed from the 2008 to the 2015 MOP


RIGID PAVEMENTS

Table A-6—JPCP Mid-Slab (Transverse) Cracking

Transfer Function    Global Value           Global Value                               Current
Coefficient          (MOP 2008)             (MOP 2015)      Task 288     NNC*          Software
C1                   2.0                    2.0             2            2             2
C2                   1.22                   1.22            1.22         1.22          1.22
C4                   Not defined (1)        1.00            0.6          0.6           0.52
C5                   Not defined (-1.98)    -1.98           -2.05        -2.05         -2.17

Standard Deviation:
MOP 2008: -0.00198*POW(CRACK,2) + 0.56857*CRACK + 2.76825
MOP 2015: POW(5.3116*CRACK, 0.3903) + 2.99
Task 288/NNC: POW(57.08*CRACK, 0.33) + 1.5
Current Software: 3.522*POW(CRACK, 0.3415) + 0.75

*Task 288 values shown are as defined in the Task 327 report; NNC was defined as the Task 288 values in a paper (Mu et al. 2016); the current software uses the Task 327 values.

Table A-7—JPCP Faulting

Transfer Function    Global Value      Global Value                                      Current
Coefficient          (MOP 2008)        (MOP 2015)       Task 288*      NNC*              Software
C1                   1.29              1.0184           0.5104         1.252632          0.595
C2                   1.1               0.91656          0.00838        1.1273688         1.636
C3                   0.001725          0.0021848        0.001475       0.0026875         0.00217
C4                   0.0008            0.000883739      0.008345       0.001086951       0.00444
C5                   250               250              5999           250               250
C6                   0.4               0.4              0.8404         0.4               0.47
C7                   1.2               1.83312          5.9293         9.1               7.3
C8                   Defined as 400    Defined as 400   400            400               400

Standard Deviation:
MOP 2008: [0.0097*FAULT(t)]^0.5178 + 0.014
MOP 2015: (0.00761*FAULT(t) + 0.000008099)^0.445
Current Software: 0.07162*POW(FAULT, 0.368) + 0.00806

*Task 288 values shown are as defined in the Task 327 report; NNC was defined as the Task 288 values in a paper (Mu et al. 2016); the current software uses the Task 327 values.

Table A-8—IRI JPCP

Calibration          Global Value             Global Value                Current
Factor               (MOP 2008)               (MOP 2015)                  Software
C1 (Cracking)        0.8203                   0.8203                      0.8203
C2 (Spalling)        0.4417                   0.4417                      0.4417
C3 (Faulting)        0.4929                   1.4929                      1.4929
C4 (Site Factor)     25.24                    25.24                       25.24
SEE                  17.1 in/mile reported    0.35 m/km (22.2 in/mile)

Initial standard deviation: 5.4


Table A-9—CRCP Punchout (All CRCP Applications)

Transfer Function    Global Value    Global Value    Current
Coefficient          (MOP 2008)      (MOP 2015)      Software
C1                   2               2.0             2.0
C2                   1.22            1.22            1.22
C3/Apo               195.789         216.8421        107.73
C4/alphaPO           19.8947         33.15789        2.475
C5/betaPO            -0.526316       -0.58947        -0.785
SEE                  3.6 in/mile     3.6 in/mile*

Standard Deviation:
MOP 2008: -0.00609*PO^2 + 0.56242*PO + 3.36783
MOP 2015: 2 + 2.2593*POW(PO, 0.4882)
Current Software: 2.208*POW(PO, 0.5316)

*SEE value for 2015 is suspect since Figure 5-14 (PO resulting from Global Calibration) in the 2015 MOP is exactly the same as in the 2008 MOP, but the calibration values are different.

Table A-10—IRI CRCP

Calibration          Global Value    Global Value                Current
Factor               (MOP 2008)      (MOP 2015)                  Software
C1 (PO)              3.15            3.15                        3.15
C2 (Site Factor)     28.35           28.35                       28.35
SEE (Se)             14.6 in/mile    0.21 m/km (13.3 in/mile)

Standard Error (Se):
MOP 2008: (VARIRIi + C1^2*VARPO + Se^2)
MOP 2015: 5.4
Current Software: 7.08 ln(IRI) - 11


APPENDIX B1.
Local Calibration Survey 2018: Summary


Some of the comments received in the survey:


• “it was found that the model in the Manual of Practice (MOP) is questionable. for both the alligator and Longitudinal cracking, the standard error increases as the amount of alligator or longitudinal cracking increases.”
• “cementitious stabilized material base layers were never calibrated at the national level.”
• “Climatic model (soil PI, passing #200, etc.) is very sensitive to the IRI. The critical pavement performance criteria for pavement design in PavementME is always the IRI, never the fatigue cracks. To local calibrate the soil in the EICM in relation to IRI is close to impossible.”
• “‘borrowed’ a couple of LTPP sites from our neighboring states”
• “New materials used recently will require different calibration”
• “the major distress for new HMA pavement is top-down alligator cracking which is not modeled in MEPDG.”
• “JPC-longitudinal cracking and rutting are not modeled in MEPDG.”
• “There was either little to no cracking, or a lot of cracking in our calibration sections.”
(Appendix B2 is an Excel spreadsheet containing all survey results, available on request.)


APPENDIX C
Examples for Sections 7 and 8 of New Guide

The following examples are intended to provide a format/outline for the changes recommended to Section 7 and Section 8 of the New Local Calibration Guide, as described in Section 3.2 of this report. The examples cover only one distress (the JPC Transverse Cracking Model), but the intent is that the New Guide will cover each distress in the same level of detail shown here. The distress itself needs to be defined, and the equations relating to calibration need to be shown. The specific values that are compared to assess the global values need to be described, as shown here for Section 7. Section 8 needs to clarify which calibration coefficients are mechanistically modeled and which should be identified for local calibration. The specific effects of changing the calibration coefficients need to be discussed, along with an example of how to set up a regression to perform the calibration (if appropriate).

Section 7 - (Step 7.1) Assess Global Calibration Factors


JPCP Transverse Cracking Model
JPCP Transverse Cracking is composed of Bottom-Up and Top-Down Cracking.

Transverse Cracking, Bottom-Up (JPCP)—When the truck axles are near the longitudinal edge of the slab, midway between the transverse joints, a critical tensile bending stress occurs at the bottom of the slab under the wheel load. This stress increases greatly when there is a high positive temperature gradient through the slab (the top of the slab is warmer than the bottom of the slab). Repeated loading by heavy axles under those conditions results in fatigue damage along the bottom edge of the slab, which eventually results in a transverse crack that propagates to the surface of the pavement. A reasonable standard error of the estimate for total transverse cracking, or total percent slabs cracked, is seven percent. The PMED predicts the total percent slabs cracked, which includes both bottom-up and top-down cracking of JPCP.

Transverse Cracking, Top-Down (JPCP)—Repeated loading by heavy truck tractors with certain axle spacings, when the pavement is exposed to high negative temperature gradients (the top of the slab cooler than the bottom of the slab), results in fatigue damage at the top of the slab. This damage eventually results in a transverse or diagonal crack that is initiated at the surface of the pavement. The critical wheel loading condition for top-down cracking involves a combination of axles that loads the opposite ends of a slab simultaneously. In the presence of a high negative temperature gradient, such load combinations cause a high tensile stress at the top of the slab near the critical pavement edge. This type of loading is most often produced by the combination of steering and drive axles of truck tractors and other vehicles with similar axle spacing. Multiple trailers with relatively short trailer-to-trailer axle spacing are the other source of critical loadings for top-down cracking.

The equations that include the four calibration factors (C1, C2, C4 and C5) for the
transverse cracking model are shown as follows:
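In outline (a sketch following the MOP fatigue-cracking formulation; the equations printed in the MOP 2015 govern the exact notation), the allowable load applications, the accumulated fatigue damage, and the cracking transfer function take the form:

$$\log_{10}\left(N_{i,j,k,\ldots}\right) = C_1 \cdot \left(\frac{MR_i}{\sigma_{i,j,k,\ldots}}\right)^{C_2}$$

$$DI_F = \sum \frac{n_{i,j,k,\ldots}}{N_{i,j,k,\ldots}}$$

$$CRK = \frac{1}{1 + C_4 \cdot \left(DI_F\right)^{C_5}}$$

where N is the allowable number of load applications for a given condition, MR is the PCC modulus of rupture, sigma is the applied stress, n is the applied number of load applications, DI_F is the accumulated fatigue damage, and CRK is the predicted amount of bottom-up or top-down cracking.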


Cracking is computed separately for top-down and bottom-up cracking, and the total cracking is computed using the following equation:
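Sketched here in the MOP 2015 form (subject to the typographical correction noted below):

$$TCRACK = \left(CRK_{Bottom\text{-}up} + CRK_{Top\text{-}down} - CRK_{Bottom\text{-}up} \cdot CRK_{Top\text{-}down}\right) \cdot 100\%$$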

Measured Cracking and month: The dates of the measured cracking should be identified, and the month of the measurement is computed by calculating the time from the construction date to the measurement date. (Note that the equation above is from the MOP 2015; the typographical error that is in the MOP should be remedied, i.e., it should be CRKBottom-up, not CRKBottop-up.)

Predicted Cracking and month: % Cracking by month is computed in the software and included
in the JPCP_Cracking.xls file. The % Cracking used should correlate to the same months
defined for the Measured Cracking.

The Measured Cracking and Predicted Cracking are compared by graphing the % cracking and the residual errors, as shown below. The measured and predicted cracking should also be statistically compared as described in Appendix B. The global calibration factors used for the PMED runs, the R2, the standard error, and N should all be documented, along with the bias and residual errors, as described in Appendix B.
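As an illustration only (not part of the Guide or the PMED software), a minimal sketch of these goodness-of-fit statistics in Python, assuming paired arrays of measured and predicted percent slabs cracked, could look like the following; the definitions in Appendix B govern.

# Minimal illustrative sketch (assumption, not the Guide's code): goodness-of-fit
# statistics for measured vs. predicted percent slabs cracked, paired by section/month.
import numpy as np

def goodness_of_fit(measured, predicted):
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(measured)                                # N, number of measured/predicted pairs
    residuals = predicted - measured                 # residual errors
    bias = residuals.mean()                          # average bias of the predictions
    see = np.sqrt(np.sum(residuals ** 2) / (n - 1))  # standard error of the estimate
                                                     # (one common convention; Appendix B governs)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination, R2
    return {"N": n, "bias": bias, "SEE": see, "R2": r2}

# Hypothetical paired observations (percent slabs cracked):
print(goodness_of_fit([0.0, 2.0, 5.0, 12.0], [1.0, 1.5, 6.0, 10.0]))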


Section 8.2-Local Calibration


JPCP Transverse Cracking Model

Adjusting calibration coefficients for JPC Transverse cracking
C1 and C2 are coefficients based on laboratory testing, which determines the number of load repetitions. It is not recommended to adjust the C1 and C2 coefficients unless the C4 and C5 calibration factors cannot provide a reasonable regression. Additional laboratory testing can indicate different C1 and C2 coefficients for the conditions being calibrated.

The PMEDCracking.xls document from the PMED analysis runs provides Total_BU and Total_TD fatigue damage by month. These values are the fatigue damage predictions; the transverse cracking transfer function and the C4 and C5 coefficients are then applied to determine the total transverse cracking prediction. The fatigue damage values themselves are not based on C4 and C5, so the values from the PMED runs computed using the global calibration factors in Section 7 can be used directly here. Based on the TCrack equation noted in Section 7, the predicted TCrack is:
TCrack predicted = CRK_BU/100 + CRK_TD/100 - (CRK_BU/100) × (CRK_TD/100)
And the regression is:
Ypred = TCrack predicted at month i, based on the formula above,
Ymeas = Measured % transverse cracks at month i, and
X = Total_BU + Total_TD
A typical graph of measured % cracking vs fatigue damage is shown below:

Changes in C4 are expected to reduce the bias, and changes in C5 are expected to improve the precision of the model. The values can be adjusted individually based on the needs identified in Section 7 (i.e., whether the global values were particularly biased or imprecise), then adjusted at the same time, and the results compared.
An example from an Excel spreadsheet (only a portion shown) is below. Microsoft Excel Solver can be used to minimize the sum of squared error by varying calibration coefficients C4 and C5. The resulting calibration factors identified, the R2, the standard error, and N should all be documented, along with the bias and residual errors, as described in Appendix B.
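As an illustration only, a minimal sketch of the same sum-of-squared-error minimization in Python is shown below; the fatigue damage and measured cracking values are hypothetical placeholders, not project data.

# Minimal illustrative sketch (assumption, not the report's spreadsheet): least-squares
# calibration of C4 and C5, in place of Excel Solver.
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: Total_BU and Total_TD fatigue damage from the PMED cracking
# output, and measured percent slabs cracked for the same months.
fd_bu  = np.array([0.001, 0.010, 0.050, 0.200])   # bottom-up fatigue damage
fd_td  = np.array([0.002, 0.020, 0.080, 0.300])   # top-down fatigue damage
y_meas = np.array([0.0,   1.0,   4.0,  15.0])     # measured % transverse cracking

def predicted_tcrack(c4, c5, fd_bu, fd_td):
    # Cracking transfer function (MOP form): cracking fraction from fatigue damage.
    crk_bu = 1.0 / (1.0 + c4 * fd_bu ** c5)
    crk_td = 1.0 / (1.0 + c4 * fd_td ** c5)
    # Combine bottom-up and top-down cracking and scale to percent slabs cracked.
    return (crk_bu + crk_td - crk_bu * crk_td) * 100.0

def sse(params):
    c4, c5 = params
    return np.sum((y_meas - predicted_tcrack(c4, c5, fd_bu, fd_td)) ** 2)

# Start from the current global values (C4 = 0.52, C5 = -2.17) and minimize the sum
# of squared errors, the same objective used with Excel Solver.
result = minimize(sse, x0=[0.52, -2.17], method="Nelder-Mead")
c4_local, c5_local = result.x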


Checking Local Calibration Factors


The Ypred vs Ymeas relationship from Section 7 should also be reviewed and documented as noted in Section 7, using the local calibration factors identified above, instead of the global calibration factors, to calculate Ypred. The SEE (Se) value identified from the Ypred vs Ymeas graph is the one to compare to the values computed under Section 7 for the global calibration coefficients.

