NCHRP 20-07 (422) FR
User Review of the AASHTO ‘Guide for the Local Calibration of the
Mechanistic-Empirical Pavement Design Guide’
FINAL REPORT
Prepared for
Transportation Research Board
of
the National Academies of Sciences, Engineering, and Medicine
ACKNOWLEDGEMENT OF SPONSORSHIP
This work was sponsored by the American Association of State Highway and Transportation
Officials, in cooperation with the Federal Highway Administration, and was conducted in the
National Cooperative Highway Research Program, which is administered by the Transportation
Research Board of the National Academies of Sciences, Engineering, and Medicine.
DISCLAIMER
The opinions and conclusions expressed or implied herein are those of the contractor. They are not
necessarily those of the Transportation Research Board, the Academies, or the program sponsors.
TABLE OF CONTENTS
SUMMARY/ABSTRACT ....................................................................................................... v
REFERENCES ...................................................................................................................... 41
a. References included in Task 422 Report ......................................................... 41
b. State Local Calibration Research Reports ....................................................... 41
c. National/Other Reports related to Local Calibration ........................................ 46
d. Thesis Reports related to Local Calibration ..................................................... 46
FIGURES
TABLES
SUMMARY/ABSTRACT
This project, “User Review of the AASHTO Guide for Local Calibration of the MEPDG,” prepared a critical review of the Local Calibration Guide from the viewpoint of a general pavement practitioner. The results show that local calibration is at once straightforward and more complex than anticipated. The equations used can be straightforward and the statistical tests are relatively standard, but this review makes evident the myriad of factors involved, including the need for practitioners to contribute local expertise to calibration. The first two sections of this report provide background and support for the changes advocated in Section 3. A detailed description of the changes recommended to improve the Local Calibration Guide was developed based on this effort and is included as Section 3. It is anticipated that local calibrations performed with an improved Local Calibration Guide developed following these recommendations will be easier to perform and easier to compare to other local calibration efforts. As previous local calibration efforts have led to improvements in the models in the PMED software, this is expected to continue; it is therefore also recommended that a process be developed to gather and disseminate the latest information on calibration to all users.
Pavements are the most ubiquitous and complex engineered system in our environment.
Pavements connect every large city and every small city. Pavements connect to your house,
school, work, parks and other recreation areas. Millions of miles of pavement are walked, biked
or driven on every day by millions of people. How can something that is everywhere be so
complex? Pavements are man-made engineered systems constructed mainly from local materials, shaped by local conditions and local history, and affected by local environmental conditions and local
practices. A whole area of study is devoted to geological differences and soil engineering
(geotechnical engineering), which is the foundation of all pavements. Pavements are composed
of different materials (aggregates, binders, fillers, etc.) that are most cost-effective if procured
locally. Pavements are exposed to very different climates, ranging from the relatively constant 75
degrees F of South Florida to the below freezing temperatures of Northern Ontario. Pavements
are exposed to different levels of loading, from a pavement on a major shipping route to a rural
pavement connecting two remote cities. Human nature and personal preference, along with the previously mentioned local conditions and history, have all influenced the different ways pavements are structured, designed and constructed in different locations. Is it any surprise, then, that any
mechanistic model to describe the complex, diverse engineered system that we collectively call
‘pavements’ requires local calibration?
The mechanistic design model most widely used for pavements is AASHTOWare Pavement ME
Design (PMED). PMED uses the inputs of the different materials, different layer configurations,
different climates and different loadings and converts the stresses into distresses while also
considering the incremental effects of environment and fatigue. PMED has been nationally
calibrated with Long Term Pavement Performance (LTPP) data to address size effects and other
circumstances that prevent going straight from small-sample test results to full-size, in-place field conditions without some adjustment. The PMED software is licensed worldwide.
Implementation of the software is an ongoing effort, and local calibration is an important part of
implementation. Local calibration is also a necessary part of validation after major software
updates. The Pavement ME National Users Group recently identified local calibration and verification as the most commonly cited difficult or challenging part of implementation. In that same
meeting report, 14 of the 25 responding States noted that they had performed local calibration.
Based on the survey performed for this project, the number of States that have identified that they
have performed local calibration is 28, with 3 others currently in process of performing local
calibration. Several States have also performed local calibration more than once. Local calibration
will be an ongoing process used by States as improvements are made to the PMED software, so methods to make it less challenging in practice, and especially in the minds of the users, are key. More users are performing local calibration, and now is an opportune time to improve and
streamline the process. Based on the volume of users and interest in this area, and the knowledge
that has been gained in performing the numerous local calibrations, it is clear that changes are
needed to the AASHTO Guide for the Local Calibration of the Mechanistic-Empirical Pavement
Design Guide. It is also clear that a more automated method to perform calibrations and
recalibrations is a current need of the AASHTO Pavement ME Design software Users community.
As of the development of this document, that effort has already started as part of a separate
AASHTOWare initiative.
This report consists of the following sections: (1) a history of AASHTO Pavement ME software
versions and the changes that would affect local calibration, including a summary of the Survey on Local Calibration performed in April 2018, (2) a review of the current contents of the Local
Calibration Guide, and (3) specific recommendations on revisions to the Local Calibration Guide.
The Reference section contains the standard references cited in the report, along with a listing by State of the local calibration reports identified, thesis reports identified, and National or other reports related to local calibration. The Appendices contain (A) a listing of all current and
previously documented global calibration factors and equations, (B) details from the 2018 Local
Calibration survey and results (summarized in section 1) and (C) examples of what is expected in
the next version of the Local Calibration Guide (as described in section 3).
AASHTO Pavement ME Design started as NCHRP Project 1-37, Development of the 2002
Guide for the Design of New and Rehabilitated Pavement Structures in 1996. That project led to
the much larger NCHRP 1-37a, Development of the 2002 Guide for the Design of New and
Rehabilitated Pavement Structures, Phase II. Table 1 provides the past and current NCHRP
projects under the pavements area most related to local calibration. Other NCHRP projects have
also been completed in the materials area (NCHRP 4-36 and 9-30A) but are not included here.
1-40(D)  User Manual and Local Calibration Guide for the Mechanistic-Empirical Pavement Design Guide and Software
1-40(D)01  Technical Assistance to NCHRP and NCHRP Project 1-40A: Versions 0.9 and 1.0 of the M-E Pavement Design Software (Flexible)
1-40(D)02  Technical Assistance to NCHRP and NCHRP Project 1-40A: Versions 0.9 and 1.0 of the M-E Pavement Design Software (Rigid)
1-51  A Model for Incorporating Slab/Underlying Layer Interaction into the MEPDG Concrete Pavement Analysis Procedures (Completed 12/2016)
As noted above, a number of follow-on NCHRP projects have been completed or continue to this day, as this most complex, mechanistic-based pavement design software is continuously improved. A number
of these NCHRP projects have been incorporated into upgrades of the software, but projects
recently completed or still in progress also will lead to changes that will impact local calibration.
AASHTOWare has started a more regular schedule for future software updates. The AASHTO
Website notes that none of the FY 2017 enhancements “have any impact on the transfer function
calibration coefficients”. It is also noted that based on a technical audit of the software a global
recalibration of the flexible and semi-rigid pavements is planned in FY 2018. As these on-going
projects in Table 1 emphasize, local calibration should not be considered a one-time event, but part
of the evolution and improvement of the pavement design software. As the calibration factors
increase in number and complexity, it is increasingly important that a relatively standard method be
developed and computerized to assist in recalibrating the software.
1.1.1 Chronology of AASHTO Pavement ME Design Software Changes related to Local Calibration
In the span of less than two decades, AASHTO Pavement ME Design has evolved from the original MEPDG Design Guide software CD (shown in Figure 1), based on the original NCHRP 1-37A project, to the current AASHTOWare Pavement ME Design (PMED) software Version 2.4.

Figure 1 - June 2004 MEPDG Review Software CD

A timeline of these changes is noted in Table 2. Note that the term
MEPDG has generically been and continues to be used to describe the software and the design
process collectively, even though the software name changed to DARWin-ME in 2011 and
Pavement ME Design (PMED) in 2013. PMED has also been used to collectively refer to
DARWin-ME and PMED. In literature published after 2013, DARWin-ME was often noted as
AASHTO Pavement ME Design 1.0. However, the literature published before 2013, such as the FHWA
TechBrief on the AASHTOWARE Website and research reports completed before 2013, note the
2011 version as DARWin-ME. Other references published before 2013 also use DARWin-ME to
describe the software at the time, as that is what it was called before the 2013 rebranding to PMED.
Based on the number of references that use PMED Version 1.0 to actually describe DARWin-ME,
PMED Version 1.0 is also noted in Table 2. It should also be noted that the 2013 AASHTO Pavement ME Version 1 software is actually the same as the previous DARWin-ME; the software itself did not change in 2013, just the name.
2010 AASHTO Adopts Local Calibration Guide – November 2010 (current version)
2011 AASHTO DARWin-ME released (also later called Pavement ME Design Version 1.0)
2014 January – AASHTO PMED Version 2 (Build 2.0.19) (Citrix capability and layer-by-layer asphalt rutting added)
Changes affecting calibration: asphalt rutting model
April 2014– NCHRP 20-07/Task 327 results presented to Joint Technical committee on
Pavements (rigid pavement recalibration)
July 2014 – PMED Version 2 (2.1.22) (backcalculation added, subgrade moduli in
sensitivity analysis added)
2015 AASHTO MEPDG Manual of Practice, Second Edition (current version) approved by
AASHTO
Based on PMED Version 1
PMED Version 2 (Build 2.2)
Changes affecting calibration: reflection cracking model added, added semi-rigid
options, fully integrated CTE changes from NCHRP 20-07/Task 327
2016 PMED Version 2 (Build 2.3.0)
Changes affecting calibration: Added SJPCP Analysis Model based on BCOA-ME
2017 PMED Version 2.4 (Build 2.3.1)* – added Backcalculation Tool (BcT) Version 1.0
*The latest User Group report noted that V2.4 was the formal designation for this version,
but the AASHTOWare website and the software itself note it as V2.3.1
The chronology presented in Table 2 was developed using a combination of sources: Release Notes
from the AASHTOWARE website for the most recent changes (2013-current), and literature
reviews from different State local calibration studies (Iowa, Georgia, Washington State and
Wyoming) for the 2004 to 2013 changes. Some of the research reports also reference MEPDG V
0.6, V 0.9, V 1.003 or V 1.1, but no dates for those changes were found in the literature. Now that
the Pavement ME Design software is under the umbrella of AASHTOWare, any future changes
should be more controlled and accurately recorded.
Knowing which version of the software was used to perform a local calibration is necessary when comparing calibration values across different States or within a Region; comparisons of precision and bias could be misleading if the global calibration values changed between software versions.
It should be noted that changes in the PMED software do not automatically require
recalibration, even if a model in the software changes. If the Agency does not use the pavement type being modeled (e.g., semi-rigid bases in PMED Build 2.2), then there is no need to recalibrate if that is the only change in the software.
Figure 2 - Do you currently use AASHTO Pavement ME either for your State's/Province's pavement designs or for comparison designs (parallel effort)?
Twenty-nine States and Ontario noted that they have performed a local calibration, as shown in
Figure 3. Of these, twenty different research reports have been located that reflect local calibration
efforts for different States. The individual efforts may not have calibrated all the models or they
may have in a few cases only used Level 3 inputs, but it appears they all have been an effort at
local calibration. Another 3 States noted as Other (Maine, Nebraska and South Carolina) in the
map are in the process of calibration. Fourteen States noted they have not performed a local
calibration. As shown in the map, many States in the Northeast noted they had not calibrated, while some others (Maine) have been identified as performing calibration in-house. This could be an
opportunity for a potential Transportation Pooled-Fund research project for local calibration of
neighboring states or the region as a whole, depending upon how similar their pavement practices
are.
Figure 3- Have you ever performed (or contracted) a local calibration of AASHTO PMED?
Twenty-four States noted that they had performed local calibration after 2010, when the Local Calibration Guide was first published. Twenty of these States noted that the Local Calibration Guide was used in the research. Review of the available research reports identified that all reports published after 2010 recognized the Local Calibration Guide as a reference, and many specifically referenced using the process in the Local Calibration Guide.
A question on specific studies related to PMED elicited the responses noted in Table 3. A
number of States reported efforts in materials and traffic studies in the latest National Users
Group meeting report [Pierce, 2018], and these also showed up as the most performed specialty
studies in this survey.
Table 3 - Responses to “Have you performed (or contracted) specialty studies for PMED?”

Specialty Studies Performed?        Number of responses
Traffic Data Analysis               26
Materials Catalog/Categorization    25
Sensitivity                         20
Training                            18
Climate Specific Study              11
None                                 6
Sensitivity studies were also identified by a high number of respondents, and discussion and use of sensitivity studies were also found in the individual research reports on local calibration. Local
calibration is obviously recognized as just one part of implementing PMED.
Another question was related to what conditions or issues were encountered as part of local
calibration; the responses are noted in Figure 4. Sample size was identified by most. Both
examples used in the current Local Calibration Guide noted sample size issues. Sample size issues
also appear in the National User Group report, especially as related to JPC pavements. Other
potential responses dealt with pavement management system data (PMS). At least one State noted
that they used PMS data with conversions along with PMS data directly. This combination was
seen in the local calibration research reports and was also shown in the examples in the Local
Calibration Guide.
Figure 4 – Indicate any of the conditions you encountered in performing the latest local calibration
(Check all that apply)
States checking the “unexpected results” included North Carolina, whose local calibration
research effort identified issues with the unbound aggregate base, which led to additional research on characterizing aggregate bases [Chow et al., 2013].
The last question in the Survey was open-ended and requested: Did you encounter any other special
situations in local calibration you can share? A portion of the responses are noted below:
• “it was found that the model in the Manual of Practice (MOP) is questionable for both the
alligator and Longitudinal cracking, the standard error increases as the amount of
alligator or longitudinal cracking increases.”
• “cementitious stabilized material base layers were never calibrated at the national level.”
• “Climatic model (soil PI, passing #200, etc.) is very sensitive to the IRI. The critical
pavement performance criteria for pavement design in PavementME is always the IRI,
never the fatigue cracks. To local calibrate the soil in the EICM in relation to IRI is close
to impossible. “
• “"borrowed" a couple of LTPP sites from our neighboring states “
• “New materials used recently will require different calibration “
• “the major distress for new HMA pavement is top-down alligator cracking which is not
modeled in MEPDG. “
• “JPC-longitudinal cracking and rutting are not modeled in MEPDG. “
• “There was either little to no cracking, or a lot of cracking in our calibration sections. “
• “major distress for”… “new (doweled) JPCP is multi-cracking (including longitudinal
and transverse cracking) and rutting due to studded tire wear, but longitudinal cracking
and rutting are not modeled in MEPDG.”
• “Concrete cracking quantities made calibration difficult. There was either little to no
cracking, or a lot of cracking in our calibration sections. During the most recent
recalibration of the concrete models, a model to estimate the curl/warp input was
developed. This was done to induce more cracking in our design to better match our
experience with concrete.”
• “Did not have any field validation of rutting per layer and our pavement management
data does not distinguish between top-down and bottom-up cracking.”
The comment about borrowing LTPP sites is another area where pooled-fund research may be
beneficial: States could identify pavement sections that use similar materials and do a pooled-fund
study to monitor those pavements as a region, like a Regional LTPP program. Comments on the
lack of distress or variety of distress have also been noted in the MEPDG User Group meetings.
Other comments relate to the conditions that are not modeled in the software, or specific concerns
with the results of the local calibration.
Calibration of the AASHTO Pavement ME Design software involves correlating the measured
distresses to the predicted distress from the software. A perfect correlation would have measured
values = predicted values. In the real world there are differences between the measured and
predicted values. The standard error of the estimate (SEE or Se) is a measure of these differences,
so it is a measure of the accuracy of the model. A smaller SEE value typically, but not always,
indicates a more accurate model.
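As a point of reference, a commonly used form of these two measures is shown below; this is a general statistical definition included here for illustration (the exact degrees-of-freedom convention should follow the LCGuide), with y_i the measured values, ŷ_i the predicted values, n the number of observations, and p the number of fitted coefficients:

\[
\text{bias} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right),
\qquad
S_e = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n - p}}
\]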
1.3.1 Global Calibration and the Manual of Practice (MOP) 2008 and 2015 Versions
Global calibration was performed for the original NCHRP 1-37A project which developed the
original MEPDG software, and then again for the NCHRP 1-40 projects that also developed the
Local Calibration Guide and the original Mechanistic-Empirical Pavement Design Guide, A
Manual of Practice (MOP). AASHTO adopted the MOP officially in 2008. The AASHTO version
(AASHTO, 2008) notes that the calibration factors included in that document are based on the 1-40D project, but the rigid calibration factors are the same as reported in the original 1-37A project.
The current edition of the MOP (AASHTO, 2015) does not specify the basis for the calibration
factors noted for the flexible or rigid models, although they have changed from the 2008 MOP,
especially for rigid pavements (see Appendix A). It is assumed based on documentation from the
NCHRP 20-07 Task 317 project (the project that was the basis for the 2015 MOP) that some of
the changes from the 2008 to the 2015 version were based on documented technical errors found
with the original 2008 MOP, changes to align with what was in the then current (when the project
started in 2011) AASHTO DARWin-ME software, and changes to the rigid calibration factors due
to the NCHRP 20-07/Task 288 project. But as shown in Appendix A, the Task 288 values are
different than what is in the 2015 MOP. Some time elapsed between the completion of Task 317
and the publishing of the 2015 MOP, and therefore those changes most likely occurred during this
time span.
The 2008 and 2015 versions of the MOP both note that they include the following prediction
models:
The 2015 MOP also identified that it changed calibration coefficients in the following models:
o HMA Rutting model (Unbound materials), ks1
o HMA Fatigue Cracking, kf2 and kf3
The effect of these changes for the Flexible and Rigid models is discussed below.
FLEXIBLE:
Based on the values of the calibration coefficients in the current PMED software (Version 2.4, Build 2.3.0), the flexible (HMA) calibration coefficients in the software are the same as noted in
the 2008 and 2015 versions of the MOP except as described below. These three changes seem
minor but could have a major impact on local calibration since they affect several different models
(cracking, rutting and IRI). Based on this it is questionable to compare local calibrations (i.e. SEE
(Se) values) that were performed using different versions of these values.
The HMA fatigue/load related cracking coefficients (kf2 and kf3) went from negative values to
positive values. These coefficients were both non-zero exponents which related the allowable
loads to the strain (kf2) and the dynamic modulus of the asphalt (kf3), respectively, as noted in the
equation below. A local calibration performed using these values as negative instead of positive
would not be comparable since it would totally change the equation that is being calibrated. Two
recent local calibration research reports noted these values as negative as shown in Section 1.3,
Table 2.
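For reference, the general form of the fatigue transfer function as commonly cited from the MOP is reproduced below as an assumption (the authoritative version should be taken from the MOP itself); kf1, kf2, and kf3 are the global calibration coefficients and βf1, βf2, βf3 the corresponding local calibration coefficients:

\[
N_f = k_{f1}\,C\,C_H\,\beta_{f1}\left(\varepsilon_t\right)^{k_{f2}\beta_{f2}}\left(E_{HMA}\right)^{k_{f3}\beta_{f3}}
\]

where N_f is the allowable number of load repetitions, εt is the tensile strain at the critical location, E_HMA is the HMA dynamic modulus, C is a laboratory-to-field adjustment, and C_H is a thickness correction. With this form, a sign change in kf2 or kf3 inverts the role of strain and modulus in the transfer function, which is why calibrations based on the negative and positive versions are not comparable.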
The HMA rutting coefficients (k2r and k3r) were switched. The 2008 MOP k2r became the 2015
MOP k3r and vice versa. These calibration coefficients were also non-zero exponents, k2r for
loading and k3r for pavement temperature as shown in the equation below. Since these were not
simply linear changes, comparing different local calibrations that used these factors unswitched
would also totally change the equation that is being calibrated. Two recent local calibration
research reports identified the switched values as shown in Section 1.3, Table 2.
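Similarly, the general form of the HMA rutting transfer function as commonly cited from the MOP is reproduced below as an assumption (again, the authoritative version should be taken from the MOP), with k2r as the exponent on load repetitions n and k3r as the exponent on mix temperature T:

\[
\Delta_{p(HMA)} = \varepsilon_{p(HMA)}\,h_{HMA} = \beta_{1r}\,k_z\,\varepsilon_{r(HMA)}\,10^{k_{1r}}\,n^{k_{2r}\beta_{2r}}\,T^{k_{3r}\beta_{3r}}
\]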
The HMA IRI equation includes a Site Factor component. The method to compute the SF changed
from the 2008 MOP to the 2015 MOP as shown below. Based on the fact that the comparison of
measured and predicted IRI in Figure 5-6 in the 2008 MOP is exactly the same as Figure 5-6 in the
2015 MOP, this appears to have had absolutely no effect on the calibration. But if different
equations were used in local calibration a difference would be expected in calibration. Some of the
reports noted in Section 1.3 used the 2008 MOP value, and some used the 2015 MOP value.
Michigan identified this difference and used the 2015 MOP value. Iowa identified a SF equation
different than either of the ones noted in the MOP versions.
2008 MOP:
SF = AGE[0.02003(PI + 1) + 0.007947(Precip + 1) + 0.000636(FI + 1)]

2015 MOP

IOWA:
SF = AGE(1 + 0.5556 FI)(1 + P200) x 10^-6
RIGID:
Based on the values of the calibration coefficients in the current PMED software (Version 2.4, Build 2.3.0) and the 2008 and 2015 MOP, as shown in Appendix A, the rigid (JPCP and CRCP)
model has been globally calibrated a number of times and therefore the global calibration
coefficients have changed numerous times. Throughout the different global recalibrations, the C1
and C2 values for the Transverse cracking model and the IRI calibration coefficients (C1-C4) are
the only values that stayed constant.
Realization of an issue with the method of testing the concrete coefficient of thermal expansion
(CTE) drove the first separate recalibration that was completed in the 2011/2012 timeframe:
NCHRP 20-07, Task 288. Unusual results using the Task 288 recalibration values, including
significantly thinner pavements just due to the different calibration factors, drove the second
NCHRP 20-07, Task 327, which started in 2012/2013. The concrete recalibration project, Task
327, was completed in April 2014. The recommended values from the Task 327 project are the
values now in the current PMED software version 2.3. Neither the Task 288 nor the Task 327 values are in the 2015 MOP, but both have apparently been used by States performing
local calibration. Two different versions of Task 288 values were found and are noted in Appendix
A, one from the Task 327 report and the other from a journal paper that described local calibration
of the JPC (Mu et. al. 2016). Michigan DOT also recently identified significant slab thickness
decreases between PMED Versions 2.0 and 2.2. Michigan DOT has recently recalibrated their
rigid pavements again, this time to PMED version 2.3.
The Task 327 report notes that the use of different software versions and different
calibration data sets led to the concerns that prompted the need for the Task 327 project in the first
place. Based on the many changes in the software it is not recommended to compare local
calibrations to global calibrations (i.e. SEE (Se) values) that were performed using different
definitions of global coefficients.
- The LCGuide left out a lot of the details that were in other documents (MEPDG 1-37A reports and Appendices, MOP); therefore the different calibration efforts, even though they did follow the LCGuide, did not perform it the same way.
- The LCGuide did not explain the meaning and use of calibration coefficients, apparently relying on the MOP definitions and the discussions in the Appendix of the MEPDG; therefore some calibration efforts may have been calibrated incorrectly (i.e. changing mechanistic model coefficients using just statistical methods and no laboratory testing).
- The LCGuide did not provide a clear means of consistently documenting the results of a local calibration effort; therefore it is not easy to compare different calibration efforts. Some provided SEE (Se) values for split samples separately, and some did not clearly note what global calibration factors were used.
- Since in some cases the global calibration coefficients have changed a number of times, and in some cases the mechanistic models themselves have changed, it is difficult to compare the different SEE (Se) values, since they are based on different global calibration factors.
The LCGuide was developed in 2007/2008 and has not been updated in the decade since, but the
software has changed numerous times and much knowledge has now been gained from different
local calibration efforts:
- Issues in the models have been identified and have led to new research for improvements to the models
- Outside factors have also complicated some of the models (i.e. the CTE testing issue)
Tables 3-7 show the results of local calibration efforts from some of the most recent State-sponsored research reports. Each of these studies was performed by a different research team.
Table 3 identifies the number of pavement sections that were used in the calibration and how the
results were validated. As noted, the software used by the reports varied from DarWIN ME 3.1
to the latest version of the PMED (Version 2.3). Although the different SEE (Se) values are
included in the Table, due to the discussion in Section 1.3.1, comparing these values is really not
recommended. Many State reports identified at least one distress that was not prevalent in their
state, and the SEE (Se) is reflective of the distress values used, so if the distress is small the SEE
(Se) will be small. This is evident in the case of Kansas in Table 3: they have an extremely low SEE (Se) value for faulting, but they also have extremely low faulting. The States using more
than the minimum number of sections typically used their PMS data and the ones with the
smaller number of sections did field testing to confirm some of the inputs. Some States did not
calibrate all the models. Even comparing specific models and using the most recent reports, the
global calibration values that were used were not the same and other considerations include:
• If PMS data was used, the State had to convert their methods of collecting cracking into
the method used by PMED
• The validation procedures used were not always clearly defined
• Some provided different options for local calibration factors (noted as value1|value2 in
the cell)
• Different methods were used for calibration in some cases
• Some research reports identified potential errors in the previous (2008) or current MOP
(2015)
Some specific facts from the latest research reports are noted below:
Iowa noted that both the FAULTMAX equation and the IRI equation for JPC in the 2008 MOP
did not match the software they were using (PMED 2.1.24). (The equations are also the same in
the 2015 MOP.) Iowa also defined and used additional statistical values beyond R2 and SEE (Se) to evaluate regression models: LOE (line of equality) R2 and MAPE (mean absolute percentage error). Although they calibrated to V2.1.24, they did report on preliminary studies comparing the
results of V2.2 with their local calibration factors developed using V2.1.24. They noted
differences in IRI prediction between PMED V2.1.24 and V2.2, which they attributed to changes
in the Freezing Index Factor and EICM (Enhanced Integrated Climatic Model) changes. Iowa
also noted that their flexible pavement distress was mainly reflective cracking, so they will need
to fully recalibrate to PMED Version 2.2, which now includes the new reflection cracking
model. Iowa used two approaches to IRI calibration, one using the local calibration values for
the distresses and one using the national values for the distresses.
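For readers unfamiliar with these measures, general definitions are given below (these are standard statistical forms, not taken from the Iowa report itself); MAPE expresses the average absolute error as a percentage of the measured value, and the LOE R2 evaluates the residuals about the 1:1 line of equality rather than about a fitted regression line:

\[
\text{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|,
\qquad
R^2_{LOE} = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}
\]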
Kansas used PMS data for the distresses used for this calibration effort and Level 3
traffic, climate and materials inputs. They found differences in their HMA sections related to
subgrade modulus, so they calibrated asphalt based on two levels of modulus. They did not
identify any bottom-up asphalt cracking in their PMS, so they did not calibrate the asphalt
bottom-up fatigue cracking model. They identified very small faulting values of mainly less than
0.01 inch in their data (The accuracy of the faulting test itself is +/- 1/32” or 0.03 inches). They
did not include calibration of the JPC transverse cracking model in the report, but they noted
they did calibrate the JPC IRI model.
Louisiana used a new process for fatigue and rutting, setting one of the fatigue (bf2) and
rutting (br2) factors to 1 and only changing the other two; they noted that the description of the
process was available as a published paper [Wu et al., 2015]. They also used a Finite Element
Analysis to determine a new C1 factor for JPC cracking based on their typical 20-foot joint
spacing instead of 15 ft. They checked reasonableness of their local calibration results by
running 15 projects with both the 1993 design procedure and the calibrated PMED and comparing the results. They did not
include soil-cement projects in their calibration due to concern with the reflective cracking
model.
Michigan compared results for PMED V2.0, 2.2 and 2.3 in their latest report (2017).
That report identified changes from V 2.0 to 2.3 that would require recalibration of the rigid
pavement but not the flexible pavement. They also had a concern (similar to Iowa) that the
freezing index was different between PMED V 2.0 and 2.2 and it was one of the factors that
affected their results. An earlier (2015) Michigan report shared a method to compute flexible
pavement rutting contribution using a transverse profile.
Virginia used PMS data in their local calibration. They did not have enough JPC sections
to perform a calibration. They did not calibrate flexible top-down cracking, thermal cracking or
chemically stabilized layer fatigue due to pending revisions in the software or a lack of
sensitivity to Virginia’s conditions. They noted a lack of initial IRI values for their projects and
they were concerned that this may have been why they had difficulty in calibrating the flexible
IRI model. They used residual plots to compare the rutting and fatigue errors and identified an
overprediction of rutting at higher AADTT using this method. Some example designs were
performed for different AADTT levels as a final check.
As the next version of the PMED software is expected to incorporate many improvements that will
require recalibration, many States will benefit from a new LCGuide that incorporates the needs
identified. It is also appropriate to have software that assists in local calibration to provide some
consistency in the specific methods and procedures that are considered best practice for local calibration.
As such, a new Local Calibration Guide is necessary that clearly and concisely describes the
intent of the calibration coefficients and also provides an outline to document the local calibration
coefficients developed to provide the most benefit to other States and to the States themselves for
their future calibrations. The new LCGuide should clearly note:
Calibration coefficients that are mechanistic in nature and should be changed only based on
new laboratory studies
Calibration coefficients that should/can be locally calibrated statistically
Calibration coefficients that can be adjusted based on finding a ”best fit” to the field data
statistically
Calibration coefficients that can only be adjusted by trial and error based on PMED
software runs
As part of this project, the existing “Guide for the Local Calibration of the Mechanistic-
Empirical Pavement Design Guide” was categorized by content, purpose and understandability.
The results of this effort are presented in this section. General/summary comments are provided
at the end of this Chapter. Nomenclature used:
– The ‘MOP’ is the 2015 edition of the AASHTO Manual of Practice
– MEPDG is the previous term for the AASHTO PMED software
– SEE is the term used here to describe the standard error of the estimate (also noted
as Se in the Local Calibration Guide). It is a measure of the difference between the measured and predicted values in calibration.
1.0 INTRODUCTION
The introduction very generally familiarizes the reader with the MEPDG design procedure by
presenting the Conceptual Flow chart for the MEPDG Design process. Distress prediction
models (transfer functions), calibration and validation are also discussed in a very generic
manner. The Introduction also notes that the performance models in the MEPDG were
calibrated on a global level.
4.2 Validation
This section describes the basics of validation: an additional and independent set of data to test
the calibration. It mentions using chi-square or t-tests on SEE (se) to provide a check on
validation.
It notes the null hypothesis (predicted values are not statistically different than measured values)
should be checked for each performance indicator. It also suggests an 80/20 split sample for
calibration/validation.
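As a simple illustration of that split, a minimal sketch is shown below; the section identifiers and the random seed are illustrative assumptions, not part of the LCGuide:

```python
# Minimal sketch: reserve roughly 20% of the calibration sections for
# independent validation, per the 80/20 split suggested in the LCGuide.
import numpy as np

rng = np.random.default_rng(42)                                # fixed seed for repeatability
section_ids = np.array([f"S{i:03d}" for i in range(1, 51)])    # hypothetical section IDs

shuffled = rng.permutation(section_ids)
n_val = int(round(0.2 * len(shuffled)))                        # ~20% held out
validation_set = shuffled[:n_val]
calibration_set = shuffled[n_val:]

print(len(calibration_set), "calibration sections;", len(validation_set), "validation sections")
```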
This error is Distress/IRI dependent and is different for different distress types based on how the
distress is measured (can the measurement be repeated, reproduced, and how variable is the
distress). Example: the mean rut depth is an estimate of the true mean value.
The flexible and JPC examples are noted under each heading; comments related to both
examples are noted here:
• Both examples are based on an ‘expedited time frame’ and do not have enough samples.
• They each show a ‘simplified’ sampling template, not a recommended template.
• The factorials are unbalanced and do not have replication in either example.
• The estimated number of segments needed is noted for both examples, but how it was computed is not shown.
• The use of 50% reliability, as noted in Step 7, is not clearly explained.
• Wording in the examples repeats wording found in the Local Calibration Guide, without
adding additional context.
• No examples were provided for how statistical parameters were computed. The sections
chosen had very little distress in some cases.
• The equations noted under Table A3-5 use different nomenclature than the equations in
Step 3 of the Guide.
• The discussion on the faulting coefficients (pages A-111 and A-114) and what they affect
would be more valuable in the body of the report
• A-117 recommends a comprehensive sensitivity analysis be performed but provides no
references or other recommendations
The minimum information necessary to perform local calibration and whether it is in the current
Local Calibration Guide (LC Guide) is noted in Table 8:
Not all of the calibration factors are included in the current Local Calibration Guide, and very
few of the transfer equations are included. The Manual of Practice (MOP) Chapter 5 includes
the transfer equations but does not always define the relationship between the calibration factors,
in some cases the original NCHRP 1-37A documents and Appendices are required. The statistics
that are included in the Guide could be explained better with figures and consistent
nomenclature. Statistics that are mentioned but not shown should be presented with examples
for clarity.
The calibration steps described in the Local Calibration Guide have been used by many researchers
and although improvements have been incorporated by others, the basic steps have stood the test
of time. With more entities performing local calibration, the link between practitioner involvement in local calibration and successful implementation has been acknowledged. Over the
last decade, statistical methods and concepts have become more mainstream, and so the
terminology needs to be updated to address and engage the non-statistician. The Examples
provided in the Guide were based on the best data at the time, but the amount and quality of data
that was used in the Examples over 10 years ago is now considered inadequate. Due to these
considerations, and since the body of the LCGuide (Sections 1-7) is only 33 pages in length (only 16% of the entire 202-page document, including front matter and back cover) while the Examples make up the bulk of the Guide, it is appropriate to revise and
update the Local Calibration Guide.
Some concerns with the current Guide document are that it does not lay out the framework/procedure until almost the last Section (Section 6) and that the Guide does not have enough
information to perform a local calibration. The Manual of Practice (MOP) is necessary to even set
up the equations to do any sort of analysis. The MOP is now included in the AASHTO Pavement
ME Software as part of the Help screens but searching for equations and copying them from a help
screen is not ideal. It is noted that some calibration factors can be adjusted outside the software
and others must be adjusted by running the software, but this is not clearly delineated. Specific
concerns:
As such the Local Calibration Guide is not a stand-alone document, since it relies on the Manual
of Practice (MOP, specifically Chapter 5) and the Appendices to the NCHRP 1-37A report,
and even though it does reference the MOP and NCHRP 1-37A it does not clearly state that they
are absolutely necessary for understanding and performing local calibration correctly.
The statistics noted in Sections 2, 4 and 5 are mainly traditional statistics, but they could be
explained better with some simple graphs and consistent nomenclature. The Examples do not
provide examples of exactly how the statistics are computed, and in some cases the nomenclature is
different for the statistical terms in Section 6 and the Examples in the Appendix.
The following provides recommended revisions to the Guide for Local Calibration of the
Mechanistic-Empirical Pavement Design Guide. First, general recommendations are presented,
then each section of the proposed revised Local Calibration Guide is covered in detail. The Title
of the Guide is also recommended to be revised: ‘Guide for the Local Calibration of the
AASHTOWare Pavement ME Design Software’ is suggested.
NOTE: These are the standard conventions used in the recommendations that follow; others can be used in the Guide, as long as they are defined and used consistently:
• MOP is used for the Manual of Practice
• LCGuide is used for the Local Calibration Guide
• LCPMED is used for the new, revised Guide for Local Calibration of
AASHTOWare Pavement ME Design Software
• The Local Calibration Flow Chart (Existing LCGuide Fig 6-1 and 6-2) should be in the
Introduction. Each new Section will be one part of the Step-by-Step procedure (or:
current Section 6 of the LCGuide turns into new Sections 1-11 of the LCPMED)
• New Sections 1-11, (which describe the step-by-step procedure) should provide
examples of generic tables and matrices, statistical values, with computations or
reference location in Appendix with examples
• Statistical terms should be described for a more general audience (not
statisticians) using graphics if possible and including an example of how to
specifically compute
• Statistical terms that are used repeatedly should be in an Appendix (i.e. appendix
would describe model error terms, hypothesis testing, calibration and validation
and provide a general example of each, if one is not in the Body of the document)
• The Appendices should include:
• A. General “Lessons Learned” in performing local calibration
• B. Statistical definitions and discussions
Existing LCGuide Sections 1-5 are removed. Some parts of these sections can be included in the
new Sections as noted specifically below in the Detailed Recommendations, or as part of the new
Appendix B related to statistical terms. Others, including the Examples in the existing Appendix,
will not be included or used.
Each Step in the procedure needs more guidance and some specific examples of what is reasonable
or expected. Statistical explanations in the Body and Appendix B of the document should include
figures and graphs to better explain their meaning. The three PMED Local Calibration webinars
provided some of these types of examples in both generic and State specific cases. A very simple
example from the PMED webinar, which is not in the body of the existing LCGuide, is shown in
Figure 5.
Generic examples should be used as much as possible in the Guide, since State specific examples
may not always translate well to other conditions/States. Consistent nomenclature is imperative
for ease of use and understanding. Using the specific terms and calibration constants consistent to
what is in the PMED software would provide a needed link to the software. Additional quality
control should be performed to assure that the equations and default calibration factors are
accurate, as problems with local calibration due to errors in the MOP have been noted in the
literature.
Specific State reports are noted in the Detailed Recommendation section below to provide
potential examples to be used or considered in the revision (the State is noted in italics and
corresponds to the list of State Calibration Reports in the Appendix of the final Review report;
these reports will be placed on the AASHTOWARE website for ease of access). Additional
guidance should be mined from the three PMED local calibration webinars located on the me-design.com website.
It is also recommended that the rewrite of the Guide only be performed by an entity that
has actually completed a local calibration effort. The intricacies and interrelationships of the steps
and statistical methods can be underestimated and not fully understood by someone who has not been fully involved in local calibration. The final rewrite should be reviewed by a layperson who
is not as familiar with the statistics or the local calibration process.
The following specific revisions (by proposed new Section) are recommended for the new
LCPMED:
New Title:
Guide for the Local Calibration of the AASHTOWare Pavement ME Design
Software
Introduction
The current description in paragraph 1 needs to be updated but overall it is appropriate. Other
paragraphs need to be revised. Content is too repetitive, too technical/statistical and in places
unclear. This section needs to include the Local Calibration Flowchart. The Introduction should
include a description of how Steps 1-6 relate to Steps 7-11 and how setting up Steps 1-6
correctly and judiciously could assist in the inevitable redoing of Steps 7-11 when needed (i.e.
major software changes). A caution on adding bias into recalibration due to improper site
selection is also warranted. A description of how the LCPMED is arranged should be provided.
primary tier, identify potential unusual factors by looking at your data. The thickness and
climate typically lead as primary tiers.
Replicate projects should be described as helping to minimize the error of the model, using the
explanation of replicate from Note 2 from the existing LCGuide.
ADD: A generic Sampling Template should be presented for both Flexible and Rigid pavements
to provide a relatively consistent method to address sampling design. Reports from states such
as Missouri (Table 1), Arizona (Table 11 & 12), Virginia (Table 4 & 5), Louisiana (Table 5)
should be reviewed for examples of Templates to use, but a Generic Sampling Template or
Templates should be provided in the LCPMED. Kansas (Table 3.13 and 3.14) had an unusual
sampling template based on subgrade Mr; it would also be good to include an example of an
unusual template like this and the reasons behind it.
The equation noted appears related to the classic Cochran sample size formula (Cochran, William G. (1977) Sampling Techniques, 3rd Edition, John Wiley and Sons, New York, NY.), which is based on the proportions needed for a representative sample, based on the estimated proportions expected to be encountered. Cochran's sample size is computed by multiplying a Z^2 value by the variability (p*q, where a maximum variability could be construed as a threshold) divided by the desired precision squared (or tolerable bias), as shown below.

Cochran's sample size:  n = Z^2 * p * q / e^2
It is not clear if this value recognizes or is affected by the method of validation (i.e. split sample
vs jack-knifing). It was shown in one report (Michigan, Table 3-3) that if you use the same value
(i.e. 90%) for the confidence interval and reliability (which many State reports did) the equation
in 6-1 simplifies to the square of the distress threshold divided by the standard deviation of the
distress as noted below.
n ≈ (distress threshold / standard deviation of the distress)^2, if C.I. = reliability    (1)
The values that are included in the current LCGuide example (Table A2-5) do not appear to follow this equation; the N values noted are only the threshold values divided by the Se, not squared, making the N values much lower. In all the State reports reviewed, the N computed
for flexible pavement IRI is much higher (3 to 4 times) than that needed for the individual
distresses of cracking and rutting, but then in most cases the IRI value is noted as being
neglected based on the argument that if the distresses are accurately modeled then the IRI will be
accurate. This argument needs to be revisited and specifically addressed for reasonableness.
It should also be noted that using equation (6-1) from the LCGuide, if a State has a lower
distress threshold than the Nationally calibrated model, the required N for that distress for that
State will be lower. At the same time, it has been documented that it is harder to calibrate
correctly with low levels of distress, so this result does not appear appropriate and it may require
the use of a different equation, or the use of a minimum value for sample size in all cases, like
the “standard minimum” 30 samples used in most statistics textbooks.
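A minimal sketch comparing the two sample-size forms discussed above is given below; the proportion, precision, threshold, and standard-deviation values are illustrative assumptions only, not recommended inputs:

```python
# Minimal sketch: Cochran's proportion-based sample size versus the simplified
# (distress threshold / standard deviation)^2 form noted above.
from scipy.stats import norm

def cochran_n(p, e, confidence=0.90):
    """n = Z^2 * p * (1 - p) / e^2 (Cochran, 1977)."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided Z value
    return (z ** 2) * p * (1 - p) / e ** 2

def simplified_n(threshold, std_dev):
    """(distress threshold / standard deviation of distress)^2."""
    return (threshold / std_dev) ** 2

print(round(cochran_n(p=0.5, e=0.10), 1))                      # maximum-variability case
print(round(simplified_n(threshold=0.75, std_dev=0.25), 1))    # e.g., rutting in inches
```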
ADD: Many State local calibration reports did reference the minimum values shown on Page 6-5
of the current LCGuide. The report that is referenced in the existing LCGuide for these values
(Rada, 1999) does not have an example of how these were computed. Since the minimum values
given in the LCGuide apparently are being used the most, they should also be re-evaluated and any stipulations with using these instead of computing a value should be emphasized. The actual distress values and SEE (Se) values that were used in calculating the minimum values
should also be included in the LCPMED to provide an ability to judge if these are applicable to
different situations.
Step 5.1: The AASHTO References need to be reviewed and updated to the current standards.
The two options under Step 5.1 need to be updated: with the new HPMS requirements, States have already had to identify some method to convert some of their data to LTPP definitions (especially cracking), or they are collecting it specifically in LTPP format for the HPMS sections; either way, this section needs to be updated in light of that information.
Step 5.2: The existing description in the LCGuide appears appropriate, but it does not address
what to do if the maximum distress values are lower than the threshold values. This was
identified in many of the local calibration research reports. Do you reexamine your non-selected
pavements to try to identify pavements with higher distress, at a potential cost of more variability
of input data? Do you specifically identify pavements that can be left unmaintained to gather
additional distress data for the future?
ADD: A recommendation or suggestions should be provided.
Step 5.3: The existing description in the LCGuide is appropriate but does need to be updated and
the last three paragraphs, which are heavy on statistical terms, need to be written in a more
general manner. Reference to sections that are not included in the new LCPMED (i.e. Section 5)
should be revised.
ADD: Chapter 9 of the MOP should be referenced for the field test portion. An additional
paragraph should be added recommending that the researcher label any time series graphs used for
analysis with location (road number, county), year and age of the pavement section and not just a
number label and age. The DOT personnel may be able to assist in analysis of outliers if they
have these details easily accessible when viewing the graphs. This also warrants a discussion on
the need for the modelers and data owners to discuss and review any anomalies in the data
together.
Step 6.2: The existing description in the LCGuide, in general, is appropriate. Taking cores for
HMA cracking initiation was identified in the research reports. An issue with this that should be
noted is that it is difficult to confirm bottom-up cracking, but top-down cracking can be
confirmed. Potentially due to issues with the rutting model and the changes to the rut model that
only recently allowed the rutting to be characterized by layer, or the cost and inconvenience of
trenching, trenching was rarely identified in the research reports reviewed. An exception was
Colorado. Colorado (Table 29) trenched 3 sites and identified 50-70% of the rutting in the
asphalt layers, 5-20% in the aggregate base and close to 25% for the top 12 inches of the
subgrade.
Step 6.3: The existing description in the LCGuide is acceptable but needs to be updated.
ADD: A new Step 6.4 should be added. This step should recommend that the documentation of
the sites, PMED software project designs that are developed, Distress values from Section/Step 5
and any other data related to the local calibration should be organized and a method put in place
to maintain and update the performance measurements for these sections in preparation for future
recalibrations. It should be recognized, that depending upon future software modeling changes
the PMED projects themselves may need to be redone for future calibrations (that is the case
with some projects developed with PMED version 1.1) but having all the information readily
available will make the entire process much easier.
The sentence related to using 50% reliability should be more prominent. A general discussion on
bias and errors is warranted here like what is in the existing Section 4, but the details of the
statistical components (i.e. existing Section 2 and 5) should be referenced in an Appendix. Parts
of the existing Section 4 (which includes calibration and validation) would be appropriate to lead
off this section, including a discussion on the different approaches to calibration (direct vs
incremental damage), but they need to be rewritten for a more general audience.
Step 7.1 should cover each distress model separately, matching the order of the new ‘Distresses
and Transfer Function’ Table. The definitions noted in the existing LCGuide Section 2.4 should
be reviewed/updated and moved to this section, but included with each distress discussion, not
all in one place. See the Example for JPC Transverse Cracking distress in Appendix C.
ADD: An example of how to compute the residual errors, bias and SEE (Se) should be included,
including graphs to support the examples.
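One possible form of such an example is sketched below; the measured and predicted values are hypothetical placeholders, and the degrees-of-freedom convention should match whatever the LCPMED ultimately specifies:

```python
# Minimal sketch: residuals, bias, and standard error of the estimate (Se)
# for one distress, using hypothetical measured and predicted values.
import numpy as np

measured  = np.array([0.10, 0.18, 0.25, 0.31, 0.40])   # e.g., rut depth, in
predicted = np.array([0.12, 0.15, 0.28, 0.35, 0.37])

residuals = measured - predicted
bias = residuals.mean()                                  # average residual (systematic error)

p = 1                                                    # assumed number of fitted coefficients
se = np.sqrt(np.sum(residuals ** 2) / (len(measured) - p))

print(f"bias = {bias:+.3f} in, Se = {se:.3f} in")
```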
Step 7.2 should provide an example of each hypothesis test (Difference =0, Intercept = 0, Slope
= 1) for one of the distresses noted in Step 7.1, and it should provide an example of a paired t-test
for IRI. The meaning of the p-values should be addressed. A format to document the results
should also be provided, like that shown in Figure 8.
Figure 8 - Potential format for Step 7.2 results (From PMED Local Calibration Webinar 3)
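As an illustration of the Step 7.2 checks, a minimal sketch using common Python tools is shown below; the IRI values are hypothetical placeholders, and the specific tests and significance levels should follow the LCPMED:

```python
# Minimal sketch: paired t-test on measured vs. predicted IRI, plus tests of
# intercept = 0 and slope = 1 from a simple linear regression (hypothetical data).
import numpy as np
from scipy import stats

measured  = np.array([62.0, 75.0, 88.0, 95.0, 110.0, 124.0])   # IRI, in/mile
predicted = np.array([65.0, 72.0, 90.0, 99.0, 104.0, 120.0])

# H0: mean(measured - predicted) = 0
t_paired, p_paired = stats.ttest_rel(measured, predicted)

# Regress measured on predicted, then test intercept = 0 and slope = 1
res = stats.linregress(predicted, measured)
dof = len(measured) - 2
p_slope = 2 * stats.t.sf(abs((res.slope - 1.0) / res.stderr), df=dof)             # H0: slope = 1
p_intercept = 2 * stats.t.sf(abs(res.intercept / res.intercept_stderr), df=dof)   # H0: intercept = 0

print(f"paired p = {p_paired:.3f}, slope p = {p_slope:.3f}, intercept p = {p_intercept:.3f}")
```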
ADD: Clarify the two approaches noted, and when they are appropriate. PMED Webinar 3 did
this well for JPC transverse cracking, where the basis of C1 and C2 was defined and it was
recommended to leave those values and only change C4 and C5. (See Figure 10)
This section should also include a discussion of using graphing software to look for any trends in
the data to assist in identifying cases where different underlying trends may be occurring that can be seen graphically, like the one shown in Figure 11 (Figure A2-17 of the existing LCGuide).
The new construction projects shown as diamonds in Figure 11 appear to have two distinct
patterns, one that overpredicts and one that underpredicts rutting. If the individual projects can
also be identified graphically by different factors, i.e. thickness, construction date, mix type etc.
or the individual projects can be identified and discussed with the DOT personnel involved in
their maintenance or construction, it may clearly show the underlying cause for the differences.
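A minimal sketch of this kind of graphical screening is shown below; the data values and the "mix_type" factor are hypothetical stand-ins for whatever section attributes are available in the calibration database.

```python
# Sketch (hypothetical data): screen measured vs. predicted rutting graphically,
# coloring points by a candidate factor such as mix type, to see whether distinct
# over- and under-prediction trends track that factor (as discussed for Figure 11).
import pandas as pd
import matplotlib.pyplot as plt

# In practice these values would come from the calibration database / PMS extract.
df = pd.DataFrame({
    "predicted": [0.20, 0.25, 0.30, 0.35, 0.22, 0.28, 0.33, 0.40],
    "measured":  [0.28, 0.34, 0.41, 0.47, 0.15, 0.19, 0.22, 0.27],
    "mix_type":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

fig, ax = plt.subplots()
for mix, grp in df.groupby("mix_type"):
    ax.scatter(grp["predicted"], grp["measured"], label=f"Mix {mix}", alpha=0.8)

lim = float(max(df["predicted"].max(), df["measured"].max()))
ax.plot([0, lim], [0, lim], "k--", label="line of equality")  # points above the line = underprediction
ax.set_xlabel("Predicted rutting (in.)")
ax.set_ylabel("Measured rutting (in.)")
ax.legend()
plt.show()
```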
This section also needs to cover each distress model separately, in the same order as
shown in Section 7, but this time concentrating on the calibration factors that should be adjusted
to achieve a certain outcome. Table 6-1 of the existing LCGuide provides some guidance, but
additional guidance is needed, such as that found in the existing LCGuide on page A-48 for
rutting coefficients, page A-53 for fatigue cracking coefficients, pages A-111 and A-114 for
faulting coefficients, and page A-57 for the fact that IRI needs to be adjusted after adjusting the
other distresses. Also, the existing Table 6-1 does not include all the calibration factors, and the
difference between coefficients that are used in local calibration (Option 1 in Figure 5) and
coefficients that should only be changed based on material-related properties from laboratory
testing (Option 2 in Figure 5) needs to be clearly discussed and differentiated.
Computing new standard deviation values for the distresses is not discussed in the
current LCGuide. It was discussed in PMED Webinar 3, and the method and reasoning for it
should be added to the LCGuide. Figure 12 is from that portion of Webinar 3.
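A minimal sketch of one way such a standard deviation model could be fitted is shown below; the data, the binning choice, and the power-law form (patterned after the expressions tabulated in Appendix A) are all assumptions, and the actual procedure should follow Webinar 3 and the updated Guide.

```python
# Sketch (hypothetical data, simplified method): fit a standard deviation model of
# the form s_e = a * (predicted distress)^b to the locally observed scatter.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

predicted = np.array([0.02, 0.03, 0.04, 0.07, 0.08, 0.09, 0.12, 0.13, 0.14, 0.18, 0.20, 0.22])
measured  = np.array([0.01, 0.05, 0.03, 0.09, 0.06, 0.12, 0.10, 0.17, 0.11, 0.15, 0.26, 0.19])

# Bin the data by predicted distress level and compute the standard deviation of
# the residuals within each bin (one way to characterize scatter vs. distress level).
df = pd.DataFrame({"pred": predicted, "resid": measured - predicted})
df["bin"] = pd.cut(df["pred"], bins=4)
grouped = df.groupby("bin", observed=True).agg(pred_mid=("pred", "mean"),
                                               sd=("resid", "std")).dropna()

def sd_model(x, a, b):
    # Power-law standard deviation model; a constant term could be added,
    # as in some of the forms shown in Appendix A.
    return a * np.power(x, b)

popt, _ = curve_fit(sd_model, grouped["pred_mid"], grouped["sd"], p0=[0.1, 0.5])
print("fitted a, b:", popt)
```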
Step 10.1: This step needs to be rewritten and tied to the changes in Section 7.
Step 10.2: This step needs to be rewritten using wording from Section 2 (‘blocking’ is used here)
and tied to the changes in Section 7.
Step 10.3: The purpose of Step 10.3 is unclear. Does it mean to go back to Step 8 (Section 8) and
recalibrate? If so, it should be written that way.
• How local calibration has driven changes in the software models, and therefore sharing
local calibration information is a necessary part of continuous improvement of the
software
• How State-collected cracking for both flexible and rigid pavements has been adjusted to
fit the PMED definitions (all States have had to address this, at least for flexible
cracking)
• How using cut-off values or defaults in PMS can affect local calibration, such as
Louisiana and faulting (Louisiana, pages 55-59)
• Different specific methods or procedures that have been used in performing local
calibration
o Purdue/Indiana's use of a grid search method to calibrate and identify the
optimum sample size
o Iowa's use of a sensitivity index (Iowa, page 189)
o Michigan's use of transverse profiles to assist in rutting prediction
(Michigan, pages 96-102)
o Louisiana's simplification of the rutting model (Louisiana, page 65)
REFERENCES
American Association of State Highway and Transportation Officials (AASHTO). 2008. Mechanistic-Empirical
Pavement Design Guide: A Manual of Practice, Interim Edition. Washington, DC.
American Association of State Highway and Transportation Officials (AASHTO). 2010. Guide for the Local
Calibration of the Mechanistic-Empirical Pavement Design Guide. Washington, DC.
Chow, L.C., Mishra, D. and Tutumluer, E. 2014. Aggregate Base Course Material Testing and Rutting Model
Development. FHWA/NC/3013-18. North Carolina DOT. Raleigh, N.C.
Mu, F., Mack, J.W. and Rodden, R.A. 2016. Review of National and State Level Calibrations of the
AASHTOWare Pavement ME Design for New Jointed Plain Concrete Pavement. International Journal of
Pavement Engineering.
Pierce, L.M. 2018. AASHTO Pavement ME National Users Group Meetings. Technical Report: Second Annual
Meeting-Denver, CO. FHWA. Washington, D.C.
Pierce, L.M., and McGovern, G. 2014. Implementation of the AASHTO Mechanistic-Empirical Pavement
Design Guide and Software: A Synthesis of Highway Practice. NCHRP Synthesis 457. Transportation Research
Board of the National Academies, Washington, D.C.
Von Quintus, H.L., Mallela, J., Sadasivam, S., and Darter, M. 2013. Literature Search and Synthesis—
Verification and Local Calibration/Validation of the MEPDG Performance Models for Use in Georgia.
GADOT-TO-01-Task 1. Georgia Department of Transportation, Forest Park, GA.
Wu, Z., Yang, X., and Zhang, Z. 2013. Evaluation of MEPDG Flexible Pavement Design Using Pavement
Management System Data: Louisiana Experience. International Journal of Pavement Engineering, 14(7),
674-685. DOI: 10.1080/10298436.2012.723709
The following 36 states were identified as having either performed a local calibration, being in
the process of performing a local calibration, or having a local calibration project noted in
the Transportation Research Board Research in Progress (RIP) database. Under each State name,
the authors of the latest local calibration research report are noted. The date next to the State
name is the publication date of the latest report. The software version used for local
calibration is identified if it was included in the report. The actual local calibration research
reports have been located for the states with a Report Icon (19 of the 36). These reports will
be placed on the AASHTOWare site for future reference.
ARKANSAS (In TRB RIP database; survey also noted that it was in process)
University of Arkansas (Kevin Hall) Local Calibration of the MEPDG (TRC-1003)
KENTUCKY (In TRB RIP database, survey noted they have not calibrated)
University of Kentucky (Graves, L). “Local Calibration and Strategic Plan for Implementation
of AASHTO MEPDG”, “AASHTO MEPDG Calibration Continuation”, and “MEPDG
Implementation” (three projects listed in RIP, latest starting date noted as 2009)
MAINE
(Noted in Survey that they are currently calibrating in-house)
MINNESOTA
Noted in survey only calibrated for simplified rigid pavements.
MISSISSIPPI
Survey noted that only a preliminary calibration was performed; the final report is still in
review and not yet available.
NEBRASKA
ARA (Survey noted they are currently calibrating with ARA)
NEVADA
(Noted in the Survey that they had calibrated but that the reports are not available)
NORTH DAKOTA
(Noted in the Survey that they had calibrated but did not provide any other information)
OKLAHOMA (2011)
University of Oklahoma (Hossain, Musharraf Zaman, Curtis Doiron, Steven Cross). Development
of Flexible Pavement Database for Local Calibration of MEPDG, Final Report. ODOT SPR No. 2209.
Oklahoma City, OK. [AASHTO Pavement ME Version 1.1, based on a 2015 TRB paper] {Do not
have a copy of the report}
TENNESSEE (2015)
University of Tennessee (Baoshan Huang and Xiang Shu). A summary for RES2013-33 and a thesis
were located, but no report. [The student's thesis noted that MEPDG Version 1.1 was used]
TEXAS
(Survey noted that the University of Texas/San Antonio and ARA are currently calibrating)
VERMONT (from TRB RIP, survey noted a calibration was done with PMS data)
Vermont Agency of Transportation (In-house researcher Nick Meltzer) SPR 711: Correlating
ME-PDG with Vermont Conditions, Phase II. (start date is 2010)
NCHRP Report 719 (2012) Calibration of Rutting Models for Structural and Mix Design
NCHRP RRD 308 (2006) Changes to the Mechanistic-Empirical Pavement Design Guide
Software Through Version 0.900. NCHRP Project 1-40D.
NCAT 17-07 (2017) Summary of Local Calibration Efforts for Flexible Pavements
NCAT 17-08 (2017) Impact of Local Calibration, Foundation Support, and Design and Reliability
Thresholds
Five student theses (3 PhD and 2 MS) related to local calibration were identified. Their
references are provided below.
Abdullah, Ali Qays. (2015) Development of a Simplified Flexible Pavement Design Protocol
for New York State Department of Transportation Based on AASHTO ME Pavement Design
Guide. University of Texas at Arlington. PhD diss.
Guo, Xiaolong. (2013) Local Calibration of the MEPDG Using Test Track Data. Auburn
University. MS thesis.
Nabhan, Peter. (2015) Calibration of the AASHTO MEPDG for Flexible Pavements to Fit
Nevada’s Conditions. University of Nevada, Reno. MS thesis.
Rahman, Md Shaidur. (2014) Local Calibration of the MEPDG Prediction Models for Pavement
Rehabilitation and Evaluation of Top-Down Cracking for Oregon Roadways. Iowa State University.
PhD diss. http://lib.dr.iastate.edu/etd/14295
Zhou, Changjun. (2013) Investigation into Key Pavement Materials and Local Calibration on
MEPDG. University of Tennessee. PhD diss. http://trace.tennessee.edu/utk_graddiss/2504
APPENDIX A
Global Calibration Factors (MOP 2008 and MOP 2015) and Equations
(MOP 2015)
The values for the global calibration factors over time, and the latest transfer equations affected by
these factors are included in this Appendix. Tables A-1 to A-5 show the documented flexible
global calibration factor values in the two different versions of the Manual of Practice (MOP) 2008
and 2015 and the values in the current software. Tables A-6 to A-8 show the documented JPC
global calibration factor values for the MOP versions, along with two sets of values identified
for the Task 288 and Task 327 NCHRP recalibration projects; the Task 327 values are the same as
those in the current software. Tables A-9 and A-10 show the documented CRC global calibration
factor values for the MOP versions and a version found in the Task 327 NCHRP recalibration
report. As shown, the values have changed, and calibration values have been added. The changes
in individual calibration factors are shown as shaded cells. A cell is only shaded the first time
the value changes (e.g., in Table A-2, the Coarse-Grained ks1 factor changed from the 2008 MOP to
the 2015 MOP but is the same for the 2015 MOP and the software, so it is only shaded in the MOP
2015 column).
These calibration factor values are included in Chapter 5 of the MOP and are partially noted
in the Examples in the Local Calibration Guide, but they are not summarized in either document as
they are here. It is not even clear from the Local Calibration Guide which factors are important
and what their influence is, since the equations that use the calibration factors are not included
in the Local Calibration Guide document. The Examples in the Appendix of the Local Calibration
Guide do provide some insight into the calibration factors and their influence, but this
information is embedded within the examples and not easily found.
FLEXIBLE PAVEMENTS
RIGID PAVEMENTS
Standard Deviation (faulting) row recovered from table: [0.0097*FAULT(t)]^0.5178 + 0.014;
(0.00761*FAULT(t) + 0.000008099)^0.445; 0.07162*FAULT(t)^0.368 + 0.00806
*Task 288 values shown are as defined in the Task 327 report; NNC was defined as the Task
288 values in a paper (Mu et al. 2016); the current software uses the Task 327 values.
APPENDIX B1.
Local Calibration Survey 2018: Summary
APPENDIX C
Examples for Section 7 and 8 of New Guide
The following examples are intended to provide a format/outline for the changes recommended to
Section 7 and Section 8 of the New Local Calibration Guide, as described in Section 3.2 of this
report. The examples cover only one distress (the JPC Transverse Cracking Model), but the intent
is that the New Guide will cover each distress in detail as shown here. The distress itself needs
to be defined, and the equations relating to calibration need to be shown. The specific values
that are compared to assess the global values need to be described, as shown here for Section 7.
Section 8 needs to clarify which calibration coefficients are mechanistically modeled and which
should be identified for local calibration. The specific effects of changing the calibration
coefficients need to be discussed, along with an example of how to set up a regression to perform
the calibration (if appropriate).
Transverse Cracking, Bottom-Up (JPCP)—When the truck axles are near the longitudinal edge of
the slab, midway between the transverse joints, a critical tensile bending stress occurs at the bottom
of the slab under the wheel load. This stress increases greatly when there is a high positive
temperature gradient through the slab (the top of the slab is warmer than the bottom of the slab).
Repeated loadings of heavy axles under those conditions result in fatigue damage along the bottom
edge of the slab, which eventually results in a transverse crack that propagates to the surface of the
pavement. A reasonable standard error of the estimate for total transverse cracking, or total percent
slabs cracked, is seven percent. The PMED predicts the total percent slabs cracked, which includes
both bottom-up and top-down cracking of JPCP.
Transverse Cracking, Top-Down (JPCP)—Repeated loading by heavy truck tractors with certain
axle spacing when the pavement is exposed to high negative temperature gradients (the top of the
slab cooler than the bottom of the slab) results in fatigue damage at the top of the slab. This stress
eventually results in a transverse or diagonal crack that is initiated on the surface of the pavement.
The critical wheel loading condition for top-down cracking involves a combination of axles that
loads the opposite ends of a slab simultaneously. In the presence of a high negative temperature
gradient, such load combinations cause a high-tensile stress at the top of the slab near the critical
pavement edge. This type of loading is most often produced by the combination of steering and drive
axles of truck tractors and other vehicles with similar axle spacing. Multiple trailers with relatively
short trailer-to-trailer axle spacing are the other source of critical loadings for top-down cracking.
The equations that include the four calibration factors (C1, C2, C4 and C5) for the
transverse cracking model are shown as follows:
Cracking is computed separately for top-down and bottom-up cracking, and the total cracking is
computed using the following equation:
Measured Cracking and month: The dates of the measured cracking should be identified, and the
month of each measurement is computed by calculating the time from the construction date to the
measurement date. (Note that the equation above is from the MOP 2015; the typographical
error that is in the MOP should be remedied, so that it reads CRKBottom-up, not CRKBottop-up.)
Predicted Cracking and month: % Cracking by month is computed in the software and included
in the JPCP_Cracking.xls file. The % Cracking used should correspond to the same months
defined for the Measured Cracking.
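A short sketch of this month-matching step is shown below; the dates, values, and column names are hypothetical, and in practice the predicted values would be read from the PMED cracking output file (e.g., JPCP_Cracking.xls).

```python
# Sketch (hypothetical dates and values): compute the pavement age in months for
# each cracking measurement and pair it with the predicted % slabs cracked for
# the same month.
import pandas as pd

construction_date = pd.Timestamp("2006-07-01")

# Measured cracking surveys (e.g., from the PMS)
meas = pd.DataFrame({
    "survey_date": pd.to_datetime(["2010-06-15", "2013-07-20", "2016-08-05"]),
    "measured_pct_cracked": [0.0, 2.5, 7.1],
})
meas["month"] = ((meas["survey_date"].dt.year - construction_date.year) * 12
                 + (meas["survey_date"].dt.month - construction_date.month))

# Predicted % slabs cracked by month (stand-in for the PMED cracking output table)
pred = pd.DataFrame({"month": [47, 84, 121], "predicted_pct_cracked": [0.3, 2.1, 6.4]})

paired = meas.merge(pred, on="month", how="left")  # measured and predicted at the same months
print(paired)
```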
The Measured Cracking and Predicted Cracking are compared by graphing the % cracking and
the residual errors as shown below. The measured and predicted cracking should also be
statistically compared as described in Appendix B. The global calibration factors used for the
PMED runs, R2, standard error, and N should all be documented, along with the bias and residual
errors, as described in Appendix B.
The PMEDCracking.xls document from the PMED analysis runs provides Total_BU and Total_TD
fatigue damage by month. These values are the fatigue damage predictions; the transverse cracking
transfer function then applies the C4 and C5 coefficients to those damage values to determine the
total transverse cracking prediction. The fatigue damage values themselves are not based on C4
and C5, so the values from the PMED runs computed using the global calibration factors from
Section 7 can be used directly here. Based on the TCrack equation noted in Section 7, the
predicted TCrack is:
TCrack predicted = 100/(1 + C4*(Total_BU)^C5) + 100/(1 + C4*(Total_TD)^C5)
- [100/(1 + C4*(Total_BU)^C5)] * [100/(1 + C4*(Total_TD)^C5)] / 100
And the regression is:
Ypred = TCrack predicted at month i, based on the formula above,
Ymeas = Measured % transverse cracks at month i, and
X = Total_BU + Total_TD
A typical graph of measured % cracking vs fatigue damage is shown below:
Changes in C4 are expected to reduce the bias, and changes in C5 are expected to improve the
precision of the model. The values can be adjusted individually, based on the needs identified in
Section 7 (i.e., if the global values were particularly biased or imprecise), then adjusted at the
same time, and the results compared.
An example from an Excel spreadsheet (only a portion is shown) is below. Microsoft Excel Solver
can be used to minimize the sum of the squared errors by varying calibration coefficients C4 and
C5. The resulting calibration factors, R2, standard error, and N should all be
documented, along with the bias and residual errors, as described in Appendix B.
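A scripted equivalent of that Solver setup is sketched below; the damage and cracking values are hypothetical, the starting coefficients are illustrative rather than the global values, and the transfer-function form follows the TCrack equation shown above.

```python
# Sketch (hypothetical data): vary C4 and C5 to minimize the sum of squared errors
# between measured % slabs cracked and the TCrack prediction computed from the
# Total_BU and Total_TD fatigue damage in the PMED output.
import numpy as np
from scipy.optimize import least_squares

# Fatigue damage from the PMED runs (global factors) and measured cracking, by month
total_bu = np.array([0.02, 0.05, 0.09, 0.15, 0.24, 0.35])
total_td = np.array([0.01, 0.03, 0.06, 0.10, 0.16, 0.25])
measured = np.array([0.0, 0.5, 1.8, 4.0, 8.5, 15.0])       # % slabs cracked

def tcrack(dmg_bu, dmg_td, c4, c5):
    """Percent slabs cracked from fatigue damage, combining bottom-up and top-down."""
    crk_bu = 100.0 / (1.0 + c4 * dmg_bu ** c5)
    crk_td = 100.0 / (1.0 + c4 * dmg_td ** c5)
    return crk_bu + crk_td - crk_bu * crk_td / 100.0

def residuals(coeffs):
    c4, c5 = coeffs
    return measured - tcrack(total_bu, total_td, c4, c5)

# Starting values are illustrative only, not necessarily the global coefficients.
result = least_squares(residuals, x0=[1.0, -2.0],
                       bounds=([1e-6, -10.0], [100.0, -1e-6]))
c4_opt, c5_opt = result.x
sse = float(np.sum(result.fun ** 2))
print(f"local C4 = {c4_opt:.3f}, C5 = {c5_opt:.3f}, SSE = {sse:.2f}")
```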