Nine Questions on Energy Decomposition Analysis
[a] J. Andrés
Departament de Ciències Experimentals Universitat Jaume I, 12080,
Castelló, Spain
[b] P. W. Ayers
Department of Chemistry and Chemical Biology, McMaster University, 1280
Main Street West, L8S 4M1, Hamilton, Ontario, Canada
[c] R. A. Boto
CICECO, Aveiro, Portugal
[d] R. Carbó-Dorca
Institut de Química Computacional i Catàlisi, Universitat de Girona, C/M
Aurelia Capmany 69, 17003, Girona, Spain
[e] H. Chermette
Université Lyon 1 et UMR CNRS 5280 Institut Sciences Analytiques,
Université de Lyon, 69622, Villeurbanne, France
[f] J. Cioslowski
Institute of Physics, University of Szczecin, Wielkopolska, 15, 70-451,
Szczecin, Poland
[g] J. Contreras-García, J. Pilmé, B. Silvi
Sorbonne Université, CNRS, LCT, UMR 7616, 4 place Jussieu, 75005, Paris,
France
E-mail:
[email protected]
[h] D. L. Cooper
Department of Chemistry, University of Liverpool, Liverpool, L69 7ZD,
United Kingdom
[i] G. Frenking
Fachbereich Chemie, Philipps-Universität Marburg, Hans-Meerweinstr. 4,
35032, Marburg, Germany
[j] C. Gatti
CNR-ISTM Istituto di Scienze e Tecnologie Molecolari, via Golgi 19, 20133,
Milan, Italy and Istituto Lombardo Accademia di Scienze e Lettere, via Brera
28, 20121, Milan, Italy
[k] F. Heidar-Zadeh
Physics and Materials Science Research Unit, University of Luxembourg,
L-1511 Luxembourg, Luxembourg and Department of Chemistry, Queen’s
University, Kingston, Ontario K7L 3N6, Canada
[l] L. Joubert, V. Tognetti
COBRA UMR 6014 & FR 3038, INSA Rouen, CNRS, Université de Rouen
Normandie, Mont-St-Aignan, France
[m] A. Martín Pendás
Departamento de Química Física y Analítica, Universidad de Oviedo,
33006, Oviedo, Spain
[n] E. Matito, E. Ramos-Cordoba
Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU), and Donostia
International Physics Center (DIPC), P.K. 1072, 20080, Donostia, Euskadi,
Spain
[o] E. Matito
IKERBASQUE, Basque Foundation for Science, 48011, Bilbao, Euskadi, Spain
[p] I. Mayer
Institute of Organic Chemistry, Research Centre for Natural Sciences,
Hungarian Academy of Sciences, Budapest 1117, Hungary
[q] Alston J. Misquitta
School of Physics and Astronomy, Queen Mary University of London, Mile
End Road, London E1 4NS, United Kingdom
[r] Y. Mo
Chemistry Department, Western Michigan University, Kalamazoo, Michigan,
49008
[s] P. L. A. Popelier
Manchester Institute of Biotechnology (MIB), 131 Princess Street, Manchester
M1 7DN, United Kingdom
[t] P. L. A. Popelier
School of Chemistry, University of Manchester, Oxford Road, Manchester
M13 9PL, United Kingdom
[u] M. Rahm
Department of Chemistry and Chemical Engineering, Chalmers University of
Technology, 412 96, Gothenburg, Sweden
[v] P. Salvador
Institut de Química Computacional i Catàlisi, Universitat de Girona, C/M
Aurelia Capmany 69, 17003, Girona, Spain
[w] W. H. E. Schwarz
Theoretical Chemistry Center at Tsinghua University, Beijing 100084, China
[x] W. H. E. Schwarz
Physical and Theoretical Chemistry Laboratory, Faculty of Science and
Engineering, University of Siegen, Siegen 57068, Germany
[y] S. Shahbazian
Department of Physics, Shahid Beheshti University, P.O. Box 19395-4716,
G. C., Evin, 19839, Tehran, Iran
[z] M. Solà
Institut de Química Computacional i Catàlisi, Universitat de Girona, C/M
Aurelia Capmany 69, 17003, Girona, Spain
[aa] K. Szalewicz
Department of Physics and Astronomy, University of Delaware, Newark,
Delaware
[bb] F. Weinhold
Theoretical Chemistry Institute and Department of Chemistry, University of
Wisconsin-Madison, Madison, Wisconsin, 53706
[cc] E. Zins
Sorbonne Université, UPMC Univ. Paris 06, MONARIS, UMR 8233, Université
Pierre et Marie Curie, 4 Place Jussieu, Case Courrier 49, 75252, Paris, France
The paper collects the answers of the authors to the following
questions:
1. Is the lack of precision in the definition of many chemical
concepts one of the reasons for the coexistence of many
partition schemes?
2. Does the adoption of a given partition scheme imply a set
of more precise definitions of the underlying chemical
concepts?
3. How can one use the results of a partition scheme to
improve the clarity of definitions of concepts?
4. Are partition schemes subject to scientific Darwinism? If so,
what is the influence of a community’s sociological pressure
in the “natural selection” process?
5. To what extent do/can/should the investigated systems influence the choice of a particular partition scheme?
6. Do we need more focused chemical validation of Energy
Decomposition Analysis (EDA) methodology and descriptors/
terms in general?
7. Is there any interest in developing common benchmarks and
test sets for cross-validation of methods?
8. Is it possible to contemplate a unified partition scheme (let
us call it the “standard model” of partitioning), that is proper
for all applications in chemistry, in the foreseeable future or
even in principle?
9. In the end, science is about experiments and the real world.
Can any experiment or experimental data, therefore, be used to favor one partition scheme over another? © 2019
Wiley Periodicals, Inc.
A few bars of introduction
Bernard Silvi
During the preparation of the second European Symposium on Chemical Bonding, held last summer in Oviedo, I was asked by the organizers to propose a reflection topic for the Bond Slam session. I chose Energy Decomposition Analyses (EDAs) because these methods are among the most useful, as well as the most controversial, tools for gaining insight into the electronic structure of molecules. I further thought that it would be helpful to collect opinions on the epistemological issues raised by these methods. I had the experience of a collective paper in which a panel of scientists was invited to give their opinions on the topological approaches in Theoretical Chemistry,[1] and I proposed to renew this exciting approach. Nine questions have been selected and proposed to the contributors.
Bernard Silvi, Eduard Matito, and Martin Rahm
We begin with a brief overview of EDA methodology and development. This introduction is not meant to be a comprehensive review, and the interested reader is encouraged to consult the literature for more details. One comprehensive description of several different EDA schemes can be found in the article of Phipps et al.[2] In their 2015 review, Phipps et al. describe two groups of EDAs, classified according to the nature of their underlying theories. Perturbation-based methods express the interaction energy in terms of corrections to a noninteracting description. Variational-based methods explicitly require the use of intermediate fragment wave functions corresponding to idealized nonphysical situations.
Perturbation-based methods stem from the theory of intermolecular forces pioneered by Eisenschitz and London in 1930.[3,4] The method has been consistently improved over decades,[5–14] yielding the Symmetry-Adapted Perturbation Theory (SAPT), which appears to be the latest link in this evolutionary process.[15–18] For each order of perturbation, the energy contributions are derived directly from the expressions of the perturbation operators. In this way, perturbation-based energy terms can be related to physical effects. Put differently, these methods explain interactions in terms of physical arguments based on properties of the monomers. For this reason, and by construction, perturbation-based methods only apply to weak interactions, where they are typically quite successful.[19,20]
Variational-based EDAs require quantum chemical descriptions of the entire system as well as of the considered fragments. This approach is able to treat weak intermolecular interactions as well as multiple kinds of bonds.[21,22] A decomposition of the interaction energy of the water dimer was published in 1957 by C. A. Coulson.[23] However, the first acknowledged variational EDA method is due to Morokuma and co-workers.[24–27] This first EDA method is limited to the Hartree–Fock level of theory and suffers from the presence of nonphysical contributions to the interaction energy. The latter problem is significantly reduced in the reduced variational space (RVS) EDA.[28] A few years after Morokuma's seminal article, Ziegler and Rauk proposed a decomposition of the energy calculated in the framework of the Hartree–Fock–Slater method, known as the Extended Transition State (ETS) EDA.[29,30] This method enables the analysis of both weaker and stronger bonds. There are several approaches in which localized orbitals are used to define EDAs: for example, Natural Energy Decomposition Analysis (NEDA),[31,32] Block-Localized Wave functions (BLW-EDA),[33,34] Fragment Molecular Orbitals (FMO) in the Pair Interaction Energy Decomposition Analysis (PIEDA),[35] Absolutely Localized Molecular Orbitals in ALMO-EDA,[36–39] and Natural Orbitals for Chemical Valence[40] (NOCV-EDA).[41] The use of variational-based EDAs is mostly limited to HF and DFT calculations, although post-Hartree–Fock correlated variants do exist (for example, for partitioning energies within the Local Pair Natural Orbital Coupled Cluster framework,[42] for MP2 wave functions,[43] and for evaluating dispersion corrections[44]).
A third family of EDAs only requires electronic data (wave functions or electron densities) of the entire system. The "Chemical Hamiltonian" approach[45] makes use of atomic projection operators to express the total Hamiltonian as a sum of one-center and two-center terms. The resulting energy decomposition yields true intra-atomic and true interatomic energy components and basis extension terms.
As the projection operators are defined in the LCAO-MO formalism,
the method is restricted to this kind of calculation. In position
space partitioning, the interaction energy is defined in terms of
contributions of the one-electron and of the two-electron density
distribution functions. The considered domains may have either
sharp or fuzzy boundaries. The quantum theory of atoms in molecules (QTAIM) considers domains bounded by zero-flux surfaces
of the density as defined by Bader, which are open systems that,
among other properties, have associated energies.[46] In this
sense, the QTAIM provides an atomic partition. An extension of this partition that considers atomic and diatomic terms, as well as individual energy components, is known as the Interacting Quantum Atoms (IQA) scheme[47–50]; it likewise employs domains bounded by zero-flux surfaces of the density as defined by Bader.[46] However,
other nonoverlapping partition schemes, such as ELF basins,[51]
can be considered at the expense of the determination of the
fragment kinetic energies. Methods for fuzzy atom partitioning
have been developed by Salvador and Mayer.[52,53]
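Schematically, and using common IQA-style notation rather than that of any particular implementation, such a real-space partition writes the total energy as a sum of intra-atomic (self) and interatomic (interaction) terms,
\[
E \;=\; \sum_A E_{\text{self}}^{A} \;+\; \sum_{A>B} E_{\text{int}}^{AB},
\qquad
E_{\text{int}}^{AB} \;=\; V_{nn}^{AB} + V_{ne}^{AB} + V_{en}^{AB} + V_{ee}^{AB},
\]
where \(E_{\text{self}}^{A}\) collects the kinetic energy of domain A and all interactions internal to it, and \(V_{ee}^{AB}\) can be further split into classical (Coulomb) and exchange–correlation parts.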
Finally, Experimental Quantum Chemistry (EQC) is an energy-based partitioning that is unique in that it can interchangeably rely on quantum chemical calculations, at any level of theory, as well as on experimental thermochemical data, vibrational and photoelectron spectroscopy, and X-ray diffraction and absorption measurements.[54,55] EQC, which was developed by Rahm and Hoffmann, makes use of observable "reference frames." For example, when studying chemical bonding, the EQC partitioning refers to the bond dissociation process. EQC is, in principle, applicable to any chemical or physical transformation, yet
consists of relatively few energetic terms. One of these terms is
defined as the electronegativity of the system.[56]
EDA methods are multipurpose methods used to quantify,
characterize, and explain interactions between fragments of
quantum systems. Whereas we have not mentioned all varieties
here, most EDAs can provide important pieces of information
on chemical interactions. This class of methods can also have
other practical uses, maybe most notably for the design of
molecular mechanics force fields.[14,57,58] The explanations
offered by EDAs are seldom “chemical explanations” in the
sense that interactions are not directly described in terms of
“atom in molecules” chemical properties, such as electronegativity, valence, ionic, and covalent radii. The latter properties are
determined by the location of the constituent atoms in the
periodic table. Most EDA methods instead rely on either physical or quantum chemical concepts. In EDAs, interaction energies
are described as the sum of contributions arising from a
sequence of equations that typically are specific to each
method. The physical (or quantum chemical) interpretations of
these equations, and the balance of the resulting energy contributions, yields a deterministic explanation focused on the dominant terms. Challenges for EDA methodology and future
development of the field are what we will discuss in this article.
Alston Misquitta and Krzysztof Szalewicz
We will address the nine questions from the perspective of
SAPT, sometimes also called exchange perturbation theory.
SAPT has been presented in detail in many original
papers,[15,59–68] reviews,[16,18] and textbooks.[20,69,70] Thus, there
is no need for its extensive description here. However, we will
briefly lay out the main features of SAPT, particularly those that
do not seem to have been understood in the non-SAPT literature. We will also dispel some myths about SAPT as these can
cloud our understanding of this method in the context of the
questions posed in this article.
SAPT is a perturbation theory that calculates interaction
energy directly, starting from isolated monomers. Thus, in contrast to the supermolecular approach, no subtractions are
involved and, in consequence, SAPT is free of basis-set superposition error. The interaction operator V is the sum of all Coulomb
interactions between particles of different monomers. The simplest approach is to use Rayleigh–Schrödinger (RS) perturbation
theory, but such an approach is unphysical at short separations
since it does not predict the existence of the repulsive wall. To
include repulsion, one has to properly antisymmetrize the cluster
wave function, that is, enforce Pauli’s exclusion principle,
resulting in SAPT. There are several ways to perform such adaptation, the simplest one is to symmetrize the wave functions of
the RS method, leading to symmetrized RS (SRS).[59]
SAPT is the theory of intermolecular forces and most textbooks discussing intermolecular interactions use SAPT concepts
even if the SAPT acronym is not mentioned. SAPT is also known
under several other names, for example, the effective fragment
potential (EFP) method.[71] SAPT provides what can be called
the standard model of EDA for intermolecular interactions (also
called noncovalent interactions). This is because SAPT by design
defines the interaction energies in terms of electrostatic, induction (polarization and charge-transfer, or charge-delocalization),
dispersion, and exchange terms. These terms are defined in a
unique way and can be calculated with potentially arbitrary
accuracy and at complete basis set (CBS) limits.
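Schematically (standard SAPT labels; the precise truncation varies between implementations), through second order in the intermolecular operator V the interaction energy is assembled as
\[
E_{\text{int}} \;\approx\; E_{\text{elst}}^{(1)} + E_{\text{exch}}^{(1)} + E_{\text{ind}}^{(2)} + E_{\text{exch-ind}}^{(2)} + E_{\text{disp}}^{(2)} + E_{\text{exch-disp}}^{(2)},
\]
with each correction further expandable in the intramonomer correlation operator discussed next.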
Since exact wave functions are unknown for larger monomers, SAPT approximates these functions at several available
levels of electronic structure theory. In fact, SAPT is a double
perturbation theory with the other perturbation due to the
intramonomer correlation operator W = WA + WB, where WX is
the Møller–Plesset (MP) fluctuation potential of monomer X. If
W is neglected, monomers are described at the Hartree–Fock
(HF) level. Higher levels include consecutive powers of W, possibly with selective summations to infinite order applying the
coupled-cluster (CC) method. A version of SAPT, denoted as
SAPT(DFT), uses monomers described at the Kohn–Sham
(KS) density-functional (DFT) level (however, interaction energies are computed using wave-function theory).
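In the symmetrized Rayleigh–Schrödinger formulation this becomes a double expansion (a schematic form, in which n counts the order in V and j the order in W),
\[
E_{\text{int}} \;=\; \sum_{n=1}^{\infty} \sum_{j=0}^{\infty} \left( E_{\text{pol}}^{(nj)} + E_{\text{exch}}^{(nj)} \right),
\]
so that, for example, \(E_{\text{elst}}^{(1)}\) above is shorthand for the sum of the \(E_{\text{pol}}^{(1j)}\) terms over j.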
While SAPT has become a mainstream electronic structure
method, various myths about SAPT are in circulation and these
are far from the truth. In particular, it is sometimes stated that
SAPT is “just” one more EDA method, useful only for interpreting interaction energies. This is not strictly correct. An EDA
takes a total interaction energy and, by various manipulations
of the density matrices or basis sets, seeks to decompose this
energy into terms that reflect something physical. What is done
in SAPT is conceptually quite different: one starts from physical
components, term-by-term, and sums them up to get the total
interaction energy. This process does not involve any arbitrary
choices. Thus, instead of decomposing a quantity, SAPT assembles it from well-defined components. What is important, if this
assembly is performed up to sufficiently high order (second
order in V and third order in W are adequate for most purposes), SAPT gives interaction energies similarly accurate to
those given by the supermolecular approach in the fourth order
(MP4) or the CC method with single, double, and noniterative triple excitations [CCSD(T)].
SAPT, in fact, is an exact method in the sense that it reproduces the exact interaction energy as the order in V goes to
infinity provided that appropriate symmetry-enforcing techniques and exact monomer wave functions are used (or the
number of excitations in the CC method used to describe
monomers approaches the number of electrons in each monomer). These statements are based on high-order calculations for
small systems, see Ref. [65] for a review of this work.
Since SAPT is a perturbation expansion in powers of V, one
may expect that SAPT will start to diverge as R becomes small
and consequently V is no longer a small perturbation. Apparently, with proper symmetry enforcing, this divergence is not
observed even for interactions as strong as those characteristic
of chemical bonds.[65] Also, low-order SAPT calculations for
larger systems show only a minimal worsening of convergence
when one goes to small R’s, see Ref. [72] for an analysis of the
Ar2 SAPT results at R = 1.5 Å, where the interaction energy is
more than three orders of magnitude larger than the absolute
value of this quantity at the van der Waals minimum, R = 3.76 Å.
Another myth is that the programmed general version of
SAPT[73–75] is at best equivalent to MP2. In fact, the version of SAPT that is applicable to interactions of arbitrary closed-shell molecules and some open-shell ones includes terms up to
third-order in V and a high-order treatment of electron correlation in monomers. By analyzing individual terms, one can show
that this version of SAPT is approximately equivalent to
CCSD(T). This is confirmed by the agreement between these
two methods to within a few percent found in most calculations. In particular, for the helium dimer, the calculations of
Refs. [76,77] performed at CBS limits and including also a
benchmark all-order calculation estimated to be accurate to
about 0.01% show that the potential from SAPT at the level
available in the SAPT codes[73] and the CCSD(T) potential are
similarly accurate, although SAPT is slightly more accurate at
the van der Waals minimum.
A broader comparison of SAPT with CCSD(T) was performed
in Ref. [78]. In this work, CCSD(T)/CBS benchmarks were computed for 10 dimers, containing up to 28 atoms, varying R from
asymptotic to repulsive configurations. The median unsigned
percentage error of SAPT(DFT) is only about 1% larger than that of CCSD(T), both methods computed in the same basis set [the CCSD(T) error here is, of course, entirely due to basis set incompleteness]. SAPT(DFT) performs significantly better than
all other DFT-based methods investigated in Ref. [78].
The next myth is that SAPT(DFT) is an approximation to the KS DFT supermolecular approach. While it should already be clear from the discussion above that this is not the case, a dramatic illustration is presented in Refs. [72,79]. Figure 1 in Ref. [72] shows that the CCSD(T) and SAPT [SAPT(DFT) is almost indistinguishable from SAPT] potential energy curves for Ar2 are very
close to each other. In stark contrast, supermolecular DFT calculations produce curves spread all over the place.
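For readers who wish to inspect these assembled components in practice, the sketch below uses the open-source Psi4 package; the SAPT0 level, the jun-cc-pVDZ basis, the example water-dimer geometry, and the printed variable names are assumptions tied to that particular code, and any other SAPT implementation would serve equally well.

import psi4

# Hydrogen-bonded water dimer; the "--" separator defines the two monomers.
dimer = psi4.geometry("""
0 1
O  -1.551007  -0.114520   0.000000
H  -1.934259   0.762503   0.000000
H  -0.599677   0.040712   0.000000
--
0 1
O   1.350625   0.111469   0.000000
H   1.680398  -0.373741  -0.758561
H   1.680398  -0.373741   0.758561
units angstrom
""")

psi4.set_options({"basis": "jun-cc-pvdz", "scf_type": "df"})
psi4.energy("sapt0")  # assembles E_int term by term; no supermolecular subtraction

# Each physically defined component is exposed as a Psi4 variable (in hartree).
for name in ("SAPT ELST ENERGY", "SAPT EXCH ENERGY",
             "SAPT IND ENERGY", "SAPT DISP ENERGY",
             "SAPT TOTAL ENERGY"):
    print(name, psi4.variable(name))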
Question 1: Is the Lack of Precision in the
Definition of Many Chemical Concepts One of
the Reasons for the Coexistence of Many
Partition Schemes?
Ramon Carbó-Dorca
Lack of precision is a mild way of putting it. Chemistry carries a heavy historical load of intuitive concepts that possess no well-defined physical basis. If such a physical basis must rely on quantum mechanics, then only isolated atoms, molecules, and molecular swarms (a molecule surrounded by other molecules, for instance) can be considered chemically well defined. These systems cannot be separated into parts or fragments from the quantum mechanical point of view. Atomic and bond contributions to the electronic energy of molecular systems are to be considered approximate and mainly tied to the LCAO MO theoretical structure within the Born–Oppenheimer approach. A simple situation might illustrate the difficulties of partitioning the energy (or other molecular characteristics). In LCAO MO theory one may use AO or basis set functions in a general manner, allowing them to be centered not on an atom but on any point within the three-dimensional space of the molecular neighborhood. For example, one can choose the molecular center of charge, or better still the center of charge of every atomic pair, as a locus on which to center one-electron basis functions. In this case, one-center contributions can be defined that cannot be associated with any physical atomic electron source. At the same time, there could be bicentric partition contributions made up of a hybrid basis set center and the physical atom centers. This illustrates the arbitrariness of any partition scheme.
Shant Shahbazian
It is usually perceived that evolution of a “qualitative” chemical
concept to a “quantitative” one is the hallmark of precision; this
is the business of the indices in computational chemistry, for
example, indices probing the presence and/or strength of
bonding/aromaticity. However, lack of precision may have
another face: “over-quantification”, which is mathematically rigorous but “nonequivalent” definitions to the of a chemical concept; the mentioned bonding and aromaticity indices belong to
this category (although sometimes it is tried to sell this nonequivalence as revelation of the “complementary” nature of
definitions, it seems hard to conceive how “contradictory”
results must be avoided in case studies). There is no bound on
the number of proposed indices, so the over-quantification
grows with time and ruins the whole initial program of
reaching precision by the qualitative to quantitative transition.
This is the same situation for a large number of proposed
energy partitioning schemes in the last decades. Therefore, the
reverse question is more legitimate to me: “Is the coexistence
of many partitioning schemes the reason for lack of precision in
the definition of many chemical concepts?”. As I stressed
elsewhere,[80] as long as one does not have a comprehensive theory for the concept of interest and merely tries to make the qualitative-to-quantitative transition intuitively, the problem of over-quantification will prevail.
István Mayer
Physicists consider the molecule as a set of electrons and nuclei; chemists treat it as a set of chemically bonded atoms.
These conceptually different approaches can be connected by
performing partition of different physically (chemically) relevant
quantities in terms of atoms or pairs of atoms. However, while
the electrons and nuclei used in the calculations are quite well-defined entities (at least at the energies relevant to chemistry),
the individual atoms within a molecule are in some sense only
constructions of the human mind. They represent very good
generalizations of the enormous chemical experience but, if
looking closely, are somewhat fuzzy concepts: one cannot tell exactly where one atom ends and the other begins—if that question has a meaning at all. The absence of a
unique definition of the atom within the molecule makes inevitable “the coexistence of many partition schemes”. (Note that
the bond order index also emerges from a partitioning: it is the
integral of the diatomic component of the exchange density.[81])
Martin Rahm
I think so, yes. However, I do not see an inherent problem with
trying to quantify the same “fuzzy” concepts in several different
ways. Future cross-comparison efforts, discussed in questions
six and seven, will hopefully indicate which precise EDA definitions are more predictive and chemically useful.
Frank Weinhold
On the contrary, we contend that the dubious physical assumptions underlying EDA partitions (i.e., existence of mutually
exclusive and simply additive “components” whose labels correspond to chemical concepts as broadly understood) are the
issue. This is particularly so when the partition is formulated in
terms of nonorthogonal “reference fragment” orbitals and their
attendant conceptual ambiguities. Dubious premises lead inevitably to a multiplicity of (equally dubious) EDA partitions.
Angel Martín Pendás
In a sense, the lack of precision in defining concepts will inevitably lead to different partition schemes. However, there are
several levels at which differences will arise, and much as in
other fields of Chemistry, a set of minimal rules for an EDA to
be acceptable for the community should be given. Are there
references? If so, are they well defined, or may they be chosen
at will by the user? Similarly, are there intermediate states from
which particular energy components are defined? If so, are they
well behaved, that is, are they compliant with the quantum
mechanical framework? It is my opinion that, in many cases, it
is not the fuzziness of chemical concepts that multiplies the
number of available EDAs, but on the contrary, the somewhat
forced construction of partitioning schemes fitting available
computational or methodological levels.
Julien Pilmé
Yes, it can be argued that the lack of precision or the lack of
physical basis of some simple chemical concepts such as the
“lone pair” concept, promotes the coexistence of numerous partition schemes. However, even if these concepts were better
defined, one might think that the abundance of partition
schemes will be sustained due to the difficulty to build a rigorous bridge between a unique definition of an atom and the
quantum mechanics. Therefore, the lack of clear relationship
between the chemical concepts and the quantum mechanics
leads to an arbitrary character in the definition of partitions dictated by a compelling need to rationalize the diversity of interactions observed in the matter at the microscopic level.
Carlo Gatti
In my view, rather than the lack of precision in the definition of
many chemical concepts, it is the quite different perspective of
the various partitioning schemes that leads to and motivates, to
some extent, their coexistence. Indeed, broadly speaking,
energy decomposition analyses (EDAs) may be grouped in two
main categories, according to whether the decomposition is
performed in Fock (orbital) space or in the position space R3,
using some convenient partitioning of R3 in subdomains
(e.g., QTAIM). Advantages and shortcomings of the two
approaches have been masterfully analyzed and discussed by
Martín Pendás et al.[82] In the former, attention is directed to
the composing energies of the (often fictitious) intermediate
steps through which the analyzed system is formed from some
initial moieties, whereas in the second kind of approaches,
attention is focused on dissecting intra and intersubdomains
energy contributions for the very final step of such system (and
with a similar analysis performed on a given initial step of the
system, if the approach is applied to the interaction energy
also). The proposers of the real-space EDAs are like film directors or mystery writers who focus on and analyze the last scene of the movie, or of the murder, based only on what they see, using unbiased scissors and zoom lenses. Instead, those of
the orbital space EDAs are eager to reconstruct a sequence of
Gedanken facts which have led the actors to the final outcome.
Actors keep changing (and often losing) their identity through
this process and in most cases, represent purely imaginary characters. Clearly, in the case of “orbital space” EDAs, the film
directors or mystery writers enjoy more freedom in their work
and their products may differ from each other more widely and raise more vibrant, yet often nonsensical, debates.
Paul Popelier
Yes, is the short answer. I can think of one important example
of a chemical concept that causes a proliferation of EDAs. The
concept is that of the molecule itself and, in particular, the
identity of a molecule when in close contact with other molecules. More precisely, nontopological EDAs suffer from an
unclear definition of a molecule at short range. At close intermolecular distances, the separation of charge transfer and
polarization then becomes increasingly ill-defined. Of course, at
long range, the identity of a molecule is not problematic. RS
perturbation theory is based on this clear idea of a molecule at
long range but this theory’s vulnerability is that it breaks down
at short range. In that regime, molecules stop “owning” their
electrons and strong delocalization (i.e., exchange) starts spoiling
the classical picture of what a given molecule is within a molecular assembly. In other words, if one is uncomfortable with
finite (bounded) subsystems (i.e., the “real space” approach)
then the challenge is to determine where a given molecule
stops and starts. On the other hand, if one is inclined toward
infinite and overlapping subsystems (i.e., the “fuzzy” or “orbital”
or “Hilbert space” approach) then the challenge is to determine
which orbitals or basis functions still belong to the molecule in
question. In any event, without a clear decision on the matter
of how to carve out a molecule from a molecular assembly, one
will face ambiguities down the line, such as the one mentioned
above. It is important to make the right decision upfront in
order to avoid issues that need fixing later. I still like to think
that the topological partitioning offers a clear definition of a
molecule at short range and is thus a good starting point for an
EDA. Indeed, IQA defines charge transfer and polarization in a
well-defined way.
Finally, I have to comment on Carlo's nice metaphor. Of
course, at first sight, it is true that IQA does not invoke any
intermediate steps unlike nontopological EDAs. However,
strictly speaking, this is not really true, depending on one’s
starting point. Granted, IQA does not introduce a state that violates the Pauli principle but it could do this, and still apply its
topological partitioning to it. To me, how one partitions and
which reference states one brings in are two independent
things. In fact, one can argue that IQA also refers to an artificial
reference state, through the fine structure of the second-order
reduced density matrix. This object contains an electrostatic
part, an exchange part, and an electron correlation part. The latter is a by-product of the fictitious Hartree–Fock state, while the
exchange part can be seen as a by-product of the fictitious Hartree state. So maybe the last scene of the IQA movie had some
predecessor scenes after all…
Pedro Salvador
To some extent, yes. But I think it is worth starting by pointing
out that we are referring here to two main families of energy
decomposition schemes, as the reason for not having a unique,
unambiguously defined, scheme is different in each case. I do
not consider either of these two approaches superior from a
conceptual point of view, they merely provide different insight.
On one hand, there are approaches that decompose the total
energy into a number of global contributions that bear some
physical/chemical significance. In this case, the main issue
appears to be that some of these global energy contributions
are defined with respect to a reference. Another family of
methods decomposes the total energy into domain contributions, the latter often identified with the atoms within the
molecule. In this case, we essentially have different realizations
of the same scheme, using one or another atom-in-molecule
(AIM) definition. Indeed, by introducing real-space atomic
weight functions one can accommodate both disjoint and overlapping AIM approaches. Furthermore, there is no need to further distinguish between “Hilbert-space” and real-space
methods in this context. Some years ago[83,84] it was shown that
on the (numerical) one-electron basis set formed by the socalled effective atomic orbitals (obtained for a given real-space
AIM definition), the classical “Hilbert-space” and the real-space
formulae yield exactly the same results (even beyond the
LCAO-MO framework, for example, for plane-wave calculations[85]). Hence, the arbitrariness in this scheme comes solely
from the definition of AIM, as is the case of many other descriptors such as partial atomic charges or, to some extent, bond
orders.
However, it is also worth pointing out that, contrary to, for
example, electron population analyses, energy decomposition
schemes may differ from one particular electronic structure
method to another, just because the way the total energy is determined can also be different. This adds an additional source of
ambiguity even if we merely consider the case of formally exact
theories such as density functional theory or full-CI methods.
Jerzy Cioslowski
The coexistence of many schemes is a direct consequence of
the concept of energy partitioning being replete with ambiguities. To begin with, it encompasses two very different
approaches, namely, (1) partitioning into contributions due to
physical phenomena and (2) partitioning into contributions due
to subsystems and clusters comprising them. This observation,
already made by Pedro Salvador in his answer above, deserves
further elaboration.
In the first case, the interaction between two subsystems is
analyzed by application of a sequence of contrived processes
such as geometry relaxation, charge transfer, polarization,
etc…, each giving rise to a particular energy component. It is
important to understand the distinction between the terms
“sequence” and “superposition” in this context as the contributions of these processes to the total energy are not commutative. To further complicate matters, not all permutations among
the members of the sequence in question are allowed. Thus, for
example, one can relax the nuclear and electronic degrees of
freedom in arbitrary order (by unfreezing the geometries of
subsystems while keeping their electron densities frozen, or
vice versa), each time obtaining different values of the respective energy components. On the other hand, if one defines
polarization as complete relaxation of the electron density and
charge transfer as relaxation of the total charges of the subsystems, then obviously the latter has to precede the former.
To summarize, the plethora of the possible energy partitioning
schemes of the first kind arises not only from certain degree of
arbitrariness in the definitions of the individual physical processes (polarization, charge transfer, etc…) but also from the
(limited) arbitrariness of the order in which they are applied.
Another layer of ambiguity is added by the necessity of
specifying the definition of the partitioning of the electronic
properties into subsystem contributions that underlie the entire
energy partitioning scheme.
In the second case, one attempts to write the total energy as
a sum of contributions due to individual subsystems, pairs of
subsystems, clusters of three subsystems, and so on. Usually,
these subsystems are atoms or functional groups. Again, all of
such schemes derive from particular definitions of properties of
atoms in molecules. However, there is another conceptual difficulty that has to be considered. The electronic Hamiltonian is
composed of one- and two-particle terms that give rise to the
respective one- and two-electron energy densities. Integration
of these densities over the entire Cartesian space produces the
corresponding energy components (i.e., kinetic, electron–
nuclear attraction, and electron–electron repulsion) whose
atomic and diatomic contributions are obtained by analogy
upon multiplication of the integrands by atomic projection
functions (whose sum over all atoms equals one; the functions
themselves can be smooth or not). Consequently, the atomic
contributions can be defined for all three energy components, whereas the diatomic contributions arise strictly from the
electron–electron repulsion energy (though one may further
partition the electron–nuclear attraction energy by separating it
into terms due to individual nuclei). Thus, the “many-body”
energy contributions due to clusters of more than two atoms
cannot be defined in a meaningful way, which runs contrary to
expectations from chemists (who would often prefer to deal
with atomic and diatomic energies transferable from one system to another, the residual energy being accounted for by
interactions involving more than two atoms) and physicists
(who are inspired by the cluster expansions of energies of systems composed of noble gas atoms, etc…). Even worse, this
observation appears at first glance to contradict the results
of perturbative treatments of the dispersion interactions that
produce closed-form expressions for, for example, three-body
interactions (the Axilrod-Teller potential).
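A minimal sketch of the second kind of scheme just described, assuming atomic weight functions \(w_A(\mathbf{r})\) that sum to one at every point (nuclear–nuclear repulsion and the further splitting of the electron–nuclear term by individual nuclei are left out):
\[
E_{\text{el}} \;\approx\; \sum_A \int w_A(\mathbf{r})\,\varepsilon_1(\mathbf{r})\,d\mathbf{r}
\;+\; \tfrac{1}{2}\sum_{A,B} \iint \frac{w_A(\mathbf{r}_1)\,w_B(\mathbf{r}_2)\,\rho_2(\mathbf{r}_1,\mathbf{r}_2)}{|\mathbf{r}_1-\mathbf{r}_2|}\, d\mathbf{r}_1\, d\mathbf{r}_2,
\]
where \(\varepsilon_1\) collects the one-electron (kinetic and electron–nuclear) energy densities and \(\rho_2\) is the pair density; the A = B terms are the atomic and the A ≠ B terms the diatomic contributions discussed above.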
Gernot Frenking
The coexistence of many partition schemes is due to the fact
that different concepts and bonding models exist, which come
from different viewpoints and which address different questions.
Chemical bonding is a very complex phenomenon, which can be
interpreted in various ways. There will always be several chemical
concepts in chemical research, which consider diverse aspects of
chemical phenomena. Chemical concepts and bonding models
are not right or wrong, but they are more or less useful and the
usefulness depends on the question that is asked. Partition
schemes are tools in the arsenal of bonding models, which serve
as a bridge between the numerical results of quantum chemical
calculations and the human desire to understand them in terms
of classical concepts. Chemistry has rather a problem with the
coexistence of (1) historically developed and poorly defined heuristic concepts and (2) more recently suggested quantum chemical partitioning schemes. The conclusions of the two models
may contradict each other. In such cases, it is important to examine the origin of both approaches.
Julia Contreras
Indeed, there is an inherent lack of precision in chemistry, but I would not call it "physical," but rather "mathematical." It is mathematically impossible to univocally define an atom in a molecule or the point-wise energy in a molecule. However, I completely disagree that there is no physics behind it. There is physics behind it from the moment people were able to find patterns and make predictions in chemistry in terms of atoms and functional groups well before Quantum Chemistry entered the scene. This makes some partitions more physically sound than
others, and the use of some bases more physically sound than
others. Atomic basis sets are used in molecules because atoms
are still a good physical entity/model. Although mathematically
we could choose any function, atomic basis sets (located at the
atomic nuclei) are a good option to describe the molecule in the
sense that they accelerate convergence and reduce the number of functions needed. This basically means there is a physical truth
behind the choice, even though other functions would also be
mathematically good options to expand a function. Of course,
this is not elegantly mathematically defined, but one should not
forget the physics behind it, or deny the predictive power of
chemistry because it is neither mathematically nor elegantly
univocally defined. So just like in any other theory, we should
look for partitions that contain the physics we need to reveal following a very simple principle: “as simple as it can be, but not
simpler”. Just like many models coexist for describing other problems in physics, I also agree that what we probably need is not
less models, but a good hierarchy of them, so that we know
when a given model is valid or we should go for a higher rung
and more complex-description of chemical reality. I agree with
C. Gatti that equivalencies among partitions as the one carried
out in Ref. [82], and the limits of each method, are absolutely
needed in this respect.
Henry Chermette
The ambiguity of the bond concept stems from the fact that only
nuclei and electrons are well-defined objects whereas the
atoms in molecules are not, since the electrons in molecules
are not stuck to nuclei. Therefore, although some definitions
look more reasonable than others, the coexistence of several
definitions is unavoidable. The lack of precision in the definitions is therefore a by-product.
Émilie-Laure Zins
In contrast with most of the previous answers, I think that the
lack of precision of many key chemical concepts is not the main
reason for the existence of many partition schemes. The phenomena that theoretical chemists seek to describe nowadays
are highly complex, which fully justifies the development of
suitable and tunable tools adapted to each type of problem. It
seems to me that this phase of coexistence of many partition
schemes could be temporary: a convergence toward a single or
a small number of partition schemes could take place in the
coming years via fundamental theorems and equations such as the Hellmann–Feynman theorem, the Bethe–Salpeter equation, or quantum field theory. This "unification" could be an objective in itself for the
community, leading to a simplification of concepts and descriptions. Shant Shahbazian proposed another question: “Is the
coexistence of many partitioning schemes the reason for lack
of precision in the definition of many chemical concepts?” I
think that the development of a “universal” partition scheme
could lead to a simplification of chemical concepts and an easier dialogue between theoretical and experimental chemists.
Laurent Joubert and Vincent Tognetti
We think that the reciprocal question is also interesting: do we
need an energy decomposition to define chemical concepts?
For instance, conceptual DFT has often provided firm physical
ground to empirical concepts using very simple equations:
Pearson’s molecular hardness and Parr’s electrophilicity index
can be derived in two lines, in contrast with many EDAs that
require much more maths…
Paul W. Ayers
Yes and no. Insofar as people cannot agree on the precise definition of induction, or dispersion, or electron transfer, or polarization, there can never be a unique EDA. (As a pernicious
example, electron transfer energy can always be viewed as an
extreme form of polarization, where the electron density of a
fragment becomes extremely delocalized. Similarly, the mere
idea of a local or regional kinetic energy is mathematically ill-defined.) So yes. However, the existence of many partitioning
methods leads to ambiguity in chemical concepts, since different EDA methods give qualitatively different explanations of
chemical phenomena in many cases. So no—it is not fair to
“blame”, the coexistence of many partition systems on the
ambiguity in chemical concepts any more than it is fair to
blame the ambiguity of chemical concepts on the existence of
myriad partitioning schemes.
Farnaz Heidar-Zadeh
Given that chemical concepts are nonempirical, any attempt
toward quantifying them is doomed to be nonunique. Partitioning schemes are no exception! Considering the fact that
partitioning schemes cannot be defined accurately, putting too
much effort into the precision of various definitions is futile. Each
scheme starts from a different set of assumptions (some of
which may be unknown or not carefully laid out at the beginning) and consequently gives a different set of results. The best
one can hope for is having a rigorous mathematical definition of
partitioning schemes (and other concepts) which confirms chemical trends and aids us in rationalizing the behavior of molecules
and materials. So, even though it is interesting to quantitatively
compare various schemes (i.e., the so-called precision of various
schemes), only their qualitative comparison can truly testify to
their value (i.e., assessing how various schemes comply with or
improve chemical reasoning is the closest thing to an accuracy
check we can dream of). As G. Frenking clearly explained, these
schemes are tools for making sense of numerical results of quantum chemistry calculations in terms of familiar classical chemical
concepts and depending on the problem at hand, some of these
tools will be more/less useful than the others.
Juan Andrés
Many chemical concepts (aromaticity, chemical bonds, oxidation
states, and atomic charges) employed in chemistry today can be
traced to the early stages of the field, notwithstanding the significant developments, refinements, and extensions made during
the last years. Chemical nature aside, the terminology introduced by these chemical concepts is now ubiquitous and has had a pronounced effect on the way that chemistry is practiced and taught. These concepts remain invaluable in providing frameworks that allow us to rationalize trends in chemical structure and reactivity, as well as to organize current knowledge, accommodate new observations, and, finally, to serve as pedagogic instruments.
These concepts described above were established without
fully understanding the physical principles underlying the
interactions between electrons and atomic nuclei. As Robert
Heinlein wrote, “The difference between science and the
fuzzy subjects is that science requires reasoning while those
other subjects merely require scholarship”. The work of Jansen and Wedig[86] serves as an example of how chemical concepts (in this case, atomic charge and oxidation state) can
achieve progressively improved operational definition. Unfortunately, their concepts are not observables, that is, there is
no quantum mechanical operator that would work on the
wave function to give the corresponding value as an observable. They can be criticized for being unphysical and nonobservable; although highly useful, they are often not backed by solid theory, in particular, not by quantum mechanics. This is a very common situation in chemistry and it is an inherent part of various quantum tools utilizing chemical concepts in molecules and crystals. One way of overcoming this
conundrum was proposed recently by Ayers et al.[87] in which
an axiomatic approach to chemical concepts is introduced.
Therefore, the lack of precision in the definition of many chemical concepts is indeed one of the reasons for the coexistence and proliferation of many partition schemes. Moreover, this has immediate implications for the development of the next generation of physically motivated procedures: to capture the physics of the chemical concepts properly, a new strategy is needed that takes these insights fully into account.
Yirong Mo
The literal definitions and conceptual understanding of various
concepts are often consistent and shared by all chemists. But in
the process of realizing these concepts, one needs to use
approximations and set up rules within one’s field (e.g., either
molecular orbital theory or valence bond theory or different
MO methods). This can be understood as getting more precise
definitions. As a consequence, different approximations lead to
different partition schemes with different (sometimes conflicting) outcomes. For instance, we often regard charge transfer
as a process occurring from one monomer to another monomer, or from one fragment to another fragment of one molecule, or approximately from one fragmental HOMO to another
fragmental LUMO. But, in computations, we need to define the
fragmental orbitals. Here the disparity comes up because we
usually get only canonical MOs which are extended over the
whole system. Approximations are introduced to get fragmental
orbitals and different approaches lead to different solutions and
eventually different energy terms.
Eduard Matito
Yes, but in my opinion, it is actually one particular chemical
concept that is most responsible for the proliferation of
energy partitions. Since an atom in a molecule (AIM) is a fuzzy
chemical concept, it is only natural that many partition
schemes coexist. This is most evident in the case of real-space
energy partitions: we can have as many such energy partitions
as definitions of AIMs. In this sense, one way to reduce the
number of energy partitions would be to focus on the reliability of the AIM definition. For instance, we have found that
some AIMs do not provide correct predictions when they are
used to analyze certain properties (see my answer to question 4).
Eloy Ramos-Cordoba
Since traditional chemical concepts are not observables, they
are not well defined in the context of quantum mechanics.
As a consequence, a given chemical concept can, in principle, be described by many different mathematically rigorous
partition schemes. The number of reasonable schemes can
be reduced by imposing a series of physical constraints on
the partition formalism. For instance, in variational EDA, one
can impose that all the intermediate wavefunctions must be
antisymmetrized.
W. H. Eugen Schwarz
Many chemical concepts should not only comply with the general laws of basic physics, but at best also match the set of specific chemical materials of interest, the properties, and reactions
of main interest, and the ways different chemists are trained to
understand and intuitively guess and predict them. The chemical concepts should be designed in a clear, unique, rational,
consistent, and purposive manner. Accordingly, there will and
shall emerge related, slightly different concepts, without lack of
conceptual quality. A simple example is the partitioning of
interatomic distances, modeled by sums of various types of
“atomic radii” (van der Waals radii opposite and orthogonal to a
bond, ionic radii for different formal charges and coordination
numbers, covalent radii for different formal bond orders, hydrogen bond radii, etc.). The richness of chemistry requires a richness of concepts and schemes.
Namely, Physics is the science of matter in simple, prototypical, ideal sectors of reality, to be described in the most general
and most accurate way. Chemistry is the science of matter in
complex, special, realistic, and “human” cases, to be described
in a useful, simple and appropriately and reasonably reliable
way. (“Human” conditions here typically mean matter around
temperatures of 300 K × 10^±2 and pressures of 1 atm × 10^±4 during timespans of 1 h × 10^±6.) Physics and chemistry are two
hard sciences of different kind.
In principle, physics is more distinct and chemistry is fuzzier.
That is, “lack of precision” may exist in cases where one is still
in the phase of development of the chemical concepts, but it
is misleading and biased to discredit the principle of fuzziness as the lack of some improper requirement in mature scientific
cases. Also note that the clear physical concept of, for example, an electronic particle (in a molecule) in the low-energy
approximation fades away in the high energy regime (near
heavy nuclei), where it must be replaced by the electron–
positron matter field of noncoinciding charge, mass and spin
distributions. Or the clear physical concept of a sharp spatial
boundary (sometimes postulated between “atoms” in a molecule) in the low-velocity approximation fades away in the realistic relativistic regime.
Therefore, many chemical concepts cannot be that general
and unique as most of the basic physical concepts. They have
to be appropriately adjusted to the ranges of materials, to the
typical conditions and to the useful purposes, which the
researchers want them to apply to. Experienced researchers
with some special interest in some field of chosen materials will
search for useful parameters as a quantification of their experience-guided fuzzy ideas. Sets of related concepts of different
scholars may be analyzed with the help of statistical techniques, see below.
Finally, we must distinguish between descriptive, analytic, and
explanative concepts. Description is the qualitative or quantitative specification of the interactions of some specimen with its
surrounding, as given by nature and its laws. (The specimen
may be a molecule or nanoparticle or droplet or crystallite or
surface layer etc. of some material or compound. The interaction properties may be described by mass, charge, polarizability, color, chirality, various specific reactivity parameters, etc.)
Respective probability distributions and source functions of
these interaction properties can be analyzed in spatial detail, as
for instance in the QTAIM. As long as no restraints of the
description are imposed, such as monopole approximations, or
same average values for whole sets of specimens or homologous series, the descriptive parameters should be comparatively
unique.
There are then two further steps toward a deeper understanding of WHY things are WHAT they are. First, there is the question of the
internal structure of the data, and relations between them. That
is so to say an “autonomous” intrachemical approach. One
approach is by additive increment systems, approximating the
property parameters of compounds by sums over atomic,
diatomic, and possibly multi-atomic (three-center or ring) contributions. Examples are effective atomic radii, effective atomic
charges, ionic conductivities, diamagnetic (Pascal) increments,
diatomic bond energy increments, spectroscopic ligand field
parameters, stereospecific ligand parameters, and so
on. Depending on the chosen set of compounds, the chosen
types of properties, and the perspectives of the researchers,
related alternative partition schemes may emerge, where the
increments may have somewhat different meanings. As long as
observable properties Pj of specimens j can reasonably well be
approximated by different sets of increments bk and dl as in
eq. (1), it is fine.
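Eq. (1) itself is not reproduced here; a plausible schematic form of such an additive increment expansion, with hypothetical multiplicities \(n_{jk}\) and \(m_{jl}\) counting how often increment k or l occurs in specimen j, would be
\[
P_j \;\approx\; \sum_k n_{jk}\, b_k \;\approx\; \sum_l m_{jl}\, d_l .
\]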
We remember the interchange theorem of double-perturbation
theory, where a response property for two perturbations can be
represented by two different types of expressions, giving different
perspectives of insight. The success in the development of
research and teaching of chemistry should decide which schemes
survive. Suggestions as by the Bader school that chemists should
give up those useful analysis tools that do not fit into their special
frame of the QTAIM approach do not appear fruitful.
Second, one shall search for connections between specific sectors of chemical experiences and the general framework of physical theories. That is the reductive interphysicochemical approach.
In order to construct intuitively convincing and theoretically sound
arguments, one usually needs a two-step analysis. To understand,
why a chemical compound system forms and behaves in this
manner, one also needs to understand why the chosen fragments
or reference states relax or respond in the manner they do. To
this end, measuring or calculating the stationary molecule of interest or the overall changes upon a chemical reaction is not
enough. One must choose both appropriate references and appropriate intermediates, which both are not uniquely determined by
nature, for instance when analyzing covalent or dative or ionic or
various secondary interactions. The real world is a quantum world
(with the emergence of classical features due to quantum
decoherence), yet it is admissible to apply classical physical concepts in real chemistry, as more or less excellent approximations.
Similarly, one may, at least for “separated” fragments and even for
overlapping intermediates, discuss what would happen in a classical world with states that are “nonexistent” in reality, since they
violate the Pauli principle, and talk about Pauli forces.
In summary, the main reason for the coexistence of related
chemical concepts is that chemists may take different viewpoints
and ask nonidentical questions, looking into real space and/or
onto the quantum field in space. The development of computational methods may enable the birth and survival of more options,
while unreliable approaches of extreme computational simplicity
(the arbitrary AO basis in the Mulliken population analyses) may
disappear. Modern scientific chemistry was given birth in the
1780s (by Lavoisier, his wife Marie-Anne Pierrette Paulze, and
their Parisian colleagues) with the invention of the chemical elements as the conserved entities in chemical reactions, and the
representation of macroscopic materials by atoms in molecules
(by Dalton in the 1800s). The very basis of scientific chemistry is
the fuzzy concept of microscopic elemental atoms in macroscopic
stuffs; therefore the typically chemical concepts are fuzzy. Introducing physical theory to explain chemistry in an intuitive manner
thereby supporting intuitive predictions, which are the basis of
fruitful chemical science and technology for the benefit of society, requires the smart choice of physical reference states and
more or less physical intermediate states for discussion of the
specific physical situation in the case at hand.
Alston Misquitta and Krzysztof Szalewicz
As we have shown above, the physical components of SAPT are uniquely defined and can be computed to potentially arbitrary accuracy. These terms have a precise physical interpretation. The electrostatic energy is the Coulomb interaction of the unperturbed charge distributions. The induction energy of second order in V results from the response of monomer A (B) to the field of the unperturbed charge distribution of monomer B (A). The dispersion energy results from correlations of electron positions between monomers A and B. All these components are precisely defined at all R, not only in the asymptotic region. Finally, the exchange-repulsion component results from exchange tunneling of electrons between the interacting systems. Thus, in the case of SAPT, there is no lack of precision in defining these chemical concepts. Consequently, SAPT can be used as the standard model for EDAs in the intermolecular interaction sector, and EDAs inconsistent with SAPT should be discarded.
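For concreteness, the first-order electrostatic energy admits the schematic closed form (atomic units; ρ_X^tot denotes the total, nuclear plus electronic, unperturbed charge distribution of monomer X):

E_{\mathrm{elst}}^{(1)} = \int\!\!\int \frac{\rho_A^{\mathrm{tot}}(\mathbf{r}_1)\,\rho_B^{\mathrm{tot}}(\mathbf{r}_2)}{|\mathbf{r}_1 - \mathbf{r}_2|}\, d\mathbf{r}_1\, d\mathbf{r}_2 .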
For induction interactions, one should always consider the sum of the induction and exchange-induction corrections, sometimes denoted E_indx. The reason is that at R near the van der Waals minimum and smaller, the overlap contributions to the induction energies become large, leading to discrepancies between the asymptotic expansion of the induction energy and the SAPT values[88] that are larger in magnitude than typical damping effects and of opposite sign. This is due to the fact that, for systems in which one of the monomers has more than two electrons, the interacting system is submerged in the continuum of Pauli-forbidden states unless such states are projected out by enforcing antisymmetry,[64] which is not done in RS. If the exchange-induction term is added, the contributions coming from the violation of symmetry are canceled out to a large extent. When higher-order induction effects are important, as they are in strongly bound systems with large polarization and charge delocalization, one should include E_indx computed to third order in V, as well as the δ_int^HF term. The latter is defined as the difference between the Hartree–Fock (HF) supermolecular interaction energy E_int^HF and the sum of the first-order, induction, and exchange-induction components computed with neglect of intramonomer correlation effects.
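In the usual SAPT notation (superscripts denoting orders in V and in the intramonomer correlation, "resp" indicating response corrections), this verbal definition can be restated schematically as

\delta^{\mathrm{HF}}_{\mathrm{int}} = E^{\mathrm{HF}}_{\mathrm{int}} - E^{(10)}_{\mathrm{elst}} - E^{(10)}_{\mathrm{exch}} - E^{(20)}_{\mathrm{ind,resp}} - E^{(20)}_{\mathrm{exch\text{-}ind,resp}} .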
Question 2: Does the Adoption of a Given Partition Scheme Imply a Set of More Precise Definitions of the Underlying Chemical Concepts?
Ramon Carbó-Dorca
It looks like it does not. The impression is that the great number of partition schemes worsens the definitions of the underlying chemical concepts, adding a bit more fuzziness to their already fuzzy character.
Shant Shahbazian
In my opinion, there is no such thing as a "more" or "less" precise quantitative definition of a chemical concept: all mathematically rigorous definitions of a concept are equally precise, and definitions lacking mathematical rigor do not deserve to be categorized as quantitative. Assuming that the intended partitioning schemes are rigorously constructed, the concept of interest is also precisely defined within the "context" of each given partitioning scheme. The problematic situation appears when one tries to compare the nonequivalent definitions not within, but between, various partitioning schemes. As I stressed also in my answer to question 1, this is the result of "over-quantification". At a personal level, one may simply dismiss all available partitioning schemes but a single one and adopt the preferred scheme, trying to avoid the dilemma; at the level of the community, however, the problem remains (and the relevant literature hardly supports that even at a personal level this is the preferred strategy). Since part of what we mean by a precise definition is the "consensus" of a scientific "community", not of a single person, to use the "preferred" partitioning scheme, without such a consensus the above-mentioned formal mathematical viewpoint on precise definitions is at best handicapped. These are the motivations to address questions 8 and 9.
István Mayer
Any partition scheme is based on a selected well-defined definition of the atoms within the molecule.
Martin Rahm
In one way of looking at it, yes. Chemical concepts are what we make of them. Different chemists can and will have different opinions on the best definition of covalence, electronegativity, and so on. Differences in such definitions are also a consequence of the development of the concepts over time. Different partition schemes will naturally be a reflection of this. As such, different EDA schemes can provide precise definitions within their respective frameworks. Whether a particular incarnation is chemically useful is another question.
Frank Weinhold
Not in any rational process of advancing science. Any EDA component “label” is a language construct, not a “concept” per
se. The correlations (if any) between EDA component labels
and more broadly understood chemical concepts should be
demonstrated, not assumed.
Angel Martín Pendás
If an EDA is seen as a way to compress (or to compact) the
complex energy information content of a wavefunction (or set
of wavefunctions if references are needed), different EDAs
should provide different readings of the same physical
(or mathematical) objects. In this sense, all EDAs should be
compatible among themselves (and that is why I advocate partitions which can be applied to any or at least to a large class
of wavefunctions). By understanding how different methods read the same function and provide different answers, the limits and windows of applicability of the underlying chemical concepts can probably be sharpened.
David L. Cooper
There are indeed senses in which the adoption of a particular
partitioning scheme leads to more precise definitions of particular EDA "labels" within that scheme, but such "labels" are likely to have somewhat different meanings in equally valid alternative approaches. I agree with Frank Weinhold that supposed correlations with underlying chemical concepts need to be demonstrated, without any such links simply being assumed. I am also struck by a point that is reiterated in a recent perspective article[89]: the use of models (including EDA schemes) can risk blurring the distinction between what is really mathematical modeling and what is, at least in some sense, an underlying chemical/physical "reality" or a "meaningful set of concepts". [That particular article classifies exchange, Pauli repulsion, and orbital interactions as being part of the mathematical model, and it also addresses the extent to which there is really any proper distinction between charge transfer and polarization. Then again, picking (say) exchange, it can be important to remember that not everyone agrees as to what such entities really signify, even in a qualitative sense.[90]]
Carlo Gatti
Not necessarily, and surely not in the present state of affairs. Chemical concepts are in general very much intertwined in the energetic terms of the various EDAs based on orbital space decompositions, and they may be so to a different extent depending on the given scheme adopted. A clear and enlightening analysis of this rather convoluted problem is presented in Ref. [82]. I am personally in favor of retaining only those energy partitioning schemes where each energy component has a clearly defined physical basis, and then of observing whether and which of these components may be roughly related to chemical concepts, if any. This is the typical situation one faces with position space EDAs, like IQA. To give an example, charge transfer, which is a typical chemical concept, may be clearly defined and easily evaluated with all these approaches. However, it is not possible to isolate its energetic impact in standard real space EDA schemes. One has to make recourse to the theory of resonance structures in real space to estimate such an energy component.[91]
Paul Popelier
I am inclined to answer yes. As a fan of IQA (a sentiment to be updated when something better comes along), I am happy (hopefully not naively) with the way this EDA defines the following chemical concepts: covalency (via exchange), ionicity and polarity (via electrostatics), dispersion (via electron correlation), and steric effects[92,93] (via the intra-atomic or self-energy). Because IQA is also able to provide intra-atomic "dispersion", it has the potential to define a new chemical concept, one that focuses on the stability of weakly bound van der Waals complexes, but from an (intra)atomic point of view.
Pedro Salvador
If we take an atom-in-the-molecule definition to be a partition scheme, I think the answer is yes. Not only do the numerical results differ from one AIM to another (sometimes quite dramatically), but some particular chemical concepts may only be accessed by making use of a given AIM. This is, for instance, the case of the so-called "overlap population", which can only be accounted for with "fuzzy" atomic domains. This, however, compromises that particular concept, which probably should be considered deprecated. In a way, one should stick to chemical concepts that can be recovered, at least quantitatively, with any reasonable (see question 7) partition scheme.
Jerzy Cioslowski
As put succinctly by István Mayer in his answer above, “any partition scheme is based on a selected well-defined definition of
the atoms within the molecule.” However, in the case of energy
partitioning schemes formulated in terms of (imaginary) physical processes, specification of the order of their application is
equally important (see my response to question 1). Thus, the
implication is always one way, namely definitions of chemical/
physical concepts => energy partitioning.
Gernot Frenking
I agree with Martin Rahm that “Chemical concepts are what we
make of them.” A useful partitioning scheme should indeed
lead to a more precise definition of the underlying chemical
concept. Five conditions are to be fulfilled by a reasonable partitioning scheme: (1) it should be based on accurate quantum
chemical calculations; (2) it should be mathematically unambiguously defined; (3) the results should be largely independent of
the level of theory used; (4) the different terms should lead to a
plausible interpretation; (5) it should be useful for chemical
problems. The adoption of a particular partition scheme comes
from its usefulness. The agreement with chemical concepts is a
fuzzy condition because chemical concepts are fuzzy. I think that this fuzziness arises from the necessity of bringing the pandemonium of chemical facts into an ordering scheme in terms of rules and models that are accessible to the human mind. I was puzzled by the statement in Ref. [82] that the use of bonding models "sometimes leads to a blurring of the distinction between mathematical modeling and physical reality." In the quantum world, the physical reality of an electron is not entirely captured by its charge distribution ρ, which represents only a projection onto a space of lower information content; a complete description is provided only by its wave function Ψ. The wave function Ψ contains more information about the behavior of the electron than ρ does. In chemistry, this comes to the fore, for example, in the outcome of pericyclic reactions, or in any spectroscopic investigation, which can only be explained when the symmetry and sign pattern of Ψ are considered.
Émilie-Laure Zins
Most of the current chemical concepts are based on the observations, interpretations, and intuitions of experimental chemists. One of the roles of theoretical chemistry is to explain and develop chemical concepts based on the fundamental equations of quantum physics and to link these concepts to those of the experimental chemists. Thus, the use of any partition scheme to explain any empirical chemical concept should lead to more precise definitions of the underlying chemical concepts from the perspective of our community, provided that the energy partition is based on variables that have physical significance. However, experimental chemists will not necessarily be immediately convinced by the increase in precision that we can bring to the empirical concepts they work with on a daily basis.
Miquel Solà
It is an advantage, not a problem, to have different partition schemes, as long as they prove to be useful, are rooted in quantum mechanics, are mathematically unambiguous, provide physically meaningful energy terms, give insight, and possess predictive power. Let me quote Dewar, who said: "the only criterion of a model is usefulness, not its truth".[94]
Paul W. Ayers
In a narrow sense, once one chooses a partitioning method
(either for the atom-in-molecule or the energy-into-fragments)
then those choices can be profitably used as a “model chemistry” to elucidate chemical phenomena, to observe trends, and
to draw inferences. In this sense, all computed quantities within
the selected partitioning are “precise” (in the sense of being
exactly defined) within the context. However, a different partitioning method might reveal different trends and different
insights that are not less precise, but merely more or less useful
(as a matter of preference and opinion).
Farnaz Heidar-Zadeh
A well-defined scheme is based on a rigorous set of physical assumptions; consequently, it consistently prescribes the definition of the underlying chemical concepts. So, I would use the term "consistent" instead of "precise." This gives an elegant and unambiguous framework for further developing concepts. Even if a concept was proposed on heuristic grounds but proved to be useful, establishing a framework within which it is mathematically justified is essential.
Juan Andrés
As Solà remarked recently[95]: "My usual answer is that the most fruitful concepts in chemistry share the same lack of strict definition.[96]" In addition, there is no unique way to compute quantities related to such intuitive chemical concepts and, therefore, no unique partition scheme. As Martín Pendás et al. write,[97] "A chemical bond has an energetic strength (its bond energy) that is somehow connected to a particular electron count (its bond order). Interestingly, neither bond energies nor bond orders are (Dirac) observables. The former vanish into thin air once we pass from diatomics to polyatomics, whereas the latter too often rely on the orbital approximation. Notwithstanding, chemists feel comfortable with such an edifice otherwise built on shifting sands". Therefore, the different partition schemes need to be rooted in quantum mechanics and to be mathematically unambiguous, in order to provide a physical basis for the underlying chemical concepts.
Yirong Mo
Because individual energy terms are not observables, there are no direct experimental data to endorse any partition scheme, and it is hard to reach a consensus on adopting any particular one. Users often adopt certain partition schemes based on accessibility and their own familiarity. Nevertheless, there are indirect experimental data that can justify partition schemes, though whether to believe them remains a matter of individual taste.
Eduard Matito
As has been repeatedly said, a model should be judged by its usefulness. In the context of this question, usefulness refers to the faithfulness with which it represents the underlying chemical concept. Since I understand the question as "can the energy partition go beyond the definition of some concepts", I am inclined to say that it pretty much depends on the fuzziness of the concept. Fuzzy concepts can be surpassed (and replaced!) by the model; temperature is a nice example of this kind, as Shant indicated earlier. Aromaticity could be a more current example of a fuzzy concept that has been influenced by computational models and tools. However, it is difficult to imagine that energy partitions can go beyond long-standing and more consistent concepts. In particular, concepts that transcend computational chemistry (and even chemistry) are difficult to change.
Eloy Ramos-Cordoba
By selecting a particular partition scheme, the ambiguities in the underlying chemical concepts disappear within the framework of that particular decomposition. As Prof. Mayer stated above, energy partitioning schemes are mathematically well-defined, and so are the energetic components that one can extract from them.
W. H. Eugen Schwarz
No: in principle, experimental and theoretical inquiries are autonomous, with theoretically defined and empirically originated concepts to be connected as well as possible. Empirically motivated concepts need improvement if they are shown theoretically to be internally inconsistent, while theoretical constructs are senseless if unrelated to empirical concepts. The correspondence of a particular theoretical partitioning scheme to particular observation-coupled chemical concepts attaches quantum chemical meaning to the latter. This may in some cases help to specify the empirical concepts more precisely. Since most chemical concepts are kept fuzzy in order to be broadly applicable, this correspondence will remain somewhat fuzzy.
Alston Misquitta and Krzysztof Szalewicz
As stated in the answer to Q1, the SAPT partition scheme does provide a precise definition of chemical concepts such as the electrostatic, induction, dispersion, and exchange energies. While EDAs that are in significant disagreement with SAPT should not be used, the question arises as to what the threshold for such a cutoff should be. While it is difficult to set any strict limits, perhaps agreement to within a few percent should be the goal. The agreement is best for Morokuma-type methods, which are based on iterations of the Hartree–Fock equations starting from monomer orbitals. See Ref. [98] for recent comparisons. On the other hand, methods decomposing supermolecular interaction energies using localized molecular orbitals (LMOs) can only agree with SAPT in an approximate way.
Another criterion is the asymptotic behavior. The exchange components should decay purely exponentially, that is, they should not involve any 1/R^n terms. The electrostatic, induction, and dispersion components should decay as appropriate powers of 1/R. In methods based on LMOs, these criteria are difficult to satisfy at very large intermolecular separations since, by the very nature of LMOs, the dispersion terms always have a small component originating from the intramonomer correlation contribution to the electrostatic energies, so that for polar systems at large R the 1/R^3 decay of the latter energies will dominate.
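Schematically, the expected long-range behavior for neutral polar monomers can be summarized as (leading terms only):

E_{\mathrm{elst}} \propto R^{-3}\ (\text{dipole--dipole term}), \qquad E_{\mathrm{ind}},\, E_{\mathrm{disp}} \propto -R^{-6}, \qquad E_{\mathrm{exch}} \propto e^{-\alpha R}.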
The most advanced electronic structure theories are also the most complex and, therefore, the most difficult to interpret physically. Thus, one sometimes chooses to use manifestly inaccurate theories, like Hückel theory, because of the understanding they yield. Here is where SAPT shines, as it constructs the interaction energy as a sum of physical components. Importantly, there is nothing ambiguous about these definitions, although there are some issues one needs to be aware of, as described in the next paragraph.
There is no ambiguity in the asymptotic region, where orbital overlap effects can be neglected, and, what is very important for physical interpretation, in this region the multipole expansion can be used to cast the interaction energy components (electrostatic, induction, and dispersion) in terms of molecular properties such as the electrostatic multipoles and the static and frequency-dependent polarizabilities. SAPT interaction energies agree with those from the multipole expansion to arbitrary accuracy provided that R is large enough. Thus, SAPT is seamlessly connected to the multipole expansion. Since the multipole expansion of the interaction energy is expressed in terms of multipole moments and static and frequency-dependent polarizabilities of the monomers, this adds another level of physical insight to SAPT interaction energies. Furthermore, there is a smooth transition between the overlap region and the asymptotic region. The monomer densities used to calculate the electrostatic energies can be replaced by a set of multipole moments at large R. Similarly, the density–density response functions can be replaced by polarizabilities. The multipole moments as well as the static and frequency-dependent polarizabilities are measurable, thus providing a strong link of the SAPT components to experiment. This is, at least in the region of small density overlap, an unambiguous link. This feature alone separates SAPT from all EDA methods since, to the best of our knowledge, no such relation to experiment is possible in those methods.
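As an example of this link, the leading dispersion coefficient is given exactly by the Casimir–Polder integral over the monomers' isotropic dipole polarizabilities at imaginary frequency (atomic units):

C_6^{AB} = \frac{3}{\pi} \int_0^{\infty} \alpha_A(i\omega)\, \alpha_B(i\omega)\, d\omega .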
While all acceptable variants of SAPT must have the same asymptotic behavior as SRS, and therefore the components are asymptotically unique, one may question the uniqueness in the overlap region. Fortunately, all acceptable SAPT variants give identical first-order energies and second-order induction and dispersion energies. The nonuniqueness appears only in the second-order exchange corrections. Of those, the exchange-dispersion energies are relatively small, so the potential differences can usually be safely ignored. The differences in the exchange-induction corrections are eliminated by the δ_int^HF term. Also, some variants that perform best in all orders are equivalent or very close to SRS in low order. In conclusion, whereas there is a small nonuniqueness in the definitions of the physical contributions resulting from the flavors of SAPT, the differences between the best theories can be ignored.
Question 3: How Can One Use the Results of a
Partition Scheme to Improve the Clarity of
Definitions of Concepts?
Ramon Carbó-Dorca
If such a publication phenomenon can be observed at all, then time, which supplies the research panorama with various fashions and hypes, will determine how partition schemes emerge from previous techniques and evolve into new schemes, driven by the increasing number of researchers in quantum chemistry and their publication needs, perhaps leaving aside the real research task of understanding molecular behavior within a general framework valid in any circumstance.
Shant Shahbazian
In my opinion, there is no direct relationship between the results of a partitioning scheme and the clarity of chemical concepts. As I stressed in my answer to question 1, to have a well-defined concept, a comprehensive theory for the concept of interest must be developed. Let me give an example. While humans have always had an intuitive qualitative understanding of temperature, since the time of Galileo people have tried to quantify this intuition by constructing various thermometers. In one sense, in this period, temperature was an index of hotness/coldness, but temperature was only conceived as a physical "observable" when thermodynamics was formulated and the absolute temperature was introduced independently of thermometers, based on its relationship to internal energy and entropy. In other words, thermodynamics is an organized web of connections between various thermal concepts, and the position of each concept, for example temperature, in the web makes it a well-defined and clarified concept.[99] What is currently lacking in theoretical chemistry is a similar comprehensive theory (or theories) that not only introduces each chemical concept quantitatively but also builds a web of relationships between the various concepts. The index-based view in computational chemistry, which focuses only on the quantitative definition of a single concept, lacks such a capability and, in my opinion, current partitioning schemes are no exception.
Martin Rahm
It would depend on the concept in question. For example, I have, together with Roald Hoffmann, redefined the chemical concept of electronegativity within the framework of the "Experimental Quantum Chemistry" partitioning.[54] Together with Tao Zeng, this precise definition allowed us to revise the scale of atomic electronegativity in a way that compares well with previous scales, such as those of Pauling, Mulliken, and Allen.[56] Other chemical concepts, such as "covalence" and "ionicity", are less straightforward. One way toward clarifying such concepts is to use EDA descriptors to create maps of chemical interactions. In well-known materials under ambient conditions we mostly know what to expect: NaCl should come out as ionic, a C–C bond had better have some covalency, and the helium dimer should be different from the previous two.[55] A partitioning scheme that agrees with conventional wisdom, while providing new insight, has a better chance of improving the clarity of definitions of chemical concepts.
Frank Weinhold
Reference [100] serves as an example of how a concept (in this
case, “hydrogen bond”) can achieve progressively improved
operational definition. Can any component of any current EDA
partitioning scheme meet the operational criteria of mutually
consistent correlations with experimental properties, as illustrated in this work?
Roberto A. Boto
On the one hand, a partition scheme is built from some theoretical framework, which in turn is built from a set of, in principle, well-defined concepts. Therefore, the quality of the results should be determined by the theory behind it. It is hard to imagine a feedback process. On the other hand, energy partitions could be constrained to obey certain conditions, such as producing energy contributions on the chemical scale, or equalizing energy terms obtained by different partition schemes. These conditions would not improve or worsen the definition of concepts, but they would add some uniformity to the different definitions of the same chemical concepts.
Angel Martín Pendás
Uhm, well, some of the most cherished chemical concepts do
implicitly rely on some kind of partitioning. Covalency, for
instance, is one of them. Whatever source is used to find an
operational definition (including IUPAC’s gold book) of what we
mean by a covalent bond will include the word “sharing.” And
sharing implies at least two objects which share, so a partition.
Typically those objects are understood to be atoms, so in some sense partitioning schemes may help develop a concept in a bootstrapping process, as in the temperature example commented on by Dr. Shahbazian.
Julien Pilmé
In my opinion, results obtained from only one partition scheme
are probably not sufficient to really clarify the definition or the
meaning of simple concepts commonly used in chemistry
because most of these concepts go beyond any partition
scheme. Perhaps, if the targeted concept has a typical "signature" that can be identified through several partition schemes, the comparison of results arising from those schemes would be useful for improving the definition of the concept. I think, for example, that the case of the covalent bond, already mentioned by Angel, falls into this category.
Carlo Gatti
I see a risk in this process, as chemical concepts evolve and generally become broader and more general with time. One good example is aromaticity. Though hardly definable, the concept of aromaticity has now expanded considerably: it is no longer limited to π-orbital organic chemistry but has proved useful in describing bonding and energetic stabilization in many inorganic molecular compounds and also in the solid state, not to mention recently discussed organic molecules in which π- and σ-aromatic chains seem to coexist. Therefore, in my view, a physically grounded partition scheme should not be aimed at improving the clarity of definition of a concept, which may well evolve, but should be able to include and, to some extent, predict the future evolution of that concept. It is only through this process that the partition scheme will help to improve the clarity of a concept. Non-nuclear attractors and their properties were defined while studying Li clusters, as a straightforward extension of Bader's space and virial partitioning. But they have since been recovered in many other chemical situations, both at ambient and at high pressure, and they feature in the broad concept of interstitial or "isolated" electrons.
Paul Popelier
An example of how to use the results of a partition scheme is that of the EDA called IQA being combined with the newly proposed Relative Energy Gradient (REG) method.[101] The REG method is able to handle, automatically and exhaustively, the typically hundreds or even thousands of individual energy contributions that IQA generates. REG ranks atoms according to the degree to which they behave like the total system they are part of, in terms of energy changes. This minimal method can handle competing energy contributions, which may appear contradictory and have thereby fuelled ongoing debates. For example, in our very recent biphenyl case study,[102] we used REG-IQA to explain its planar rotation barrier. The central torsion angle in biphenyl prefers to be 45°, and at 0° biphenyl's total energy profile reaches a local maximum. REG shows that the IQA intra-atomic energies of the ortho-hydrogens dominate the barrier, which is compatible with the textbook explanation of a steric clash. However, at the same time, the exchange energy between these two ortho-hydrogens becomes most stabilizing at 0°, indicative of the formation of a covalent bond. REG is not confused by these two opposing effects and concludes that, while they largely cancel out, it is the energy behavior of the ortho-carbons that causes the rotation barrier. This is an example of the Dutch expression "als twee honden vechten om een been loopt de derde er mee heen" (roughly: when two dogs fight over a bone, a third one runs off with it; note that "been" here means "bone").
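As a minimal illustration of the kind of ranking REG performs (a sketch only, not the published implementation; the term names and all numbers are invented), each IQA term is regressed against the total energy along the control coordinate, and terms are ranked by the magnitude of the fitted slope:

import numpy as np

# Invented data: total energy and a few IQA terms sampled along
# a control coordinate (e.g., the central torsion angle).
E_total = np.array([0.0, 1.2, 3.5, 6.1, 7.0])
iqa_terms = {
    "E_intra(H_ortho)":       np.array([0.1, 0.9, 2.8, 4.9, 5.6]),
    "V_X(H_ortho, H_ortho')": np.array([0.0, -0.4, -1.5, -2.9, -3.4]),
    "E_intra(C_ortho)":       np.array([-0.1, 0.6, 2.1, 3.8, 4.5]),
}

# REG-style value: slope of each term regressed against the total energy.
reg = {name: np.polyfit(E_total, term, 1)[0] for name, term in iqa_terms.items()}
for name, slope in sorted(reg.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:26s} REG = {slope:+.2f}")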
Gernot Frenking
Chemical concepts are fuzzy objects, which may be defined in different ways. Carlo Gatti already mentioned aromaticity, which can be defined by energetic, geometric, magnetic, or other criteria such as chemical reactivity. I refer to the five conditions given in my answer to question 2, which should be fulfilled by a partitioning scheme. Beyond this, I see no further clarification of the definition of a concept.
Julia Contreras
For a partition scheme to improve clarity, it should be able to do just what any other theory is expected to do: describe what we know and predict what we do not. Both Hilbert space and real space energy decompositions have focused on describing what we know, plaguing the literature with different views of things for which we already have an intuition. However, in my view, more effort should be devoted to describing things for which we do not have an intuition (e.g., high pressure) and to predicting what will happen in those cases. After all, that is what most chemical concepts were born for. I have the impression that we have been focused on giving mathematical definitions to concepts that were born without the need for a mathematical framework, and we have barely gone beyond that.
Émilie-Laure Zins
I am not convinced that there can be a single way to use the
results of a partition scheme to improve the clarity of all chemical concepts as suggested by the question. In the case of definitions of weak intermolecular and intramolecular interactions, it
seems to me that a quantitative approach based on energy
decomposition analysis followed by a principal component
analysis may be a promising way to clarify the definitions and
to properly classify the interactions.
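As a sketch of what such a workflow might look like (illustrative only; the EDA values below are invented placeholders, and scikit-learn is assumed to be available):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: intermolecular complexes; columns: EDA components
# (electrostatics, exchange-repulsion, induction, dispersion), invented values.
eda_terms = np.array([
    [-7.1,  5.2, -1.0, -2.3],   # hydrogen-bonded complex
    [-1.2,  2.0, -0.3, -3.8],   # dispersion-bound complex
    [-15.4, 12.1, -6.2, -3.1],  # strongly polar complex
    [-0.8,  1.1, -0.2, -1.9],   # another dispersion-bound complex
])

# Standardize, then project onto the two leading principal components,
# which can then be used to group/classify interaction types.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(eda_terms))
print(scores)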
Pedro Salvador
As Martin stated, that would depend on the particular concept. For instance, the concept of oxidation state lacked a clear definition for years, but the problem was not related to any partitioning; rather, it stemmed from the rather vague terminology used. After IUPAC recently revised the concept (albeit not in a fully satisfactory way, in my view), new first-principles schemes[103] (which at the end of the day also make use of a partitioning scheme) can be devised to match the so-called chemical intuition. Another illustrative example is that of "local spin". In this case, it is the nature of the mathematical object that needs to be partitioned (which particular formulation of the expectation value of the spin-squared operator), rather than the actual partition used, that brings about meaningful numbers for this concept.[104] As I stated in the previous question, concepts that can be accessed only by using specific partition schemes are undesirable. At the same time, a given partition scheme is put into jeopardy when it cannot reproduce even qualitatively the expected results/trends of a well-established concept. So, instead, concepts could be (wisely) used to improve the definition of partition schemes.
Laurent Joubert and Vincent Tognetti
It is not obvious, from our point of view, that the exactness of an energy partition (or even its usefulness) is correlated with its use for deciphering chemistry. To expand on this point, it is important to recall that (as already stated by Jerzy Cioslowski in the discussion of question 1) EDAs can be divided into two main categories: those obtained during the generation of the wavefunction or molecular energy, and those coming from a subsequent post-treatment. Let us, for instance, consider an MPn calculation: it will naturally provide a decomposition into various additive contributions, from the zeroth to the n-th perturbation order. Alternatively, the Kohn–Sham (KS) energy is by definition split into the KS kinetic energy, the electron–nucleus interaction energy, and the Hartree and exchange-correlation contributions. One can then wonder whether such decompositions convey chemical information. For instance, the second-order correction in MP treatments is often linked to dispersion (London) effects. Conversely, the chemical meaning of the KS kinetic energy (related to the fictitious noninteracting system, not to the real one) is far from obvious, as is that of the exchange-correlation term, since it contains corrections to both the kinetic energy and the electron repulsion. KS thus provides a direct energy decomposition without meaningful chemical information. Note that a second exact additive KS decomposition could be straightforwardly obtained from first principles using the orbital energies and the exchange-correlation potential. However, it is not free of drawbacks: (1) what is the meaning of the KS orbital energies? and (2) how should the other terms be interpreted? For the first point, the only exact result is that the HOMO energy is opposite to the vertical ionization potential if the exact exchange-correlation functional is used. The second problem can be cured using Mel Levy's recent potential shift,[105] which allows the total energy to be expressed as the sum of orbital energies alone. Such a scheme would certainly simplify energy decompositions, but it is still in its infancy.
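For reference, the two exact additive decompositions mentioned above can be written schematically (electronic energy only, nuclear repulsion omitted; the second line follows from the first by using the KS eigenvalue equation):

E = T_s + E_{ne} + E_H[\rho] + E_{xc}[\rho]
  = \sum_i \varepsilon_i - E_H[\rho] + E_{xc}[\rho] - \int v_{xc}(\mathbf{r})\,\rho(\mathbf{r})\, d\mathbf{r} .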
Paul W. Ayers
If the "clarity of definition" of a concept were dependent on the "results of a [single specific] partitioning scheme", then I would be reluctant to embrace that definition. On the other hand, if the "clarity of definition" of a concept is supported by the "results of [many nonspecific] partitioning schemes", then that concept is well-founded and defined qualitatively (even if the partitionings might give different quantitative results).
Farnaz Heidar-Zadeh
The results of partitioning schemes (and other concepts) can clarify their usefulness in capturing chemical and physical phenomena. That is, numerical results can demonstrate the domain of applicability of a scheme and lead us to improve our definitions. So, I believe the results can act as a feedback loop, guiding us to a better formulation of the problem and strengthening our intuition.
Juan Andrés
An underlying theme of the above questions has been the gap that exists, in general, between quantitative quantum theory and chemical concepts. In this context, we agree with the comment by Grunenberg:[96] "I am not writing against the use of qualitative chemical concepts per se, but against their quantification. In many cases, qualitative concepts even in combination with nonperfect experiments led to real progress in chemistry. However, one striking attribute of the aforementioned disputes in the literature is the fact that many of these quantifications are triggered by a conceptual farrago, and by this most of these scientific quarrels are inherently insoluble. Some even resemble mock discussions. (Interestingly, in the course of such discussions, usually one side is referring to a medieval scholastic "questio.")" Therefore, we need partition schemes that are precisely defined mathematically from the underlying physics in order to reach clarity on the definitions of concepts.
Yirong Mo
Conflicting results from different partition schemes will certainly attract attention and stimulate discussion and further research. In this way, the definitions of concepts can be progressively clarified and eventually quantified.
Eduard Matito
I refer to my answer to question 2: I believe this can only be achieved in the case of concepts that lack consistency. In these cases, there must be a consensus among different partitions (and within the community) before walking the dangerous path of changing (or clarifying, if you prefer) the definition of concepts. Again, I believe the concept of aromaticity serves as a nice example. In the 1990s, the definition of aromaticity given by IUPAC applied only to ground-state π-aromatic compounds; although the current definition of aromaticity is no less blurry than it used to be, it now recognizes different aspects of aromaticity such as electron delocalization, particular reactivity, thermodynamic stability, and certain structural features. Many of the latter features have been repeatedly confirmed by the corresponding computational measures/models of aromaticity. In fact, the work still continues. The "Aromaticity" conference organized in Riviera Maya in 2018 by Gabriel Merino, Miquel Solà, and Henrik Ottosson included a round-table session to find a consensus among the members of this community (experimental and theoretical) on an updated definition of the concept.
W. H. Eugen Schwarz
Yes, for instance: statistical correlations and factor and cluster analyses can work out whether one or more conceptual main components lie behind a group of related empirical or theoretical concepts, and how close the relations between them are. Theoretical schemes may need revision in the case of a poor relation to well-proven empirical concepts.
Alston Misquitta and Krzysztof Szalewicz
The SAPT partition scheme does indeed give a clear definition of concepts. Consider two polar systems. One of the main concepts appearing in many undergraduate courses is that at large separations this interaction is determined by the simple interaction of the permanent dipole moments, with its 1/R^3 decay. As R decreases, contributions from higher multipole moments become important. The sum of all these contributions agrees to high accuracy with the SAPT electrostatic energy. Once R is so small that the 1/R^6 terms matter, contributions from the induction and dispersion energies become important. At these R, such contributions can be expressed in terms of dipole moments and of the static and dynamic dipole-dipole polarizabilities. Again, both components are very accurately reproduced by SAPT. As R decreases further, overlap and exchange effects come into play. This does not imply any loss of physical insight, even though things get a bit more complicated. For example, the electrostatic energy is still just the Coulomb interaction of two charge distributions. For the induction and dispersion energies, we have to use the concept of the density–density response function, which also has a clear physical meaning. As the distance between the monomers is now of the order of a few angstroms, electrons can tunnel through the potential barrier. Tunneling is one of the main concepts of quantum mechanics, with a clear interpretation. Such clarity of definitions of concepts as outlined above cannot be achieved if a decomposition starts from the dimer wave function.
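For completeness, the leading dipole–dipole term referred to above has the familiar classical form (atomic units; R̂ is the unit vector along the intermolecular axis):

E_{\mu\mu} = \frac{\boldsymbol{\mu}_A \cdot \boldsymbol{\mu}_B - 3\,(\boldsymbol{\mu}_A \cdot \hat{\mathbf{R}})(\boldsymbol{\mu}_B \cdot \hat{\mathbf{R}})}{R^3} .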
Question 4: Are Partition Schemes Subject to Scientific Darwinism? If So, What Is the Influence of a Community's Sociological Pressure in the "Natural Selection" Process?
Ramon Carbó-Dorca
A chemically and physically grounded piece of research cannot be influenced by anything but the theoretical scheme itself. A partition scheme, if it really conforms to quantum mechanics, should be appropriate for any electronic system. If a particular system influences the construction of a partition scheme, then there can be no hope of obtaining a general procedure.
Shant Shahbazian
If, at the level of a community, there were a consensus on the "intrinsic" preference for a method or tool, then such a question would be irrelevant. So, asking such a question means that currently there is no consensus on the intrinsic preference among the available partitioning schemes. As I stressed elsewhere,[80] at the extreme, this means a lack of the scientific "objectivity" and "realism" that scientists are proud of and that are usually used to distinguish science from other human endeavors, like philosophy, politics, and religion, where intrinsic preference is always disputable. Accordingly, it sometimes seems to me that the implementation of an index or a partitioning scheme in a well-known or user-friendly software package has been the prime factor in its dominance over similar indices or schemes. In the short term, such factors are tolerable and probably even inevitable, since in the end science is also a human activity, but when such factors remain dominant after decades, to me, it is a sign of a crisis…
Martin Rahm
I hope we can all agree that EDAs should be subject to scientific Darwinism. Cross-comparison and "benchmarking" of EDA methods, even if difficult to do, is one way forward that should allow for more "evolutionary pressure". I stress this point further in my answers to questions six and seven. However, rather than risking extinction, I suspect that EDA methods subjected to comparative studies will thrive. Comparisons will bring out complementarities between different approaches and ultimately allow us to get a better overall grasp of electronic structure and chemical bonding. A nice example highlighting the benefits of comparative studies of EDA methodology is that of Fugel et al.[107]
Frank Weinhold
Yes, of course. By the evidence of their usage, adoption, and
cited-applications (or not) in the broader chemical community,
EDA approaches should be subject to “selection” according to
their impact on how chemistry is actually practiced and taught.
Roberto A. Boto
In my opinion, a partition scheme should not be influenced by anything but its theoretical framework. However, in theoretical chemistry there has always been a balance between quality and computational resources. It is often found that the more elaborate the theory, the more demanding the computation of the terms derived from it, and approximate routes must then be taken. Energy partitions are no exception, and the pressure of the community toward more complex, often larger, chemical systems could bias the selection of EDAs.
Angel Martín Pendás
Like anything in science, partition schemes are subject to Darwinism. Whether Darwinism in science chooses the best solution or only the fanciest one is another problem since, as a human activity, science does not escape fashion. Because, unfortunately, many EDAs are intimately associated with particular electronic structure paradigms (e.g., molecular orbital or valence bond descriptions), the waves in the former are clearly conditioned by those in the latter.
István Mayer
Yes, I think so. One needs results that help interpret the calculated and/or experimental quantities. The observation that Mulliken's gross populations often fail to provide chemically reasonable results motivated the quest for alternative schemes of population analysis. (This is so even though Mulliken's gross population is the definition that is consistent with the internal mathematical structure of the LCAO formalism.[106]) Where EDA methods are concerned, I have experienced pressure from chemists to produce a scheme in which the diatomic bonding energies are on the "chemical scale", that is, not equal to but comparable with the accepted bonding energies. (Also see my answer to question 6.)
David L. Cooper
Experience suggests that the partitioning schemes that are likely to be the most widely used in the scientific literature will not necessarily be the "best" ones, as determined by cross-comparison and "benchmarking", nor indeed those that are best suited to influencing the practice and teaching of chemistry by nontheoreticians. Although "fashion" can indeed be an important factor, ultimately it is the availability of particular methods in certain "standard packages" that could end up being the deciding factor. This could, of course, be ameliorated to some extent by the availability of free and easy-to-use facilities that implement other schemes.
Carlo Gatti
Yes, they probably do, but are we sure that the fittest to survive are the schemes with the greatest scientific rigor? The sociological pressure of the community may largely bias the selection process. Factors like ease of use, simplicity of analysis (a few energetic terms rather than a potential plethora of progressively finer dissections, as in the real space EDAs), and adoption by large and numerically dominant communities may clearly bias the game, offsetting the purely scientific selection process. Another disturbing counterweight might be the implementation of given schemes, rather than others, in popular quantum-mechanical codes.
Julien Pilmé
Yes, I agree with that. In my opinion, the "natural" selection process, which should be conducted according to scientific requirements, is hardly efficient owing to the lack of a straightforward link with experimental data. This process may thereby become more "fashion-driven" and more sensitive to sociological pressure. It also seems that the selection process is skewed by the ready availability (or not) of EDA methods in quantum chemistry software.
Jerzy Cioslowski
Like almost everything in science, the energy partitioning
schemes are subject to surges and ebbs in popularity, and even
extinction. However, I am reluctant to use the term “Darwinism”
in this context as the concept of the “survival of the fittest”
(if one defines the fittest as the most rigorous and scientifically
justified) obviously does not apply here. I am afraid that the
popularity of various definitions of chemical concepts is mostly
driven by the prejudices (politely called “chemical intuition”) of
those regarded as contemporary authorities in (not necessarily
theoretical) chemistry. This situation would correspond to the
evolution of species being due to supernatural powers (gods,
aliens, or whomever) eliminating living organisms according to
their preferences, which is not exactly what Darwin had in mind.
A simple prescription to avoid this undesirable status quo
would be axiomatization of chemical concepts. Spelling out a
set of axioms that all the concepts have to satisfy would greatly
reduce the room for personal preferences and thus diminish
the importance of the “human factor” in interpretation of electronic wavefunctions.
Paul Popelier
The metaphor of natural selection is useful for thinking about where the zoo of EDAs stands and where it should be heading. The answer to question 4 is yes, because natural selection is already happening. For example, the recent review by Skylaris et al.[2] compares and discusses six test sets. The authors conclude that "Overall the ALMO EDA scheme is shown to provide the most chemically sensible EDA results for our systems relevant to drug optimization." Unfortunately, this comparative study was confined to nontopological EDAs. Building on the Darwinistic metaphor, this means that topological EDAs happily live on some island or disconnected continent that has had no contact yet with nontopological EDAs (although Angel and co-workers have published such a comparative study[108]).
We should keep in mind how natural selection actually works. Ultimately, it is the interaction between the creature (i.e., a given EDA) and its environment (the other EDAs and the community of users) that determines whether the creature survives or not. I think that as a community we should be a more demanding environment, even if that means that an EDA becomes extinct. Experimentalists can only take the work of theoreticians seriously if it provides future-proof insight or correct predictions. There is no harm in two different EDAs coming to the same conclusion; what is a problem is if they contradict each other. Although I do not have precise references in mind, my feeling is that the community allows contradictions to exist and, even worse, allows them to thrive under the false banner of diversity and richness. This is dangerous for science. I am still dreaming of a consistent world of interpretations and predictions, one where F = ma is the only equation that puts a person on the moon, rather than F = m/a or F = ma². However, equivalent theories (e.g., Matrix Mechanics and Wave Mechanics, or Valence Bond and Molecular Orbital, or String Theory and Quantum Gravity) can coexist as long as they make the same predictions.
Gernot Frenking
Darwinism means the survival of the fittest. In that sense, I do think that the concepts considered most useful will eventually be adopted by the community. However, I see an ongoing preference for simple models that are intuitively easy to accept, even when the underlying assumptions are incorrect, over more complicated models that agree with a thorough quantum chemical analysis. There seems to be a human tendency, even in science, to prefer a known disease to unknown health, because one is afraid of the work that comes with the cure. The great acceptance of the NBO method is at least partly due to its soothing, tranquilizing effect on addicts of the Lewis model who do not want to be bothered by the complexity of the electronic structure. Sociological pressure could lead to a situation where the well-known illness is preferred over the unknown health. In other words, the frequent use of NBO results may lead to the acceptance of the method even when its shortcomings are well known.
Julia Contreras
Absolutely. I see two main trends, mathematical and physical Darwinism, and I totally advocate for physical Darwinism. Theories can be very elegant, but what I really expect from an energy partition (or from any other theory) is that it describe the physics of the system and provide a descriptive and predictive framework. What should not drive the selection is social pressure of the kind we have seen from lobbies in this community: "MY method is BETTER than the others (and I reject papers that say otherwise)."
Henry Chermette
Darwinism…, yes or no: selection of the surviving scheme(s) can be biased by factors like ease of use, availability in (widely used) software, and simplicity of analysis. And a scheme can be "rediscovered" 20 or 30 years after its (first) description in a (specialized, not popular) journal.
Émilie-Laure Zins
This question suggests a comparison between the theory of
evolution and the description of the chemical bond. This comparison seems to me to be particularly relevant and deserves a
short comment. Experimental chemists, during their observations or interpretations, have proposed and developed many
concepts, which can be compared (metaphorically) to different
living species: they can appear, persist, evolve, or disappear. It
can be hypothesized that some chemical concepts could be
merged, in particular, through the use and development of
adapted partition schemes. It seems to me that a “massive
extinction” in the zoo of chemical concepts, caused by a “universal” partition scheme, or by a limited number of partition
schemes based on the fundamental theorems of quantum
physics, would be beneficial to chemistry.
Laurent Joubert and Vincent Tognetti
We think that different communities may have preferences guided by historical reasons or, let us say, by certain traditions in interpreting the same results. Assume that we are interested in the energy difference between two conformations (which can be measured experimentally in some cases). The virial theorem actually offers us two explanations: (1) it is due to the electron kinetic energy; (2) it is due to the potential energy. Neither is preferable, both being correct, since, quoting Godard, "The essential difference between classical mechanics and quantum mechanics is that in classical mechanics the kinetic energy and the potential energy are independent (one is determined by momentum, the other by position), whereas in quantum mechanics T and V are simultaneously determined by the wavefunction." However, an experimental chemist is much more accustomed to thinking in terms of potential energy (linked to interactions between atoms) than of kinetic energy, and we are thus facing different habits in the various chemical communities. T has the advantage of being derived from a one-body operator, while V involves a two-body operator. When decomposed over N atoms, V thus generates about N² values, a number that may make the analysis inextricable. Notably, Popelier recently proposed a powerful relative energy gradient approach to select the most relevant contributions.[101] Certainly, such analyses will benefit from the big data and artificial intelligence fields. Perhaps they will thus end up favoring some partitions to the detriment of others. However, from our point of view, there is nowadays rather a coexistence of various theories within different frameworks (real-space analysis, wavefunction analysis). The fact that there is such a debate indicates that there is currently no natural selection process at work… One can also say that natural selection actually requires a very long evolution time, much longer than the age of quantum chemistry…
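The two readings mentioned above follow from the virial theorem for Coulombic systems, which, for exact wavefunctions at equilibrium geometries (vanishing forces on the nuclei), gives

2T + V = 0 \;\;\Rightarrow\;\; E = T + V = -T = \tfrac{1}{2}V, \qquad \Delta E = -\Delta T = \tfrac{1}{2}\Delta V,

so the same conformational energy difference can be attributed entirely to the change in kinetic energy or, equally well, to half the change in potential energy.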
Paul W. Ayers
Yes, but in a strange way. Science is "red in tooth and claw" in the sense that the most vociferous, forceful, ruthless, and prominent researchers (and referees) have an advantage. It is also true that people who write and convey their ideas most clearly (and forcefully) have an advantage. Yet, fortunately, vehemence and salesmanship are not everything (though I do feel we often underestimate their importance). Most scientists possess an idealism, and thus the drive toward utility and simplicity is strong. I think many of us seek decompositions/partitionings that "can fit on a T-shirt" (Occam's razor). And all of us seek definitions that are helpful to experimentalists. That is, while I often call my work on concepts "chemical philosophy", just as in traditional epistemology, the goal is to find precepts/concepts that have broad and practical utility for everyone.
Farnaz Heidar-Zadeh
There is no doubt that partitioning schemes (and other concepts) evolve over time and the fittest survive, that is, the ones
which are well-defined and make better predictions. However,
this natural selection is commonly disturbed by our biases and
prejudices as humans, which makes the scientific discourse not
very scientific. (This was a very disappointing realization for me
as a young scientist!)
Juan Andrés
As Ayala wrote[109] “There is a contradiction between Darwin’s
methodology and how he described it for public consumption.”
Darwin claimed that he proceeded “on true Baconian [inductive] principles and without any theory collected facts on a
wholesale scale.” He also wrote, “How odd it is that anyone
should not see that all observation must be for or against some
view if it is to be of any service!” The scientific method includes
two episodes. The first consists of formulating hypotheses; the
second consists of experimentally testing them. What differentiates science from other knowledge is the second episode: subjecting hypotheses to empirical testing by observing whether
or not predictions derived from a hypothesis are the case in relevant observations and experiments. A hypothesis is scientific
only if it is consistent with some but not other possible states
of affairs not yet observed, so that it is subject to the possibility
of falsification by reference to experience.” More important yet is that Darwin discovered natural selection, the process that accounts for the adaptations of organisms and their complexity and diversification, with implications in a wide range of research fields, including biology, geology, and also chemistry and physics. In our case, it is necessary to remember theories such as the Lewis model, Valence Shell Electron Pair Repulsion (VSEPR), molecular orbital (MO) theory, its extension to natural bond orbitals (NBO), Fukui's frontier molecular orbital (FMO) theory, valence bond (VB) theory, or even conceptual density functional theory (CDFT). These theories have their
advantages and shortcomings, work in some cases but not in others, and are still used in the current literature. It is therefore to be expected that the same holds for partition schemes and that many of them remain in use. The important and desirable thing is to know whether they are used correctly and how far one can go with the results obtained. A large dose of self-criticism is necessary to overcome the sociological pressure.
Yirong Mo
This may be true. Most computational chemistry practitioners are users of software and tend to follow the majority, using whatever is put into the software designed by others, since in this way their work can be more easily accepted by the community. In this process, prominent figures may lead the
majority to particular partition schemes.
Eduard Matito
Yes, energy partitions and, in general, chemical bonding tools are subject to extinction and, inevitably (and regrettably), they depend on their “popularity.” As has been pointed out, this popularity depends on their availability, their usefulness, whether they are easy to compute, the cost of their calculation, scientific “marketing” and, to some extent, scientific rigor. As developers of and experts in chemical bonding tools, we should be well aware of this and act accordingly. In the field of aromaticity, NICS became the most popular measure because it is available in a widely used package (Gaussian) and can be easily computed with a single keyword. NBO is known to computational and experimental chemists alike because there has been a large effort to advertise it (books and online tutorials, reviews, workshops, and hands-on sessions). I believe it is our responsibility to work to facilitate the use of the most useful and rigorous partitions, making them available and as user-friendly as possible. Otherwise, they become complicated and obscure objects that only a handful of
people (the so-called experts) can use and understand. Last but
not least, we should encourage benchmarks and comparisons
that put forward the boundaries and limitations of the energy
partitions. For instance, we identified that some atomic partitions
could not be employed to compute aromaticity indices[110] and
Ponec, Cooper, and others found that only with some atomic partitions does the bond index attain a maximum value close to the avoided crossing of the two lowest-lying states of LiH.[111–113]
W. H. Eugen Schwarz
Yes. First, some sociological pressure may be induced by charismatic colleagues and their followers influencing the fashions of
a time. We all know it, concerning “overlapping VB concepts” vs
“orthogonal MO concepts,” concerning aromaticity as a single-dimensional concept best represented by the NICS parameter, concerning QTAIM-based molecular partition schemes, or that
molecules only consist of atomic one-center parts and diatomic
two-center bonding parts, and so on. Second, the viewpoints in
common teaching are partly determined by historical traditions
and ideologies and by well-written and well-priced textbooks,
influencing the convictions of the majority in the scientific community. Third, the availability of technical options: an analysis
scheme will best survive if it can be applied with little investment of money or knowledge and with user-friendly tools; or if
it can be easily applied to available data sets. For instance, one
can better derive density distributions than wave-functions
from X-ray diffraction patterns, so X-ray diffraction research supports analyses of densities in three-dimensional space.
Alston Misquitta and Krzysztof Szalewicz
A necessary condition for a partition scheme to be of value is
that the sum of components should give an accurate interaction energy at all physically important dimer configurations,
so that a potential energy surface (PES) based on this
scheme can be used to predict observables in agreement
with experiment. Thus, partitions based on CCSD(T) satisfy this condition, but those based on DFT will not unless particular
care is taken to partly control the self-interaction error
(by using a hybrid or range-separated hybrid functional) and
an adequate dispersion correction is included. As discussed
above, SAPT satisfies this condition very well. The second
condition is that components are not excessively large in
magnitude so that there are no large cancellations in adding
them to form the total interaction energy. The third condition is that if components are meant to represent electrostatic, induction, dispersion, and exchange energies, they
should agree to within a few percent with SAPT. All methods
satisfying these conditions are basically equivalent from the
point of view of getting insights into physical mechanisms of
intermolecular interactions.
This set of conditions can be used to evaluate various EDAs
for the component called charge-transfer energy. While SAPT
includes all charge-transfer effects, it does not compute a separate charge-transfer energy, but rather this component is
included in the induction energy. As the other part of this energy is the polarization energy, which is negative at the two-body level, the charge-transfer energy cannot be smaller, that is, more negative, than the induction energy. For example, for the water dimer at the minimum configuration, the sum of the second-order induction, exchange-induction, and δ_int^HF terms is −2.24 kcal/mol.[114,115] This can be compared to the total SAPT
interaction energy of −4.65 kcal/mol and to the CCSD(T) interaction energy of −4.95 kcal/mol. Thus, SAPT gives a lower bound
for the charge-transfer term of −2.24 kcal/mol. Taking into
account the differences between SAPT and CCSD(T), one may
assign an uncertainty of ±0.3 kcal/mol to this value. However,
the non-charge-transfer part of the induction energy, that is, the polarization energy, is not negligible and is by definition negative. This energy can be estimated from the classical polarization model, and if the procedure developed in Ref. [116] is applied to the water dimer, one gets −0.8 kcal/mol in the first iteration, corresponding to second order in V, and an additional −0.2 kcal/mol from further iterations. Thus, the total polarization energy amounts to −1.0 kcal/mol, giving an estimate for the infinite-order charge-transfer energy of −1.2 ± 0.3 kcal/mol. Values much larger in magnitude, often found in the literature (see a discussion in Refs. [117,118]), cannot
be considered to represent true charge-transfer energies. Some
EDA schemes are consistent with our estimate, for example, the
ALMO method based on CCSD[119] gives −0.8 kcal/mol.
The method of estimating charge-transfer terms based on the
regularized-SAPT(DFT) approach, developed by one of us,[120]
gives a value smaller in magnitude, −0.4 kcal/mol (however,
this estimate includes only the second-order terms and would
increase in magnitude if higher-order corrections were
accounted for).
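For readers who want to track the bookkeeping, the arithmetic behind this estimate can be summarized as follows (a sketch using only the numbers quoted above; the shorthand symbols E_ind, E_pol, and E_CT are introduced here for convenience and are not official SAPT nomenclature):
\begin{align*}
  E_{\mathrm{ind}} &\approx -2.24~\text{kcal/mol} \quad (\text{second-order induction} + \text{exchange-induction} + \delta^{\mathrm{HF}}_{\mathrm{int}}),\\
  E_{\mathrm{pol}} &\approx -0.8 - 0.2 = -1.0~\text{kcal/mol} \quad (\text{classical polarization model, Ref. [116]}),\\
  E_{\mathrm{CT}}  &\approx E_{\mathrm{ind}} - E_{\mathrm{pol}} = -2.24 - (-1.0) \approx -1.2 \pm 0.3~\text{kcal/mol}.
\end{align*}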
Question 5: To What Extent Do/Can/Should the Investigated Systems Influence the Choice of
a Particular Partition Scheme?
Ramon Carbó-Dorca
It seems difficult to foresee how partition schemes clarify anything, being somehow (or quite) arbitrary. Perhaps they could
add more obscurity to the certainly not very clear usual chemical concepts.
Shant Shahbazian
In principle, a mathematically rigorous partitioning scheme
must be applied to any molecular system regardless of the size,
type of atoms, or complexity of its electronic structure. However, in practice, the “interpretational” problems emerging from
applying a scheme to certain systems may have a strong influence on favoring or dismissing the scheme. As an example, the
popular “misinterpretation” of the (3, −1) critical points (CPs) emerging from the topological analysis of molecular electron densities as indicators of bonds has been a source of confusion.[121,122] There are certain systems for which, if one insists that (3, −1) CPs are “bond” CPs, that is, BCPs, then inevitably there
would be a clash between the quantum theory of atoms in
molecules (QTAIM) analysis and most of the other partitioning
schemes on the presence/absence of bonds between certain
atoms. In such problematic cases, people usually try to avoid
the use of the QTAIM analysis, though a proper reinterpretation
may fix the problem.[97] However, such problems are not confined to just misinterpretations and there are cases where a
partitioning scheme does not support (or is at odds with) an
established viewpoint regarding a system and people try
to avoid the scheme. Such “expectational bias” regarding
what “must” emerge for a system may unconsciously
(or even consciously) affect the preference/dismissal of a partitioning scheme. For me, this is an unpleasant element of “subjectivity”…
Martin Rahm
I suspect it often does not. This is in large part because most
EDA methods require quite some expertise to execute and
interpret. There are also a lot of methods out there, and it is
not always easy to evaluate pros and cons in an objective manner. Publishing work addressing chemical validation and cross-validation of EDA methods, discussed in questions six and
seven, should help in this respect.
Frank Weinhold
In principle, the EDA method of choice should be independent
of the problem. To the extent that such influence exists (i.e., for
possible subjective choice of EDA method or “reference fragments”), it seems to represent a particularly dangerous aspect
of the partitioning approach.
Roberto A. Boto
A well-defined partition scheme should be valid for any chemical system. Energy partitions are based on chemical concepts
such as covalency, ionicity, and polarizability. These concepts
should be well-defined regardless of the nature of the system.
Otherwise, we may create a chaotic scenario with a panoply of
partitions, one for each chemical system.
Angel Martín Pendás
Ideally, it should not. However, it is usually the case that, as it
happens with density functionals, basis sets, or many other of
our computational knobs, ideas propagate that advise the use
of this or that method to deal with these or those problems. In
many cases, the partition scheme is chosen a posteriori, a practice that should not be allowed.
Carlo Gatti
Generally speaking, if someone believes that his or her favorite
partitioning scheme is suited for some classes of compounds
and (much) less for others, he or she should probably take a step back and ask what prevents his or her favorite method from being general enough to be applied equally well to any
chemical system. This is an important exercise that may lead to
an improvement of the scheme or to abandon it in favor of a
more general one. I also believe that more than the investigated system, it is the chemical question to be addressed that
may influence the choice of a particular partition scheme.
Paul Popelier
I agree with the general consensus building up here, which is
that the nature of a system should not influence the choice of
the partition scheme used, in the end. Unfortunately, this is not
the case at the moment. For example, anionic systems or systems with large rings need diffuse Gaussian primitives in order
for their wave functions to be properly described. Partitioning
schemes that depend on the location of the center of these
primitives suffer from the use of diffuse functions (because the
mapping between center and ownership starts breaking down).
Hence such partitioning schemes cannot be used in that case
or they have to be modified. Anthony Stone did the latter by
injecting some real-space partitioning character into his original
DMA scheme.[123]
Pedro Salvador
Of course, I also agree that in principle any reasonable partitioning scheme should be applicable to any chemical system
at hand. However, I also have the impression that EDAs and
topological EDAs (borrowing Paul’s terminology) are somehow
designed to answer different types of questions, so the nature of
the system under scrutiny could drive one to use one or
another scheme. Unfortunately, there are not too many works
where both topological and nontopological EDAs are applied to
the same problem aiming at answering the same questions
(I do remember a nice poster at the ESB2 Oviedo this year
showing striking similarities of both approaches).
Jerzy Cioslowski
As illustrated by the recent proliferation of density functionals,
there is a great temptation (especially among those not well-versed in quantum chemistry) to select the theoretical
approach on the basis of the expected answer. It is quite disconcerting to observe the ongoing harkening back to the times
of semiempirical approaches when there was at least one
method for each set of electronic properties (CNDO/S for excitation energies, ZINDO for molecules with transition elements,
MNDO and its endless modifications for geometries and heats
of formation, etc…). Back then, this plethora of approaches was
justified by the very limited power of computer hardware that
dictated the use of various approximations. Since this is not an
excuse nowadays, whenever carrying out computations of electronic wavefunctions or their interpretations, one should strive
to limit the variety of the methods employed. Otherwise, there
will always be pressure to legitimize one’s interpretative prejudices and/or experimental results with a suitably chosen “theoretical justification”.
Gernot Frenking
The investigated systems and associated questions rightfully
influence the choice of a partitioning scheme. Different systems
and questions may call for different methods. For example, the
chemical bond in LiF may be analyzed in terms of interactions
between the ions Li+ and F− or the neutral atoms Li and F. The
former choice of the fragments is better suited to investigate
the final bond, while the choice of Li and F as fragments
encompasses all changes in the electronic structure along the
bond formation/dissociation. It is a strength, not a weakness, of partitioning schemes to be able to choose different fragments as interacting moieties. However, it is often only the combination of several methods (charge and energy decomposition schemes) that provides a faithful account of the electronic
structure in terms of a model.
Laurent Joubert and Vincent Tognetti
An important related question is: can we compare EDA results
for systems that strongly differ? In question 3, we stated that
there are two categories of EDA and we discussed there the
first one. The second EDA category gathers those carried out
after the initial quantum chemistry calculation. They aim at
dividing energies into physicochemical components (charge
transfer, polarization, and induction) whose definitions are in
general not unique, and/or into subsystems (atoms, substituents…) for which various partitions also exist. All these energy
decompositions can be based either on the wavefunction
and/or the electron density, but they are performed independently of how these functions were obtained. For this reason, they may or may not reproduce the molecular energy obtained at the previous step. When they do not, two corrections
obtained at the previous step. In such a case, two corrections
are often implemented: (1) defining an ad hoc new contribution
to fill the gap, (2) scaling the energy sum to the targeted
energy. This strategy is very often used with the virial theorem.
Indeed, for the exact wavefunction, the molecular energy is
equal to minus the electron kinetic energy, or, equivalently, to
half the potential energy. Unfortunately, the scaling parameter
that is used in practice can be significantly different from one system to another. This can lead to questionable conclusions when comparing molecules of very different types.[4] Coming back to question 5, it thus appears that some EDAs should not be used to investigate a molecular dataset composed of several different classes. From this point of view, the investigated systems will influence the EDA choice by precluding the use of
some of them.[124]
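As an illustration of the kind of virial-based scaling alluded to above, one common recipe (a sketch only; the precise scaling parameter used by any given EDA may differ) rescales the partitioned kinetic energies so that the decomposition reproduces the total energy:
\begin{align*}
  &2T + V = 0 \;\Rightarrow\; E = T + V = -T = \tfrac{1}{2}V \qquad (\text{exact wavefunction, equilibrium geometry}),\\
  &E_A \equiv \lambda\,T_A, \qquad \lambda = \frac{E}{T} = 1 + \frac{V}{T}, \qquad \sum_A E_A = \lambda\,T = E.
\end{align*}
For the exact wavefunction at equilibrium, V/T = −2 and λ = −1, recovering E_A = −T_A; in practice λ deviates from −1 and varies from one system to another, which is precisely the source of the concern raised above.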
Miquel Solà
I agree that ideally, the choice of the EDA method employed
should not be dependent on the system studied. What is critical, however, is the definition of the fragments to analyze a
given bonding situation. For instance, the answer to the question of how covalent the LiF bond is may change from 14% to 91% depending on whether the fragments considered in the EDA are Li+ and F− ions or Li and F atoms, respectively.[125] Moreover, in a series of similar bimolecular chemical
reactions, the use of the activation strain model[126,127] provides
deep insight into the origin of the energy barriers associated with
these chemical reactions by taking the reactants as the fragments of choice. However, if one wants to analyze the whole
reaction profile from reactants to products, then after the transition state, in the product region, the use of reactants as fragments is generally not a good choice. The whole reaction
profile is probably better analyzed considering the different
atoms as fragments. In addition, if reactions are unimolecular,
in most cases, it could be hard to define two fragments to perform an EDA and probably considering atoms as fragments
may be the smartest choice. Finally, the analysis of isomerization energies can be performed using different fragments. In
many cases, one can use the same fragments to build the two
isomers just placing them with different orientation (we called
this procedure the turn-upside-down approach).[128,129] With
this procedure, one usually gets a deep understanding of the
physical origin of the isomerization energy.
Paul W. Ayers
I wish there were a universally applicable and useful partitioning scheme, but I’m not convinced any such scheme exists
at present. For example, some methods are strongly dependent
on a specific electronic structure ansatz, and would not be
applicable, for example, to a diffusion quantum Monte Carlo
calculation, or to a calculation that uses non-atom-centered basis
sets (e.g., plane waves). Others could not be applied to a lattice
(e.g., a Hubbard or Pariser–Parr–Pople model for an aromatic
system), or to a periodic solid. Even among the few methods
that are universal in scope (and there are very few such
methods), their utility is unlikely to be universal.
Farnaz Heidar-Zadeh
In theory, a partitioning scheme should be applicable to any
system, because the same physical laws apply to them all. Any
deviation from universal applicability is a warning sign that
should give one second thoughts about using the scheme. If a
scheme is suitable for only a specific class of systems, it is very
unlikely to be physically sound.
Juan Andrés
It is not appropriate or desirable to investigate a system in
order to choose a particular partition scheme. We must remember that history tells us that opening this path can cause great
confusion with the proliferation of different methods. One has
to remember the extensive number of semiempirical methods,
basis sets, functional hybrids, and so on, as also have pointed
by Profs. Martin-Pendás, Cioslowski, and Ayers.
Eloy Ramos-Cordoba
Ideally, the molecular system under study or the electronic structure method employed should not influence the choice of a particular EDA scheme. In practice, the system dependency seems to be unavoidable, since there are EDAs which are only defined for some electronic structure methods, and some of them are restricted to be used in conjunction with atom-centered basis sets. In this sense, topological energy decompositions seem to be more general, since they can always be employed provided the 1- and 2-particle density matrices are available.
W. H. Eugen Schwarz
They must. The field of chemical substances with static and reactive properties is unboundedly rich. Useful and fruitful partition schemes should be intuitively understandable, that is,
simpler than reality. A scheme that is applicable to everything
will be too complicated. It is better to have different schemes, say, for primarily and for weakly bonded systems, or for weakly
and for heavily electron-correlated systems.
Alston Misquitta and Krzysztof Szalewicz
SAPT partition works for all closed-shell dimers and so should
all other schemes.
Question 6: Do We Need More Focused
Chemical Validation of EDA Methodology and
Descriptors/Terms in General?
Ramon Carbó-Dorca
Maybe what is needed is a reflection on the practical chemical use of EDA. Perhaps the problem lies in the fact that there appears to be a large variety of procedures (see the recent review by M. J. S. Phipps, T. Fox, C. S. Tautermann, and C.-K. Skylaris[2]). In the 3 years since that publication, possibly even more techniques have been defined.
Yirong Mo
Maybe not much. See question 3.
Eduard Matito
I believe it depends on the motivation. Making a choice of the
partition based on the premise that it provides the answer you
are looking for is obviously scientific misconduct. As Jerzy points out, a scenario in which a large collection of energy partitions is available, each being adequate for a particular problem, is highly undesirable. Ideally, any method should be universally valid and provide a correct and complete description of the system. However, in practice, they are not. If validation tests (see questions 6 and 7) offer us some hints of the flaws and boundaries of current energy partitions, I find it adequate to use these results to select (or discard) an energy partition
scheme. For instance, a method having a slow convergence
with the basis set size should probably be discarded in situations where we cannot afford a sufficiently large basis set.
Martin Rahm
Yes, we do! And this one I feel quite strongly for. EDA methods
are most often used as descriptive tools, that is, they analyze a
given electronic structure and provide a picture of the bonding
situation. There are many elegant EDA definitions that can provide detailed information about electronic structure in this
manner. Whereas this can be useful, the ultimate goal of any electronic structure analysis should be predictive utility.
“Chemical validation” can, of course, come in many forms, but
it offers the safest route to demonstrating predictive utility. One
approach toward “chemical validation” is to attempt thorough
answers to the following three questions:
1. What chemically relevant experimental observable does the
EDA-term [X] correlate with?
2. When does the correlation break down?
3. Why does the correlation exist [here] and not [there]?
Shant Shahbazian
I find the question to some extent vague. For the validation process, we must have a reference set of data that the EDA method under study should reproduce properly. In the case of the validation of ab initio methods, these are thermodynamic or spectroscopic experimental data whose authenticity as an objective reference set no one disputes. What is the reference set of data in the case of an EDA? Can we come to a “consensus” on what the “standard” reference set for such chemical validation is? I am currently pessimistic about the whole idea (please
also check my answer to question 9).
István Mayer
Ideally, the energy decomposition produces diatomic energy
components representing the interatomic bonding (or repulsion) at the actual configuration of the molecule, and one-center ones describing the promotion of the atoms when the
molecule is formed. However, in several EDA schemes (including semiempirical ones, e.g., MNDO) one encounters the difficulty that delocalizations responsible for bond formation also give rise to ionic wave function components (in VB terminology) that increase the intra-atomic electron repulsion energies and thus also the apparent atomic promotion energies. As compensation, one obtains very negative diatomic energy components that are not on the “chemical scale.” No doubt, such a straightforward energy decomposition may be quite useful for comparing different bonds, etc., or even for making some predictions.
However, chemists are inclined “not to buy” these large
(in absolute values) numbers. This reservation of fellow chemists served me as a strong stimulus to introduce a
corrected scheme.[130] The ionic terms are due to the bond formation, so their interelectronic energy was distributed between
the different bonds in accordance with partial bond orders
formed by different “effective atomic orbitals.”
Frank Weinhold
Yes (see question 3).
Angel Martín Pendás
I tend to agree with the need of validation to properly screen
the different methods available, but I also acknowledge the difficulty in finding a suitable set of quantities that might be taken
as a reference validation set. To focus just a bit, let us simply take the Pauli repulsion term of many EDAs. What chemically relevant observable (in Martin’s words) does it correlate with?
In the absence of a consensus on what types of energetic terms
should be allowed/not-allowed in a partitioning scheme, validation is desirable but difficult.
Paul Popelier
Yes, I believe so. I come back to this question in question
7 because the latter question overlaps with question 6, in which
the word “focused” pops out. I like to interpret this word in a
sociological sense. The community of interpretative theoretical
chemistry, especially that of Quantum Chemical Topology
(which is younger and somewhat lags behind), should scour
more for “hot case studies” and work on them. These regularly
appear in the popular scientific magazines (e.g., Chemistry
World of the Royal Society of Chemistry). We can then test
(and showcase) partitioning methods. In the medium and longer term we can find out what we can do for experimentalists
(e.g., material scientists and synthetic chemists). After all, there
is a reason why the largest scientific funding body in Britain,
called EPSRC, launched as one of its main research themes the
Grand Challenge of “Directed Assembly” (short title). The associated vision for the next 50 years is, in EPSRC’s own words, to be
able to control the assembly of matter with sufficient certainty
and precision to allow preparation of materials and molecular
assemblies with far more sophisticated and tuneable properties
and functions. To me, our goal should be to produce a minimal,
trustworthy and well-thought-through partitioning scheme that
delivers trustworthy and consistent insight. Partitioning
schemes should not be “afterthoughts” to what experimentalists already know, nor should they confuse experimentalists with contradictions. Instead, they should guide and boldly but robustly confirm or correct the intuition of the experimentalist.
Jerzy Cioslowski
In my opinion, “chemical validation” should be limited to checking whether energy components (and other descriptors) computed for similar systems are themselves similar. This may also
include the “chemical scale” argument of István Mayer, that is,
that diatomic (bond) contributions should have values similar in
order of magnitude to those encountered experimentally (bond
dissociation energies, etc…). Anything more than that amounts
to falling back into the trap of “chemical intuition”, which is
what one presumably hoped to avoid from the start.
Gernot Frenking
“Chemical validation” is an ill-defined fuzzy expression. I agree with Jerzy Cioslowski that chemical validation leads to the danger of using “chemical intuition” as a measure of the validity of the EDA results, which is one step toward alchemy. A useful partitioning scheme should provide a self-consistent ordering scheme for the pandemonium of chemical facts. The physical interpretation of the energy terms will always be debatable.
Julia Contreras
YES! In this direction, chemical interpretation has always been
too much influenced by pre-QM concepts, trying to reproduce what was already there. However, these concepts were introduced to predict composition and reactivity. I think we should go
back to these roots. Just like in many other fields where
theoretical/computational answers are difficult and the field is
still at a strong development stage (e.g., solvation energies,
molecular solid structure), we could propose “games” to predict
the outcome of a given molecular change (not easy to calculate). Extremely naive, but double-blind tests are a wonderful
way of testing methods! Of course, this means being able to
predict the behavior of energy terms upon perturbations, a point to which not much attention has been paid…, and which chemists overcame long ago. However, it would provide a
clear-cut (and fun) way of taking the next step in energy
decomposition.
Paul W. Ayers
Hell yes. We should be careful about what we mean by “validation.” There are a few molecules that might be proposed as such canonical examples of a concept that any EDA/partitioning scheme that disagrees with them should at least be heavily scrutinized, and probably discarded outright. (For example, a method that
did not predict that benzene has an “aromatic stabilization
energy,” however one might define that, has questionable utility.) There are more sequences of molecules for which a clear
chemical trend may be asserted, and EDA can be validated
against that.
Farnaz Heidar-Zadeh
Definitely, and this is long overdue! The systematic study of
partitioning schemes in order to put them on equal footing is
necessary and gives us a better understanding of their
strengths and shortcomings. However, we first need to agree
on this “validation protocol”. As elaborated by many contributors, as a community, we need to make a comprehensive list of
desirable axioms/features (distancing ourselves from intuitive
measures) to assess and scrutinize various schemes and concepts. The five conditions suggested by G. Frenking are a great
starting point.
W. H. Eugen Schwarz
Yes. Statistical data analyses (cluster analyses, factor analyses)
can clarify what is behind a group of related concepts, and
quantify the correspondence of different partition schemes.
Juan Andrés
Yes, chemical validation of EDA methodology is mandatory. But
this opens the door to a path with many obstacles: one has to work out to what extent a method and/or model can be used and gives good results in particular situations, which in some cases coincide with the experimental results. In doing so, the fundamental problem that must be managed is transformed and masked, namely, achieving a methodology based on quantum mechanics, which deals with observables and rests on an adequate mathematical apparatus. One can remember, for example, how the semiempirical method MINDO/3 failed in the study of systems involving hydrogen bonds, or how, depending on the type of functional used, one can calculate band gap values in solids that agree with experimental values. This is a computational task. On the other hand, we also need descriptors/terms in general, but many of these descriptors/terms derive from chemical concepts that can be considered fuzzy concepts, comparable to unicorns or even noumena. This is because there exists no physical observable associated with them. It is therefore a very challenging task to reach this aim; we first need to clarify EDA methodology and descriptors/terms in order to obtain a chemical validation of both.
Yirong Mo
Absolutely.
Eduard Matito
Indeed! This is probably a quite arduous task but is certainly
needed in the field. Given the proliferation of energy partitions,
“outsiders” from the chemical bonding community need guidance and, therefore, benchmarks (see question 7) and “chemical
validation tests” are essential. However, as many people
pointed out before, it is not straightforward to design a chemical validation test. In this sense, it is important to put the focus on the reliability of the tests (for which we need consensus
within the community) rather than on having extensive tests
that cover the many facets of energy partitions. Indeed, some
aspects of energy partitions cannot be easily tested (for
instance, the Pauli repulsion term mentioned by Angel) and,
hence, the validation test is deemed to be incomplete. However, this should not preclude the search for such validation
tests, because they not only help in classifying and assessing energy partitions but also provide important hints to modify and improve current energy partition schemes. Maybe a challenge for our community in the next editions of the ECCB conferences (and bond slams) could be suggesting chemical validation tests that would subsequently be openly debated in a forum like this until a consensus test set is obtained.
Alston Misquitta and Krzysztof Szalewicz
No, this is soft science with a weak connection to experiments.
EDAs as such have no predictive power (the methods that are decomposed may have such power, but that power is independent of any EDA applied). Such research should be reduced to a minimum.
Question 7: Is There Any Interest in
Developing Common Benchmarks and Test
Sets for Cross-Validation of Methods?
Ramon Carbó-Dorca
The fact is that every EDA technique must be described, and probably has been, with a benchmark set of its own. However, the question is: to prove what? If the answer is: that it works! Then one needs to keep asking what is meant by working: does it mean that a given EDA technique explains a molecular situation (perhaps some kind of interaction) better than others do? If so, why are there different abilities (as it seems there are) to describe some particular EDA nuances?
Martin Rahm
I very much hope so. Reasons for validation against experiment
are outlined in the answer to the previous question. Benchmarks can help in this by including experimental data, but could additionally provide another important service to the community: facilitating more straightforward comparison of EDA methods. This is beneficial for several reasons. Benchmarks will allow newcomers to the field to more easily get acquainted with the advantages and drawbacks of the different methods,
which is of relevance to question four. Benchmarks will also
help the community to come to better terms with issues raised
in all previous questions, 1–6, and question nine. For example,
by revealing which EDA terms and descriptors do or might relate to the same chemical concepts, in other words, which terms show the same trends in relation to relevant chemistry. Ultimately, the ability to cross-correlate different
approaches should help highlight complementarities between
EDA methods and aid their future development. One successful example of EDA-term comparison is the work of Racioppi
et al.[131]
Shant Shahbazian
As I stressed in my answer to question 6, I find it extremely
unlikely that a standard set of data (I mean a set of numbers) may be proposed that all scientists find equally objective and reliable. Think about the concept of bond energy (or something similar to it), which most people would probably agree a good EDA method must deliver as its output. How may we find the proper set of bond energies to start the cross-validation? If there is no such standard set, then
any cross-validation study will simply reveal the similarities and
differences between the applied EDA methods, not the “objectivity” of any EDA method (please also check my answer to
question 9).
Paul Popelier
Yes, having common benchmarks and test sets would be nice.
Developers of force fields, density functionals, and machine
learning methods already work with quite a few test sets that
offer their development communities clarity on progress made.
Designing and using those sets is easy because there is always
a crisp and clear measure of success, that is, a “golden reference”
such as CCSD(T)/CBS wave functions or experimental properties.
The problem with test sets for energy partitioning schemes is
the usual difficulty that ab initio calculations and experiment
typically deliver whole-system information only. Nevertheless, it
appears that some kind of test set has already naturally
emerged in the case of the bond critical point problem. In an
attempt to settle the controversial relation of this critical point to chemical bonding, papers often report on the same molecular systems. Closer to the subject of EDA comparison, the recent
review by Skylaris et al.[2] compares and discusses six test sets
containing ions, water, and biomolecules (with hydrogen bonding and π–π stacking interactions).
István Mayer
Yes, it could be of interest to have a selection of different molecules with fixed geometries and a few different basis sets, for
which the results of each method are tabulated. Results
obtained with different wave functions (HF, DFT, CAS-SCF, CCA,
etc.) could be included as well. For the Hilbert space analysis,
basis sets of sufficiently atomic character (e.g., STO-nG, 6-31G**, or cc-pVTZ) should be considered, and no diffuse functions (augmented basis sets) should be admitted.
Frank Weinhold
Self-correlation among closely related EDA variants is of little
value. Tests with experimental data (such as those suggested in
the reply to question 3) could give a more effective reality
check to cull the ranks of proposed partitions. The development
of the field would benefit from some common benchmarks that
are well chosen to represent a diversity of phenomena and species (cf. question 9). Only then can meaningful differences in
methods be illuminated and discussed.
Angel Martín Pendás
An interesting initiative might be choosing a selection of molecules, basis sets and methods to construct an EDA
benchmarking data set. Although, in agreement with Shant, it
would be difficult to find a proper set of values for the chemical
concepts that would then be cross-validated, a simple cross-correlation among the different EDA energetic terms would
provide relevant data about their similarities and differences.
Carlo Gatti
Since aims might be quite diverse from method to method (see
my answer to question 1), I envisage complementary insights, more than cross-validation, from the suggested procedure.
However, common benchmarks and test sets could be useful to
observe which concepts and conclusions survive the various
methods. If concepts and conclusions were found to vary significantly within a class of related EDA schemes, then this would be a serious indication that these schemes might be deceptive and seemingly unphysical.
Pedro Salvador
Coming back to my answer to Q1, topological EDAs differ only in the underlying atom-in-molecule definition used. Thus, rather than merely energy-based test sets, which in agreement with Angel and Shant are rather difficult to build, one could make up
a multidimensional test set aiming at finding the best AIM definition, analogous to the aromaticity test set put forward by Feixas
et al.[132] in order to grade the different aromaticity indicators.
Some work along this line has already been attempted. For
instance, the harpoon effect expected in the dissociation of LiH
cannot be recovered with Becke’s or Hirshfeld’s AIM partitioning.[112] Iterative Hirshfeld was also unable to reproduce the
higher carbon–carbon electron delocalization in para vs meta
position in benzene.[110] Semiqualitative energy-based tests could
be added to the mix. For instance, when using Hirshfeld-type
approaches in X–H bonds, the value of the atomic weight function of H at the nucleus significantly differs from 1, and consequently, that of the X atom is nonzero. Is the diatomic electron–
nuclear attraction contribution of X–H bonds reasonable?
Jerzy Cioslowski
The only reason for embarking upon cross-validation of different definitions of a given chemical concept (including energy
components/contributions) should be the detection of the
cases where the concept in question is (using the physicist’s language) not a scalar. For example, as is well known, all the known definitions of ionicity are highly correlated, which means that ionicity is essentially specified by just one set of values. A counterexample is provided by aromaticity, which is (at least) a two-component vector, that is, it encompasses two sets of
values that are linearly independent. Thus, if one insists upon
cross-validation of energy partitioning schemes, it should be
carried out with a set comprising a large number of “unusual”
molecules, the results being subject to the principal component
analysis.
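As a concrete illustration of the cross-correlation and principal component analysis advocated here (and echoed by A. Martín Pendás and W. H. E. Schwarz above), a minimal sketch is given below; the numerical values and the 6 × 4 layout are invented placeholders, not results of any actual EDA.
import numpy as np

# Hypothetical descriptor values (illustration only): rows are molecules of a
# benchmark set, columns are the same descriptor computed with four schemes.
X = np.array([
    [0.14, 0.08, 0.12, 0.10],
    [0.35, 0.30, 0.33, 0.31],
    [0.55, 0.49, 0.52, 0.50],
    [0.72, 0.68, 0.70, 0.69],
    [0.91, 0.86, 0.88, 0.87],
    [0.25, 0.21, 0.23, 0.22],
])

# Correlation matrix between schemes: near-unity off-diagonal elements would
# indicate that the schemes measure essentially the same quantity.
R = np.corrcoef(X, rowvar=False)

# Eigenvalues of R play the role of principal components: the number of
# eigenvalues significantly larger than zero estimates how many independent
# "components" underlie the set of definitions (one for a scalar concept such
# as ionicity, two or more for a vector-like concept such as aromaticity).
eigvals = np.linalg.eigvalsh(R)[::-1]
explained = eigvals / eigvals.sum()

print("correlation matrix:\n", np.round(R, 3))
print("fraction of variance per principal component:", np.round(explained, 3))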
Gernot Frenking
It is a good idea to have a test set of species, which are then
used to explore the performance of a method for different
types of electronic structures. For molecules, this should
include, for example, compounds with polar and nonpolar as
well as localized and delocalized bonds and it should encompass transition metal complexes as well as main group compounds with “normal” valency and “hypervalent” compounds.
Miquel Solà
Benchmarks to prove the reliability of the different energy
decomposition analysis (EDA) approaches are highly desirable.
While the dissociation energy is an observable, the components
of the dissociation energy obtained from an EDA are not
observables. To validate concepts or quantities that cannot be
precisely defined mathematically from the underlying physics,
such as the components of the EDA, Ayers et al.[87] proposed taking an axiomatic approach, which consists of listing the chemical, mathematical, and computational properties that one desires a concept to possess. In our group, we followed this approach to assess the reliability of a series of descriptors used to quantify aromaticity, a quantity that is not an observable either.
To this end, we designed benchmarks containing a series of
tests.[132–134] The chosen tests fulfilled two requirements: first
and most important, they were based on the accumulated
chemical experience in such a way that most chemists would
agree about the expected aromaticity trend of the analyzed test and, second, the systems involved were relatively small so as to facilitate fast application. As an example, we considered different deformations of benzene, such as the bond
length alternation (BLA). Any good indicator of aromaticity
should detect a reduction of aromaticity of the benzene ring
when BLA increases. Or, for instance, when going from benzene
to pyridine (one heteroatom in the ring), pyrazine (two heteroatoms in the ring) and triazine (three heteroatoms in the ring),
aromaticity should decrease. In the case of EDA, one may proceed similarly. It is probably not a good idea to consider results
for a particular molecule instead of analyzing particular trends
in a series of molecules. Let us consider for instance LiF.
According to IQA calculations,[97] covalency, defined as the ratio (in percent) of the orbital interaction to the sum of the electrostatic and orbital interactions, is 14%. On the other hand, for
the same molecule, a Morokuma-like EDA considering Li+ and
F− as fragments indicates that covalency represents an 8% of
the total stabilizing interactions.[125] It is not possible to know
which of these two results is the correct one. To make things
more complicated, if the fragments considered in the Morokuma-like EDA are the F and Li atoms, the covalency of LiF increases to 91%.
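For clarity, the covalency measure behind the 14%, 8%, and 91% figures quoted here can be written compactly as follows (a sketch of the definition stated above; ΔE_orb and ΔE_elstat denote the orbital-interaction and electrostatic terms of whichever partition and fragment choice is being used):
\begin{equation*}
  \%\,\mathrm{covalency} \;=\; 100 \times \frac{\Delta E_{\mathrm{orb}}}{\Delta E_{\mathrm{elstat}} + \Delta E_{\mathrm{orb}}},
\end{equation*}
so that the three figures differ only in the partition (IQA vs. Morokuma-like EDA) and in the fragments (ions vs. neutral atoms) entering the two terms.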
The reader could ask whether the ionic or the radical fragments are the best option to discuss bonding in LiF. One may argue
that radical fragments should be preferred because, for the gas-phase LiF molecule, the homolytic dissociation costs less energy
than the heterolytic one, the latter being favored only if one
includes at least five water molecules, that is, for the LiF(H2O)5
species.[135] However, in the equilibrium geometry, the electronic distribution is closer to Li+ and F− ions than to F and
Li atoms, so maybe results employing ionic fragments are more
realistic. Anyway, using one or the other fragmentation scheme
is a matter of choice and, in principle, both are acceptable and neither of them is unphysical, although the results differ enormously. In this case, the IQA analysis in terms of atoms has the
advantage of not requiring a fragmentation scheme for its
application. Because of the difficulty of discussing EDA results for a single molecule, except in some particular cases (like LiH, vide
infra), I consider that an EDA benchmark should discuss trends
and not particular molecules. For instance, for the alkali metal chloride salts, the covalency should decrease in the order LiCl > NaCl > KCl > RbCl > CsCl, following the decreasing ionization potential of the alkali metals. Or, for the lithium halide salts, considering the trend of the electron affinities of the halogen atoms, one could reach the conclusion that the covalency should increase in the order LiCl < LiBr < LiI < LiAt. Another
interesting example corresponds to the dissociation of LiH for
which a maximum of covalency should be found around the
avoided crossing at about 3.5–4 Å.[113] Pauli repulsion energy,
on the other hand, should increase in the order H2 < LiH < BeH
< BH < CH < NH < OH < HF, at least if all of these diatomic species are considered at the same bond length. Or whereas
orbital interaction should dominate the formation of H2 from
two H atoms, Pauli repulsion should be the main component of
the repulsive interaction between two RH molecules to form the RH···HR complex. These are possible tests to prove the reliability of EDA methods, but I am sure the reader can think of
many others.
Paul W. Ayers
Yes. And the benchmarks should be very broad. It is not necessary to have consensus on all the systems (even things as simple as the interaction energy in the water dimer or the
energetic barrier to rotation in ethane are interpreted differently by different partitioning methods). But a panoply of
results helps establish the similarities/differences between
models and, perhaps, also the cases where their nuances are
most helpful. I do not want benchmark sets to become the battleground upon which religious wars about chemical concepts
are fought, but rather a proving ground upon which they are
understood. It is also important, even critical, that the benchmarks be provided together with data and software tools that
allow them to be easily used, so that few (if any) new EDA
methods are proposed without first being scrutinized against
said benchmark(s).
Farnaz Heidar-Zadeh
Benchmarking various schemes extensively is the way to go forward! This gives us a better understanding of current partitioning schemes and sets the stage for evaluating future schemes. As elaborated by many contributors, it is crucial to have benchmarks that are diverse and comprehensive, both in terms of the systems studied and the levels of theory considered. It is
also very important, even though less discussed, that results generated for a specific scheme’s implementation (code) and molecule (system) be reproducible, robust, replicable, and generalizable, as depicted in the image below.
[Figure: Reliability of a given scheme’s results]
Juan Andrés
It is desirable to develop common benchmarks and test sets for
cross-validation of methods. Ayers et al.[87] propose an axiomatic approach, as previously noted by us (see question 1) and by Prof. Solà (see question 7).
Yirong Mo
Not sure about this. Experimental evidence is always the gold standard.
Eduard Matito
The short answer is yes, there is a large interest in designing
validation tests. There is some overlap between questions
6 and 7. I decided to comment on “chemical validation tests” in question 6, and here I will comment on another kind of validation test. Benchmarking should also consider other essential features of energy partitions, such as basis set dependency (and convergence toward the CBS limit), size extensivity, and method dependency. Some of these features might be easy to anticipate from the construction of some energy partition schemes (e.g., size extensivity), but others require the design of tests that are appropriate to this purpose.
W. H. Eugen Schwarz
Yes, it would be very worthwhile. However, first, a set of useful decomposition methods and a set of empirical, valid, and reliable
data must be agreed upon.
Alston Misquitta and Krzysztof Szalewicz
The only tests that can be conducted are those outlined in the answer to question 4, so each method can be tested individually, since this is a pass/fail test.
Question 8: Is It Possible to Contemplate a Unified Partition Scheme (Let Us Call It the “Standard Model” of Partitioning) That Is Proper for All Applications in Chemistry, in the Foreseeable Future or Even in Principle?
Ramon Carbó-Dorca
The previous question leads to the present one. One can answer it like this: if EDA techniques are somehow arbitrary, then it seems difficult to obtain a unified universal partitioning scheme. However, perhaps research on this topic is missing something, which could transform the EDA problem into a precise description. I must confess that I cannot imagine what might be the nature of this missing link.
Shant Shahbazian
This question is tightly connected to questions 1–3. If the
answer to this question is “no” in principle, then I find it really
hard to believe that currently used chemical concepts may
have any universally precise definition. This means that there
will always be an inherent fuzziness in chemical concepts, which personally I find quite an unpleasant situation. I am interested to see if anyone has a clue or a proposal for a “yes” answer, at least in principle.
István Mayer
I do not think it possible to get a single “standard partitioning model,” precisely because it does not seem possible to get an ultimate unique definition of an individual atom within the molecule. However, the introduction of two or three standardized procedures—
one for Hilbert space analysis and one or two for the 3D one—
seems to be quite possible. (In the latter case, separate standard
schemes for exclusive and fuzzy atoms can be contemplated.)
Martin Rahm
Unification seems unlikely at present, but that is not necessarily
a bad thing. There is strength in diversity. I suspect most in the community strive toward the development of methods that are as generally applicable as possible. In the long term, methods with higher
degrees of chemically relevant predictive utility are likely to see
more common use.
Frank Weinhold
Probably not. The idea of universally partitioning chemical phenomena into mutually exclusive and additive components is
inherently superficial, except as a tautological accounting device.
The fact that such “components” commonly exhibit greater variations than the energy difference they purport to analyze is
itself a telling indicator that their usefulness to the broader
chemical community will be marginal. The NAO-based NEDA
variant, which alone avoids the conceptual ambiguities of fragment overlap, seems to be the only plausible candidate for such
generality.
Roberto A. Boto
In my opinion, a unified partition scheme would require a unified theory of chemical bonding, something that, as far as I know, is far from being achieved. From a more pragmatic point of view, the only way of accomplishing this uniformity in
partition schemes is not by means of theory, but by consensus.
Angel Martín Pendás
Unification is probably not possible for the time being, but
thinking about the characteristics that would allow the different
available methods to “converge” might be a worthwhile enterprise. In my probably biased opinion, if a standard model can
be envisaged, it should rely on orbital-invariant quantities so
that one is not limited by any underlying computational methodology. In the end, this ultimately leads, in agreement with
István Mayer, to the atom-in-the-molecule conundrum.
David L. Cooper
I remain very deeply skeptical that a utopian model of partitioning could ever emerge that not only is applicable to, but
also (almost) universally agreed to be the “best” choice for, all
applications in Chemistry. There is even a sense in which it
would be more than a little disappointing if no new Chemistry
could ever be discovered for which such a “standard” model
might not be the most appropriate.
Carlo Gatti
Perhaps yes, but I doubt it would be the most appealing one
for most of the chemists. In principle, I would be highly in favor
of a unified approach, and I fully agree with Angel Martín Pendás that it should rely on orbital-invariant quantities. However, as I discussed in my answer to question 1, the aims
behind the present partitioning methods are different. Therefore, adoption of a standard model, while favoring scientific
rigor, could also result in a significant loss of richness of
interpretation.
Paul Popelier
I want to be optimistic about a “standard model of partitioning”
and indeed strive for it although it could be a long process. As
explained in question 4, schemes that make the same predictions can co-exist, but if they produce contradictory outcomes
then they cannot. Allow me to comment on the related and
perhaps less sensitive topic of population analyses. In the plethora of population analyses, the (original) Hirshfeld method and
the QTAIM typically produced answers at the two opposite
extremes: Hirshfeld was judged to give too small an answer
and QTAIM too large. The community often regarded both as
suspicious. However, over time Hirshfeld was modified
(in response to a theoretical deficiency related to the reference
state it invokes) and then gave less extreme values. This is an
example of convergence, which is a weaker form of unification.
A further step toward convergence would be to finally ditch
the Mulliken population analysis, which has been heavily criticized for decades but still regularly pops up. In my PhD thesis,
Mulliken charges served the purpose of creating a sufficiently
reliable crystal field in which solid state molecular geometries
could be obtained. However, when I saw a few years later that the Mulliken population analysis assigned a non-negligible negative net charge (i.e., −0.26) to a boron atom,[136] I was happy to ditch Mulliken because its answer violates every one of the dozen electronegativity scales. In terms of Darwinian selection,
a harsher environment consisting of the now more demanding
user leads to Mulliken not surviving ultimately. To make the
main point again: diversity is good provided it leads to a stronger end product. However, diversity for its own sake, in terms
of wallowing in contradictory interpretations and lauding this situation as the richness of Chemistry, is wrong. Yes, Chemistry
is a complex science, which is why we should make the utmost
effort to keep it clean and logical. When I look at typical undergraduate textbooks then I think there is still much work to
do. However, I think we will get there. The traditional Sciences
of Chemistry and Biology continue to undergo a physicalization
process: they become better and better connected with an
underlying physical and indeed quantum mechanical reality.
Whereas a typical biochemistry textbook of today is still naive
in its typically introductory chapters on physical chemistry, the
enzymology it reports in later chapters is full of protein crystal
structures that take away the yesteryear mysteries of the atomistic working of an enzyme. Optimistically I believe in an irreversible gradient of knowledge. Yes, there are temporary
regressions, but I would be horrified if Science merely oscillated between stagnating alternative theories.
Pedro Salvador
I agree with the general view here that a unified partition scheme is unlikely to be established in the near future. Yet, by gathering a sufficient number of “stress tests” for the existing partition schemes,
as I suggest in question 7, one can probably narrow the search
to a handful of them, which hopefully will produce similar outputs for most purposes. On the other hand, in the present context of energy decomposition schemes, unification in the
formulation applied to different levels of theory is also desirable. In particular, a rigorous topological EDA for KS DFT that is
able to provide energy contributions comparable to those
obtained for correlated wavefunction methods is still lacking, in
my opinion.
Jerzy Cioslowski
I very much doubt that it is possible to design “the one and
only” energy partitioning scheme within each of the two classes
I discussed in my answer to question 1. However, it would be
very desirable to agree on a set of rules (or axioms) that have
to be satisfied by any admissible scheme. At present, some of
such axioms (like that the partitioned properties should
approach those of isolated systems as the intersystem separation goes to infinity) are both obvious and widely accepted,
whereas others (like that the partitioned properties should be
retrievable with equal ease from wavefunctions given on a grid
or in terms of atom-centered basis functions, single-centered
basis functions, or plane waves), while being equally obvious,
are ignored by a surprisingly large segment of practitioners of
quantum chemistry.
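To make the first of these axioms concrete (a paraphrase in compact form, not Prof. Cioslowski's own notation), for any property P_A assigned to fragment A of a complex A⋯B one would require
\lim_{R_{AB} \to \infty} P_A(\mathrm{A\cdots B}; R_{AB}) = P_A(\mathrm{A\ isolated}),
while the second amounts to demanding that P_A be a functional of the wavefunction (or density) itself, not of the particular representation (grid, atom-centered, single-centered, or plane-wave basis) used to expand it.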
Gernot Frenking
No! The complexity and diversity of electronic structures in molecules and solids call for partitioning schemes that are appropriate for the given situation. The species may be grouped into classes that have similar properties, for which a particular model may be used while being less suitable for others. As a general rule, one should use more than one partitioning scheme and compare the results before making a statement about the best description of the bonding situation.
Paul W. Ayers
It is useful to contemplate, but it is a bit like contemplating nirvana. Useful, but it is best to live in the real (imperfect) world
most of the time. I tend to feel that while unified partitioning
schemes may exist (in the sense that there may be atom-in-molecule partitioning schemes and energy decomposition analysis methods with broad utility and applicability and, indeed,
some of our current tools approach this lofty standard) there
will always be room for improvement. At some point, though,
the “improvements” one might make will be achieved only by
adding complexity (“engineering” the model in a way that risks
overfitting), and some convergence may occur. However, as
every person has a different tolerance for model complexity
(in a different context, some prefer PBE, some BLYP, and some
M06L), the idea that our community could ever agree upon a
“standard model” seems…unfathomable. Indeed, it seems we
cannot even agree whether such a standard model should be
pursued!
Farnaz Heidar-Zadeh
Having a unified partitioning scheme is the holy grail. As such,
it is not possible to find a universal definition or even get close
to one. However, this should not lead one to underestimate the
usefulness and value of partitioning schemes (and other concepts) and the need for improving/validating the existing
approaches.
Juan Andrés
It is possible to contemplate a unified partition scheme, but it
must be recognized that this is still an aspiration. In the current state of affairs, I do not see a possible way to reach it.
Yirong Mo
I am not optimistic about this. Researchers always intend to be
unique and propose something different from others. So there
will be a constant endeavor to propose “novel” and “for the first time” kinds of partition schemes.
Eduard Matito
I highly doubt that an energy partition “to unite them all” will
ever be found. In the best case scenario, I would expect that
we find a partition (or a set of them) that gives reasonable predictions for “chemical validation tests”.
Eloy Ramos-Cordoba
I also agree that it is unlikely that a unique “standard model”
can be defined. However, as Prof. Cioslowski stated above, I
also think it would be convenient to establish a set of axioms or requirements (e.g., a well-defined basis set limit), based on mathematical or quantum mechanical arguments, that every EDA has to fulfill.
W. H. Eugen Schwarz
No: The various partition schemes yielding a few small numbers to explain a given class of molecules with respect to a given type of question (e.g., concerning stabilities or reactivities) are quite diverse. The universal cover approach consists of general quantum mechanics combined with a comprehensive set of questions, which is too demanding to be useful.
Alston Misquitta and Krzysztof Szalewicz
One can do more than contemplate it: SAPT already provides the standard model, and we believe this has been generally recognized in recent years.
Question 9: In the End, Science Is about
Experiments and the Real World. Can Any
Experiment or Experimental Data Therefore Be
Used to Favor One Partition Scheme over
Another?
Ramon Carbó-Dorca
If experimental data could be related to EDA, then possibly the problem of the precise description of the theoretical scheme might be solved. The adequate (ultimate) EDA will be the one that adapts best to this kind of experiment. Can one imagine any experiment of this
kind to be performed soon? However, if there is an experiment
which can be (completely) adapted to some EDA, this will mean
that the EDA terms will become observables. Therefore, a quantum mechanical operator (or operators) might be constructed
to describe the experiment. Can one foresee this observable
nature of the EDA partition terms?
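Put formally (a restatement of the condition raised in the questions above, not an established result), the EDA terms would become observables only if, for every component E_X, there existed a Hermitian operator \hat{O}_X such that
E_X = \langle \Psi \vert \hat{O}_X \vert \Psi \rangle,
with \hat{O}_X defined independently of the particular partitioning used to construct E_X.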
Martin Rahm
Aside from valiant efforts toward X-ray constrained
wavefunctions,[137] which might move all EDAs closer to experiment, most of what we do in the field requires a quantum
mechanical calculation to approximate a wave function or density. My personal preference is toward concepts and quantities
that are, at least in principle, experimentally measurable. For
this reason, I am exploring the possibilities of an EDA that can
interchangeably rely on both measurements and quantum
chemical calculations.[54] Of course, plenty of nonobservable
quantities are conceptually valuable. Time will tell when and
where an “Experimental Quantum Chemistry” EDA approach is
more advantageous. Experimental comparison and cross-validation, discussed in questions six and seven, should help
to highlight complementarities between EDA methods and
be a good basis for making more informed choices for particular sets of systems and questions.
Shant Shahbazian
This question is tightly connected to questions 6 and 7. Without
any reference to experimental data, which are free from subjective judgments and chemical prejudice, it is hard to see how a
positive operational answer may be given to questions 6 and
7. I am interested to see whether someone has any clue or proposal on how a partitioning scheme may, in a nontrivial way, be connected to quantitative experimental data. However, if there is no link, I see no way of making real progress.
Frank Weinhold
For H-bonding phenomena, the mentioned reference in question 3 suggests the correlative test that can be applied to the
“electrostatics” component common to most EDA partitions. A
recent critique of the SAPT partition[138] also shows how “steric”
or “induction” components can be tested for consistency with
measurable properties of prototype chemical species.*
*Bernard Silvi: the reply of A. Stone and K. Szalewicz has been published in the same issue of J. Phys. Chem. A.[118]
István Mayer
Probably not directly. However, the different partition schemes should be globally consistent with the chemical experience in order to be practically useful.
Angel Martín Pendás
Besides X-ray constrained wavefunction approaches, some EDAs like IQA rely only on an atomic partition of space, which can be retrieved from experimental charge densities, and on first- and second-order densities, which, despite being observables, are very difficult to access experimentally. Even though a whole experimental energetic decomposition might still not be possible, some of its components, like the electrostatic energies, indeed are. Electrostatic potentials, which can be envisaged as a byproduct of EDAs, are routinely obtained from experiment and partitioned into atomic contributions. So, although the global answer to the question may be no, I expect some advances in
the near future.
Pedro Salvador
My answer to questions 7 and 8 can also fit in here. In agreement with István and David, agreement with chemical intuition
is essential. We should be able to “quantify” such agreement
with the chemical experience, at least in a semiquantitative way
(e.g., this value should be larger than that other value, or this
value must be non-negative, etc…) to build up a survival-of-the-fittest strategy.
David L. Cooper
Much as it could be very interesting to live in a Universe in
which most of the components returned by a well-constructed
energy partitioning scheme could be directly related to expectation values of operators or even to experimental data, I
strongly suspect that we do not. Even if we did, it would also
be important that we could associate the relevant experimental
data with a realistic level of chemical interpretation. Otherwise,
we could just have decomposed one number into a sum of
others that might not really have brought with them any additional useful chemical/physical insights. In this sense, I agree
wholeheartedly with István Mayer that useful partitioning
schemes need to be consistent with chemical experience.
Paul Popelier
I wrote about the need[139] for falsification in the research of
interpretational theoretical chemistry, which is in the spirit of
this question. There I proposed the potentially falsifiable example of B2H6 where IQA states that the interatomic exchange
energy between the bridging hydrogen atoms is about three
times larger than that between the two borons. When presented with this information, Roald Hoffmann responded that
the HH interaction is something new to him and that some BB
bonding is easier to understand, based on a molecular orbital
argument. Since writing about falsification I have received very little response, probably because it is very difficult to set up experiments that can falsify a partitioning scheme. If one looks at the review of Phipps et al.,[2] then it appears that the comparison between EDAs is not against some experiment but rather a comparison of the disadvantages and problems of the various EDAs. Examples of observations or judgments (see table 2) sound like: “Observed overestimation of polarization and underestimation of charge transfer” or “Presence of the ΔEMIX energy unascribable to any particular component. Problems of numerically unstable charge transfer and polarization energies with large basis sets and at short intermolecular distance”. It appears that we are still a long way from making contact with experiment.
Julien Pilmé
Yes, this is a fundamental question: in principle, any theory needs to be supported (or refuted) by a “face-to-face” meeting with experimental data. Currently, results obtained from EDA methods globally skip this process; these results nevertheless need to be in agreement with the chemical experience based on numerous “fuzzy” concepts, so we go back to question 1. Of course, this latter confrontation is very useful for our daily work, but it can also be a little “dangerous”: when results contradict the chemical experience, it can become a deadlock situation.
Jerzy Cioslowski
The only quantities that are presently amenable to experimental measurement are those given by matrix elements (including
expectation values) of global operators. In practice, this means
energies (and their differences), and the electric/magnetic
response properties such as multipole moments, polarizabilities,
etc… The one-electron densities have never been measured experimentally because: (1) the number of experimental points is always finite whereas the density is a function of a continuous argument, and (2) since the amplitudes (but not the phases) are measured in scattering experiments, the “measured” densities are really model densities that best fit the amplitudes, with the phases approximately inferred from (admittedly clever) inaccurate methods. These model densities are very useful as a tool
for the location of nuclei and may even yield reasonable multipole moments but nevertheless, they have nothing to do with
the expectation values of the sum of one-electron Dirac deltas.
Keeping this in mind, one has to be very skeptical about the
possibility of (to use Martin Rahm’s words) “moving EDAs closer
to experiment” as many of the partitioning schemes rely explicitly on both local and global properties of one-electron
densities.
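For the record, the quantity referred to here is the expectation value of the one-electron density operator (spin coordinates suppressed),
\rho(\mathbf{r}) = \Big\langle \Psi \Big\vert \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i) \Big\vert \Psi \Big\rangle,
which experimentally refined model densities can at best approximate.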
Gernot Frenking
The preference of a particular partitioning scheme is not
decided by an experiment, but by the interpretation of the
experimental results. This is done by the human mind of the
observer. “Real world” implies a definition of physical reality in
a region where quantum theory is valid but not classical physics. The outcome of a Diels–Alder reaction can only be
explained when the symmetry (sign) of the wave functions of
the interacting species is considered. This gives the wave function the status of physical reality. Three statements at the end:
(1) Physical reality becomes a fuzzy concept when quantum
effects are considered. (2) When chemical facts are reduced to
physical laws alone, they become a mere stamp collection. Fuzzy
concepts are an integral part of chemistry. (3) Historically developed concepts must be examined with quantum chemical calculations, because they may be based on assumptions that are
not correct.
Laurent Joubert and Vincent Tognetti
Another important point to emphasize, from our point of view,
is that experimental energies are often Gibbs energies. Most
energy decompositions discussed here only deal with electronic
ones, and thus do not include entropy. However, it is known
that entropy is a quantity of fundamental importance to
account for experimental results (see Ref. [140] for a recent
example in organic chemistry where the experimental selectivity in dipolar cycloadditions is governed by such factors). As is well known, entropy can be decomposed into electronic, translational, rotational, and vibrational contributions. The last term is the
sum of contributions from each normal mode. Unfortunately,
the most important ones correspond to the lowest frequency
values, characteristic of vibrations of small amplitudes
delocalized over the whole molecule. They are thus difficult to
analyze from a chemical (regional) point of view. This is an
important limit to rationalizing experimental chemical results, in
particular for complex systems. In such cases, even if very accurate and meaningful EDAs are obtained for the electronic part,
the thermodynamic contributions remain an issue, notably
for condensed phases. From this point of view, EDAs cannot
guide us for selecting the most relevant physicochemical
properties.[140]
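For completeness, the decomposition invoked above, together with the standard harmonic-oscillator expression for the vibrational term (quoted here as background, not as part of the authors' argument), reads
S = S_{\mathrm{el}} + S_{\mathrm{trans}} + S_{\mathrm{rot}} + S_{\mathrm{vib}}, \qquad S_{\mathrm{vib}} = k_{\mathrm{B}} \sum_i \left[ \frac{x_i}{e^{x_i} - 1} - \ln\left(1 - e^{-x_i}\right) \right], \quad x_i = \frac{h\nu_i}{k_{\mathrm{B}} T},
which makes explicit why the lowest-frequency normal modes (small x_i) dominate S_vib and why this term resists a regional, atom-by-atom analysis.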
Julia Contreras
My answer to this is pretty similar to my answer to question 6.
Émilie-Laure Zins
I agree with the general opinion of the previous contributors: a
comparison and a dialogue between theoretical and experimental chemists are essential. But what experimental tools
could be used in comparison with theoretical studies on energy
partition schemes? Do the existing partition schemes allow a
comparison with observable or deductible quantities from
experiments? Would it be possible to develop new partition
schemes allowing an easier comparison with experimental
data? I think the answer to the latter question is “yes”, and that
it would be interesting to move in this direction, probably by
using a combination of complementary experimental
approaches, or even by developing new experimental
approaches. Of course, experimental techniques do not allow
an energetic decomposition, but in-depth investigations involving complementary experimental techniques allow one to deduce information on polarization and polarizability, the contribution of the spin, and so on. Among the most versatile tools, we can mention the technique of isolating the investigated species in a matrix (rare gas, para-hydrogen, …) at cryogenic temperatures (typically below 20 K). This technique makes it possible to characterize
weak interactions, such as hydrogen bonds or agostic interactions. This technique is also useful to probe the spin state of a
metal atom in an organometallic complex, or even to induce
changes in spin states by photo-excitation. Isomerizations
between different inter- or intramolecular complexes can also
be detected by annealing. This isolation technique is often
coupled with vibrational spectroscopy. One could imagine the
development of such an experimental set-up that allows a magnetic field to be applied. The use of such advanced experimental
approaches to deduce some of the physical components of an
energy partition scheme would need to be discussed with the
experimental chemists and/or physicists.
Paul W. Ayers
I often use the following quote from Willard Van Orman Quine[141] (1953): “Our acceptance of an ontology is, I think,
similar in principle to our acceptance of a scientific theory, say
a system of physics; we adopt, at least insofar as we are reasonable, the simplest conceptual scheme into which the disordered
fragments of raw experience can be fitted and arranged.”
The real world provides the “disordered fragments of raw
experience” which we try to “fit and arrange” into our theories.
All of our arguments (at least the ones I judge to have some
value) are about which theoretical scheme is the simplest
(an aesthetic judgment) and how well experimental data fit and
arrange into various schemes (which standardized benchmark
datasets help us to quantify).
Farnaz Heidar-Zadeh
It can, but indirectly! The partitioning schemes can be used in
interpreting the outcome of experiments (i.e., justification) or
designing a specific experimental outcome (i.e., prediction).
These indirect experimental tests can ultimately leave us with a
smaller set of favorable schemes which perform better in
justifying/predicting the experimental results. Ultimately, these
schemes will help us design molecules and materials with
desired properties.
Juan Andrés
In this context, it should be noted that in principle a partition
scheme is more desirable if it is based on the electron density, since the density is an observable and can also be derived from the charge density obtained experimentally.
Yirong Mo
It is the only way. Even though there is little direct experimental data for partition schemes, there is much indirect evidence with which to examine
individual energy terms. Structural and spectral parameters are
good indicators for partition schemes. In the study of intermolecular interaction, distance-dependent energy profiles are
often instructive for the verification of partition schemes. For
instance, in the absence of orbital (electron transfer) interactions, the
optimal intermolecular distances should be comparable to regular van der Waals distances (unless strong electrostatic interactions exist). Unfortunately, so far, very few partition schemes can
perform geometry optimization. But at least numerical test calculations with small systems can be done for all partition schemes.
Eduard Matito
Maybe, but I doubt we will generate numbers that can be directly
compared to the experimental ones and, at the same time, provide undeniable chemical insight. For instance, in the future, perhaps we can obtain reliable electron density data that leads to
accurate prediction of, let us say, QTAIM atomic energies. However, the fact that we can measure these energies does not make
them any more useful to provide chemical insight. On the other
hand, I believe experimental evidence can provide qualitative
information that is useful in assessing energy partitions.
Eloy Ramos-Cordoba
In principle, since energy components are not observables, it does not seem possible to quantify them by direct observation.
However, some energetic information can be extracted from
experiments. For instance, molecular-beam scattering experiments have been used to indirectly quantify the charge-transfer
stabilization energy.
W. H. Eugen Schwarz
Yes. Ultimately, the purposes of theory and partition schemes
are creating models that help to intuitively understand and
extrapolate (predict) the experimental facts. The answer to this
last question 9 therefore depends on three points: First, the partition scheme should appropriately explain the experimental
trends as seen by the chemists. If a theoretical model cannot
reproduce differences chemists are commonly talking about
(such as nonbonded repulsion vs chemical bonding attraction,
or strong vs very strong ionic or covalent interactions) then
probably the theoretical scheme should be modified. Second,
the observation of a positive value may be theoretically represented by the sum of one or two positive terms and several
small corrections, or as a sum of several large numbers of different signs. The latter model is not satisfactory. It may then help
to combine some numbers to get only medium-sized contributions of same sign, for instance summing large positive Pauli
repulsion and large negative quasi-classical electric attraction to
construct the “steric interaction” (or some other combination,
depending on the case). That is, not only do the values of a specific partitioning characterize the real system, but so does the question of which type of partitioning is simple in the given case. Third, whether a
partitioning is useful and efficient also depends on the cognitive competences and preferences of the users. Some experimentalists and theoreticians focus on the observable numbers only;
some others also consider the process of relaxation that
results in the observed outcome. Different partition schemes
may be required for different addressees.
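As a concrete instance of the second point (written in the conventional Ziegler–Rauk-type notation; the combination itself is the one described in words above),
\Delta E_{\mathrm{steric}} = \Delta E_{\mathrm{Pauli}} + \Delta E_{\mathrm{elstat}},
that is, a large positive Pauli repulsion and a large negative quasi-classical electrostatic attraction are summed into a single medium-sized contribution that is easier to interpret.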
Alston Misquitta and Krzysztof Szalewicz
Indirectly. Because SAPT-based PESs provide a close interplay with experiments, and because the SAPT interaction energy is built up from components (rather than being decomposed), comparisons with experiment provide a real-world connection
for these components. SAPT has been used to develop PESs for
a large number of dimers. SAPT PESs are among the most accurate ones published and, if used in nuclear dynamics calculations, they predict observables in excellent agreement with
experiment, for example, for the water dimer spectra.[142,143]
Also, SAPT PESs allow precise predictions of crystal structures.[144] Thus, there is a strong connection between SAPT and
experiment. Although these comparisons involve the total PESs,
there is a weaker connection to SAPT components as well. For
example, to predict correctly crystal densities, one has to have
the repulsive walls at the right places, which tests the
exchange-repulsion energy. Crystals of monomers dominated
by dispersion interactions, like for example the argon
crystal,[145] indirectly test this component of SAPT. There is a
further broad connection to the real world: construction of
force fields based on SAPT components and using forms of the
fitting functions that reflect the behavior of SAPT components.[88,146] One can fit intermolecular interaction energies by
several types of analytic functions or even use methods such as
neural networks, but fitting with physically relevant forms
enables such PES to be transferable. Use of SAPT to develop
biomolecular force fields has become increasingly popular.[147]
A particular example is water clusters. There is experimental data
available for such clusters, for example, the authors of Ref. [148]
performed measurements on hexamer, heptamer, and nonamer.
A very accurate force field developed in Ref. [149] was fitted to
CCSD(T) calculations for the water dimer and trimer. Predictions
of properties of clusters from this force field agree very well with
accurate ab initio data available for some clusters. Thus,
component-based force fields enable calculations for water clusters of essentially arbitrary size, whereas reasonably accurate
ab initio calculations are limited to about 20 water molecules. In
Ref. [149], not only the form of the fitting function was designed
based on the behavior of SAPT components, but also the long-range asymptotics was computed ab initio using SAPT codes. In
contrast, while damping and exchange-repulsion functional
forms are also consistent with SAPT, the parameters in these
terms are just free parameters of the fit. This can be improved
by performing SAPT calculations for close-range separations and
fitting component-by-component (as done for the water dimer
in Ref. [142]). While such direct fits can currently be done for
dimers and for small trimers, there remains an issue with higher
than three-body contributions. Reference [149] approximated
such contributions by a damped classical polarization model
iterated to convergence over the whole cluster. While the polarization model alone is a poor approximation to three-body interaction energies, it was shown in Ref. [149] that this model
recovers the four- to six-body interaction energies surprisingly
well. Since the many-body polarization model is so critical for clusters and condensed phases, work on improved forms
of this model is essential. Here the work of Refs. [116,120] is
important since it both extends the model beyond the isotropic
dipole–dipole polarizability case and designs better damping
functions which are essential at shorter separations. Furthermore, the decomposition of induction energy into polarization
(including a part of the exchange components) and charge-transfer terms may lead to improved models of damping.
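For orientation, the low-order SAPT build-up referred to throughout this answer can be summarized as (standard notation, quoted for convenience)
E_{\mathrm{int}} \approx E^{(1)}_{\mathrm{elst}} + E^{(1)}_{\mathrm{exch}} + E^{(2)}_{\mathrm{ind}} + E^{(2)}_{\mathrm{exch\text{-}ind}} + E^{(2)}_{\mathrm{disp}} + E^{(2)}_{\mathrm{exch\text{-}disp}} \; (+\, \delta E_{\mathrm{HF}}),
and component-based fits typically use forms that mirror these terms, for example a Tang–Toennies-damped dispersion series,
E^{ab}_{\mathrm{disp}}(r) \approx - \sum_{n} f_n(\beta_{ab} r)\, \frac{C^{ab}_n}{r^n}, \qquad f_n(x) = 1 - e^{-x} \sum_{k=0}^{n} \frac{x^k}{k!},
so that the fitted parameters keep a physical meaning and the resulting potential remains transferable; the specific functional forms used in Refs. [142,149] differ in detail and should be consulted directly.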
Concluding Remarks
Bernard Silvi
For this article, I tried to collect a large panel of opinions on the
use of EDA methods in Quantum Chemistry. I had no
preconceived ideas about the outcome and, therefore, I have been surprised by the diversity of points of view, which are often apparently contradictory. Whereas some contributors reject EDA
methods, many others consider them as a fundamental contribution of Quantum Chemistry. The origin of this dispersion of
opinions is not a crisis of our discipline announcing the advent
of a new paradigm but rather a consequence of its good health.
As we wrote in the introduction, EDAs are tools (not theories) providing pieces of information that enable one to set up explanations. They belong to normal science processes and, as tools, they do not have to strictly satisfy demarcation criteria. They are
mostly used to understand geometries and stabilities of molecules and molecular complexes on the basis of quantum chemical and physical arguments. Here, “quantum chemical” refers to systems of explanation based on quantum chemical concepts such as orbitals and valence-bond structures, whereas “physical” concerns arguments rooted in the theory of
intermolecular forces. These systems of explanation may be
interdependent and complementary, never contradictory: they
address different meanings of a given question and are
intended for different scientific (sub)communities. Each system
corresponds to its own representation of the microscopic matter, adopts its own point of view and uses its own vocabulary.
Moreover, there is an inherent source of difficulty in our attempt to explain microscopic matter, because we try to understand, in a deterministic fashion, the behavior of quantum objects, which is not deterministic. Most explanations in
science belong to the deductive-nomological account[150]
which provides a scheme for any deterministic explanation of a
particular event and consists in a deductive derivation of the
occurrence of the event from a set of true propositions involving at least a scientific law or principle. The choice of rules and
principles leaves additional degrees of freedom.
Keywords: energy decomposition analysis · interaction
energy · partitioning · chemical bonding · status of the
methods
[1] P. L. Ayers, R. J. Boyd, P. Bultinck, M. Caffarel, R. Carbó-Dorca, M. Causá,
J. Cioslowski, J. Contreras-Garcia, D. L. Cooper, P. Coppens, C. Gatti,
S. Grabowsky, P. Lazzeretti, P. Macchi, A. M. Pendas, P. L. A. Popelier,
K. Ruedenberg, H. Rzepa, A. Savin, A. Sax, W. H. E. Schwarz,
S. Shahbazian, B. Silvi, M. Solà, V. Tsirelson, Comput. Theor. Chem.
2015, 1053, 2.
[2] M. J. S. Phipps, T. Fox, C. S. Tautermann, C.-K. Skylaris, Chem. Soc. Rev.
2015, 44, 3177.
[3] R. Eisenschitz, F. London, Z. Phys. 1930, 60, 491.
[4] F. London, Z. Phys. Chem. 1930, B11, 222.
[5] H. Margenau, Rev. Mod. Phys. 1939, 11, 1.
[6] J. O. Hirschfelder, R. Silbey, J. Chem. Phys. 1966, 45, 2188.
[7] J. Hirschfelder, Chem. Phys. Lett. 1967, 1, 325.
[8] J. Hirschfelder, Chem. Phys. Lett. 1967, 1, 363.
[9] A. van der Avoird, J. Chem. Phys. 1967, 47, 3649.
[10] J. N. Murrell, G. Shaw, J. Chem. Phys. 1967, 46, 1768.
[11] J. I. Musher, A. T. Amos, Phys. Rev. 1967, 164, 31.
[12] S. Epstein, R. Johnson, Chem. Phys. Lett. 1968, 1, 602.
[13] P. Claverie, Int. J. Quant. Chem. 1971, 5, 273.
[14] P. Claverie, In Intermolecular Interactions: From Diatomics to Biopolymers; B. Pullman, Ed., Wiley, New York, 1978, p. 69.
[15] K. Szalewicz, B. Jeziorski, Mol. Phys. 1979, 38, 191.
[16] B. Jeziorski, R. Moszynski, K. Szalewicz, Chem. Rev. 1994, 94, 1887.
[17] B. Jeziorski, W. Kołos, In Molecular Interactions, Vol. III; H. Ratajczak,
W. J. Orville-Thomas, Eds., Wiley, New York, 1982, p. 1.
[18] K. Szalewicz, WIREs Comput. Mol. Sci. 2012, 2, 254.
[19] A. D. Buckingham, P. W. Fowler, J. M. Hutson, Chem. Rev. 1988,
88, 963.
[20] A. J. Stone, The Theory of Intermolecular Forces, 2nd ed., Oxford University Press, Oxford, 2013.
[21] M. von Hopffgarten, G. Frenking, WIREs Comput. Mol. Sci. 2012, 2, 43.
[22] L. Zhao, M. von Hopffgarten, D. M. Andrada, G. Frenking, WIREs Comput. Mol. Sci. 2018, 8, e1345, https://doi.org/10.1002/wcms.1345.
[23] C. A. Coulson, Research (London) 1957, 10, 149.
[24] K. Morokuma, J. Chem. Phys. 1971, 55, 1236.
[25] K. Kitaura, K. Morokuma, Int. J. Quant. Chem. 1976, 10, 325.
[26] K. Morokuma, Acc. Chem. Res. 1977, 10, 294.
[27] K. Morokuma, K. Kitaura, In Molecular Interactions, Vol. I; H. Ratajczak,
W. J. Orville-Thomas, Eds., Wiley, Chichester, 1980, p. 21.
[28] W. Chen, M. S. Gordon, J. Phys. Chem. 1996, 100, 14316.
[29] T. Ziegler, A. Rauk, Theor. Chim. Acta (Berlin) 1977, 46, 1.
[30] T. Ziegler, A. Rauk, Inorg. Chem. 1979, 18, 1558.
[31] E. D. Glendening, A. Streitwieser, J. Chem. Phys. 1994, 100, 2900.
[32] E. D. Glendening, J. Phys. Chem. A 2005, 109, 11936.
[33] Y. Mo, J. Gao, S. D. Peyerimhoff, J. Chem. Phys. 2000, 112, 5530.
[34] Y. Mo, P. Bao, J. Gao, Phys. Chem. Chem. Phys. 2011, 13, 6760.
[35] D. G. Fedorov, K. Kitaura, J. Comput. Chem. 2007, 28, 222.
[36] R. Z. Khaliullin, E. A. Cobar, R. C. Lochan, A. T. Bell, M. Head-Gordon,
J. Phys. Chem. A 2007, 111, 8753.
[37] R. Z. Khaliullin, A. T. Bell, M. Head-Gordon, J. Chem. Phys. 2008, 128,
184112.
[38] Y. Mao, P. R. Horn, M. Head-Gordon, Phys. Chem. Chem. Phys. 2017, 19,
5944.
[39] Y. Mao, Q. Ge, P. R. Horn, M. Head-Gordon, J. Chem. Theory Comput.
2018, 14, 2401.
[40] A. Michalak, M. Mitoraj, T. Ziegler, J. Phys. Chem. A 2008, 112, 1933.
[41] M. P. Mitoraj, A. Michalak, T. Ziegler, J. Chem. Theory Comput. 2009,
5, 962.
[42] W. B. Schneider, G. Bistoni, M. Sparta, M. Saitow, C. Riplinger,
A. A. Auer, F. Neese, J. Chem. Theory Comput. 2016, 12, 4778.
[43] J. Thirman, M. Head-Gordon, J. Chem. Phys. 2015, 143, 084124.
[44] P. Su, H. Li, J. Chem. Phys. 2009, 131, 014102.
[45] I. Mayer, Int. J. Quant. Chem. 1983, 23, 341.
[46] R. F. W. Bader, Atoms in Molecules: A Quantum Theory, Oxford University Press, Oxford, 1990.
[47] P. Salvador, M. Duran, I. Mayer, J. Chem. Phys. 2001, 115, 1153.
[48] A. Martín Pendás, M. A. Blanco, E. Francisco, J. Chem. Phys. 2004, 120,
4581.
[49] M. A. Blanco, A. Martín Pendás, E. Francisco, J. Chem. Theory Comput.
2005, 1, 1096.
[50] E. Francisco, A. Martín Pendás, M. A. Blanco, J. Chem. Theory Comput.
2006, 2, 90.
[51] A. Martín Pendás, E. Francisco, M. Blanco, Chem. Phys. Lett. 2008,
454, 396.
[52] P. Salvador, I. Mayer, J. Chem. Phys. 2004, 120, 5046.
[53] P. Salvador, I. Mayer, J. Chem. Phys. 2007, 126, 234113.
[54] M. Rahm, R. Hoffmann, J. Am. Chem. Soc. 2015, 137, 10282.
[55] M. Rahm, R. Hoffmann, J. Am. Chem. Soc. 2016, 138, 3731.
[56] M. Rahm, T. Zeng, R. Hoffmann, J. Am. Chem. Soc. 2019, 141, 342.
[57] O. Demerdash, Y. Mao, T. Liu, M. Head-Gordon, T. Head-Gordon,
J. Chem. Phys. 2017, 147, 161721.
[58] J. Munárriz, R. Laplaza, A. Martín Pendás, J. Contreras-García, Phys.
Chem. Chem. Phys. 2019, 21, 4215.
[59] B. Jeziorski, G. Chałasinski, K. Szalewicz, Int. J. Quantum Chem. 1978,
14, 271.
[60] S. Rybak, B. Jeziorski, K. Szalewicz, J. Chem. Phys. 1991, 95, 6576.
[61] V. F. Lotrich, K. Szalewicz, J. Chem. Phys. 1997, 106, 9668.
[62] A. J. Misquitta, B. Jeziorski, K. Szalewicz, Phys. Rev. Lett. 2003, 91,
033201.
[63] A. Hesselmann, G. Jansen, Chem. Phys. Lett. 2003, 367, 778.
[64] K. Patkowski, B. Jeziorski, K. Szalewicz, J. Chem. Phys. 2004, 120, 6849.
[65] K. Szalewicz, K. Patkowski, B. Jeziorski, In Intermolecular Forces and
Clusters II. Structure and Bonding, Vol. 116; D. J. Wales, Ed., Springer,
Berlin, 2005, p. 43.
[66] A. J. Misquitta, R. Podeszwa, B. Jeziorski, K. Szalewicz, J. Chem. Phys.
2005, 123, 214103.
[67] A. Hesselmann, G. Jansen, M. Schütz, J. Chem. Phys. 2005, 122, 014103.
[68] P. S. Żuchowski, R. Podeszwa, R. Moszynski, B. Jeziorski, K. Szalewicz,
J. Chem. Phys. 2008, 129, 084101.
[69] I. G. Kaplan, Theory of molecular interactions, Elsevier, Amsterdam,
1986.
[70] P. Arrighini, Intermolecular Forces and Their Evaluation by Perturbation Theory, vol. 25 of Lecture Notes in Chemistry, Springer, Berlin,
1981.
[71] M. S. Gordon, L. Slipchenko, H. Li, J. H. Jensen, Ann. Rep. Comp. Chem.
2007, 3, 177.
[72] M. Shahbaz, K. Szalewicz, Phys. Rev. Lett. 2018, 121, 113402.
[73] R. Bukowski, W. Cencek, P. Jankowski, M. Jeziorska, B. Jeziorski,
S. A. Kucharski, V. F. Lotrich, M. P. Metz, A. J. Misquitta, R. Moszynski,
K. Patkowski, R. Podeszwa, F. Rob, S. Rybak, K. Szalewicz, H. L. Williams,
R. J. Wheatley, P. E. S. Wormer, P. S. Żuchowski, SAPT2016: An ab initio program for many-body symmetry-adapted perturbation theory calculations of intermolecular interaction energies, University of Delaware and University of Warsaw, 2016. http://www.physics.udel.edu/szalewic/SAPT/SAPT.html.
[74] H.-J. Werner, P. J. Knowles, R. Lindh, M. Schütz, P. Celani, T. Korona,
F. R. Manby, G. Rauhut, R. D. Amos, A. Bernhardsson, A. Berning,
D. L. Cooper, M. J. O. Deegan, A. J. Dobbyn, F. Eckert, E. Goll,
C. Hampel, A. Hesselmann, G. Hetzer, T. Hrenar, G. Jansen, C. Köppl,
Y. Liu, A. W. Lloyd, R. A. Mata, A. J. May, S. J. McNicholas, W. Meyer,
M. E. Mura, A. Nicklass, D. P. O’Neill, P. Palmieri, K. Pflüger, R. Pitzer,
M. Reiher, T. Shiozaki, H. Stoll, A. J. Stone, R. Tarroni, T. Thorsteinsson,
M. Wang, and A. Wolf, MOLPRO, version 2009.1, a package of ab initio
programs (2009). http://www.molpro.net.
[75] R. M. Parrish, L. A. Burns, D. G. A. Smith, A. C. Simmonett,
A. E. DePrince, E. G. Hohenstein, U. Bozkaya, A. Y. Sokolov, R. Di
Remigio, R. M. Richard, J. F. Gonthier, A. M. James, H. R. McAlexander,
A. Kumar, M. Saitow, X. Wang, B. P. Pritchard, P. Verma, H. F. Schaefer,
3rd., K. Patkowski, R. A. King, E. F. Valeev, F. A. Evangelista,
J. M. Turney, T. D. Crawford, C. D. Sherrill, J. Chem. Theory Comput.
2017, 13, 3185.
[76] K. Patkowski, W. Cencek, M. Jeziorska, B. Jeziorski, K. Szalewicz, J. Phys.
Chem. A 2007, 111, 7611.
[77] M. Jeziorska, W. Cencek, K. Patkowski, B. Jeziorski, K. Szalewicz,
J. Chem. Phys. 2007, 127, 124303.
[78] D. C. Taylor, J. G. Angyan, G. Galli, C. Zhang, F. Gygi, K. Hirao,
J. W. Song, K. Rahul, O. A. von Lilienfeld, R. Podeszwa, I. W. Bulik,
T. M. Henderson, G. E. Scuseria, J. Toulouse, R. Peverati, D. G. Truhlar,
K. Szalewicz, J. Chem. Phys. 2016, 145, 124105.
[79] M. Shahbaz, K. Szalewicz, Phys. Rev. Lett. 2019, 122, 213001.
[80] S. Shahbazian, Found. Chem. 2014, 16, 77.
[81] I. Mayer, Int. J. Quant. Chem. 1986, 29, 73.
[82] A. Martín Pendás, J. L. Casals-Sainz, E. Francisco, In Intermolecular
Interactions in Crystals. Fundamentals of Crystal Engineering;
J. J. Novoa, Ed., The Royal Society of Chemistry, London, 2018.
[83] E. Ramos-Cordoba, P. Salvador, I. Mayer, J. Chem. Phys. 2013, 138, 214107.
[84] I. Mayer, Chem. Phys. Lett. 2013, 585, 198.
[85] I. Bakó, A. Stirling, A. Seitsonen, I. Mayer, Chem. Phys. Lett. 2013,
563, 97.
[86] M. Jansen, U. Wedig, Angew. Chem. Int. Ed. Engl. 2008, 47, 10026.
[87] P. W. Ayers, S. Fias, F. Heidar-Zadeh, Comput. Theor. Chem. 2018, 1142, 83.
[88] V. F. Lotrich, H. L. Williams, K. Szalewicz, B. Jeziorski, R. Moszynski,
P. E. S. Wormer, A. van der Avoird, J. Chem. Phys. 1995, 103, 6076.
[89] T. Clark, J. S. Murray, P. Politzer, Phys. Chem. Chem. Phys. 2018, 20,
30076.
[90] M. A. C. Nascimento, Int. J. Quantum Chem. 2019, 119, e25765.
[91] E. Francisco, A. Martín Pendás, Mol. Phys. 2016, 114, 1334.
[92] A. L. Wilson, P. L. A. Popelier, J. Phys. Chem. A 2016, 120, 9647.
[93] B. C. B. Symons, D. J. Williamson, C. M. Brooks, A. L. Wilson, P. L.
A. Popelier, Chemistry Open 2019, 8, 560.
[94] M. J. S. Dewar, J. Am. Chem. Soc. 1984, 106, 669.
[95] M. Solà, Front. Chem. 2017, 5, 22.
[96] J. Grunenberg, Int. J. Quantum Chem. 2017, 117, e25359.
[97] A. Martín Pendás, J. L. Casals-Sainz, E. Francisco, Chem. A Eur. J. 2019,
25, 309.
[98] O. A. Stasyuk, R. Sedlak, C. F. Guerra, P. Hobza, J. Chem. Theory Comput.
2018, 14, 3440.
[99] I. Müller, A History of Thermodynamics: The Doctrine of Energy and
Entropy, Springer, Berlin, 2007.
[100] F. Weinhold, R. A. Klein, Mol. Phys. 2012, 110, 565.
[101] J. C. R. Thacker, P. L. A. Popelier, Theor. Chem. Acc. 2017, 136, 86.
[102] P. L. A. Popelier, P. I. Maxwell, J. C. R. Thacker, I. Alkorta, Theor. Chem.
Acc. 2019, 138, 12.
[103] V. Postils, C. Delgado-Alonso, J. M. Luis, P. Salvador, Angew. Chem. Int.
Ed. Engl. 2018, 57, 10525.
[104] E. Ramos-Cordoba, E. Matito, I. Mayer, P. Salvador, J. Chem. Theory
Comput. 2012, 8, 1270.
[105] M. Levy, F. Zahariev, Phys. Rev. Lett. 2014, 113, 113002.
[106] I. Mayer, Chem. Phys. Lett. 1983, 97, 270.
[107] M. Fugel, J. Beckmann, D. Jayatilaka, G. V. Gibbs, S. Grabowsky, Chem.
A Eur. J. 2018, 24, 6248.
[108] A. Martín Pendás, M. A. Blanco, E. Francisco, J. Comput. Chem. 2009,
30, 98.
[109] F. J. Ayala, Proc. Natl. Acad. Sci. U.S.A. 2009, 106, 10033.
[110] W. Heyndrickx, P. Salvador, P. Bultinck, M. Solà, E. Matito, J. Comput.
Chem. 2011, 32, 386.
[111] R. Ponec, D. L. Cooper, J. Mol. Struct. (THEOCHEM) 2005, 727, 133.
[112] E. Matito, M. Solà, P. Salvador, M. Duran, Faraday Discuss. 2007,
135, 325.
[113] M. Rodríguez-Mayorga, E. Ramos-Cordoba, P. Salvador, M. Solà,
E. Matito, Mol. Phys. 2016, 114, 1345.
[114] K. Patkowski, K. Szalewicz, B. Jeziorski, J. Chem. Phys. 2006, 125,
154107.
[115] K. Patkowski, K. Szalewicz, B. Jeziorski, Theor. Chem. Acc. 2010,
127, 211.
[116] A. J. Misquitta, A. J. Stone, J. Chem. Theory Comput. 2016, 12, 4184.
[117] A. J. Stone, J. Phys. Chem. A 2017, 121, 1531.
[118] A. J. Stone, K. Szalewicz, J. Phys. Chem. A 2018, 122, 733.
[119] R. J. Azar, M. Head-Gordon, J. Chem. Phys. 2012, 136, 024103.
[120] A. J. Misquitta, J. Chem. Theory Comput. 2013, 9, 5313.
[121] C. Foroutan-Nejad, S. Shahbazian, R. Marek, Chem. A Eur. J. 2014, 20,
10140.
[122] S. Shahbazian, Chem. A Eur. J. 2018, 24, 5401.
[123] A. J. Stone, J. Chem. Theory Comput. 2005, 1, 1128.
[124] V. Tognetti, L. Joubert, ChemPhysChem 2017, 18, 2675.
[125] F. M. Bickelhaupt, M. Solà, C. F. Guerra, J. Comput. Chem. 2007, 28, 238.
[126] F. M. Bickelhaupt, K. N. Houk, Angew. Chem. Int. Ed. Engl. 2017, 56, 10070.
[127] I. Fernández, F. M. Bickelhaupt, Chem. Soc. Rev. 2014, 43, 4953.
[128] M. El-Hamdi, W. Tiznado, J. Poater, M. Solà, J. Org. Chem. 2011, 76,
8913.
[129] M. El-Hamdi, O. El Bakouri Farri, P. Salvador, B. A. Abdelouahid, M. S. El Begrani, J. Poater, M. Solà, Organometallics 2013, 32, 4892.
[130] I. Mayer, Phys. Chem. Chem. Phys. 2012, 14, 337.
[131] S. Racioppi, R. Della Pergola, V. Colombo, A. Sironi, P. Macchi, J. Phys.
Chem. A 2018, 122, 5004.
[132] F. Feixas, E. Matito, J. Poater, M. Solà, J. Comput. Chem. 2008, 29, 1543.
[133] M. Solà, F. Feixas, J. O. C. Jiménez-Halla, E. Matito, J. Poater, Symmetry
2010, 2, 1156.
[134] F. Feixas, E. Matito, M. Duran, M. Solà, B. Silvi, J. Chem. Theory Comput.
2010, 6, 2736.
[135] S. Osuna, M. Swart, E. J. Baerends, F. M. Bickelhaupt, M. Solà,
ChemPhysChem 2009, 10, 2955.
[136] T. H. Richardson, S. de Gala, R. H. Crabtree, J. Am. Chem. Soc. 1995,
117, 12875.
[137] M. Woinska, D. Jayatilaka, B. Dittrich, R. Flaig, P. Luger, K. Woźniak,
P. M. Dominiak, S. Grabowsky, ChemPhysChem 2017, 18, 3290.
[138] F. Weinhold, E. D. Glendening, J. Phys. Chem. A 2018, 122, 724.
[139] P. L. A. Popelier, In Molecular Chemistry; R. Chauvin, C. Lepetit, B. Silvi,
E. Alikhani, Eds., Springer International Publishing, Cham, 2016, p. 23.
[140] E. Falkowska, V. Tognetti, L. Joubert, P. Jubault, J.-P. Bouillon,
X. Pannecoucke, RSC Adv. 2015, 5, 6864.
[141] W. V. O. Quine, From a Logical Point of View, Harvard University Press,
Cambridge, 1953.
[142] G. C. Groenenboom, E. M. Mas, R. Bukowski, K. Szalewicz,
P. E. S. Wormer, A. van der Avoird, Phys. Rev. Lett. 2000, 84, 4072.
[143] R. Bukowski, K. Szalewicz, G. C. Groenenboom, A. van der Avoird,
J. Chem. Phys. 2006, 125, 044301.
[144] A. M. Reilly, R. I. Cooper, C. S. Adjiman, S. Bhattacharya, A. D. Boese,
J. G. Brandenburg, P. J. Bygrave, R. Bylsma, J. E. Campbell, R. Car,
D. H. Case, R. Chadha, J. C. Cole, K. Cosburn, H. M. Cuppen, F. Curtis,
G. M. Day, R. A. DiStasio Jr, A. Dzyabchenko, B. P. van Eijck,
D. M. Elking, J. A. van den Ende, J. C. Facelli, M. B. Ferraro, L. Fusti-Molnar, C. A. Gatsiou, T. S. Gee, R. de Gelder, L. M. Ghiringhelli,
H. Goto, S. Grimme, R. Guo, D. W. M. Hofmann, J. Hoja, R. K. Hylton,
L. Iuzzolino, W. Jankiewicz, D. T. de Jong, J. Kendrick, N. J. J. de Klerk,
H. Y. Ko, L. N. Kuleshova, X. Li, S. Lohani, F. J. J. Leusen, A. M. Lund,
J. Lv, Y. Ma, N. Marom, A. E. Masunov, P. McCabe, D. P. McMahon,
H. Meekes, M. P. Metz, A. J. Misquitta, S. Mohamed, B. Monserrat,
R. J. Needs, M. A. Neumann, J. Nyman, S. Obata, H. Oberhofer,
A. R. Oganov, A. M. Orendt, G. I. Pagola, C. C. Pantelides, C. J. Pickard,
R. Podeszwa, L. S. Price, S. L. Price, A. Pulido, M. G. Read, K. Reuter,
E. Schneider, C. Schober, G. P. Shields, P. Singh, I. J. Sugden,
K. Szalewicz, C. R. Taylor, A. Tkatchenko, M. E. Tuckerman, F. Vacarro,
M. Vasileiadis, A. Vazquez-Mayagoitia, L. Vogt, Y. Wang, R. E. Watson,
G. A. de Wijs, J. Yang, Q. Zhu, C. R. Groom, Acta Cryst. B 2016, 72, 439.
[145] V. F. Lotrich, K. Szalewicz, Phys. Rev. Lett. 1997, 79, 1301.
[146] H. L. Williams, K. Szalewicz, B. Jeziorski, R. Moszynski, S. Rybak, J. Chem.
Phys. 1993, 98, 1279.
[147] J. A. Rackers, C. W. Liu, P. Y. Ren, J. W. Ponder, J. Chem. Phys. 2018,
149, 084115.
[148] C. Perez, M. T. Muckle, D. P. Zaleski, N. A. Seifert, B. Temelso,
G. C. Shields, Z. Kisiel, B. H. Pate, Science 2012, 336, 897.
[149] U. Gora, W. Cencek, R. Podeszwa, A. van der Avoird, K. Szalewicz,
J. Chem. Phys. 2014, 140, 194101.
[150] C. G. Hempel, P. Oppenheim, Philos. Sci. 1948, 15, 135.