CMS PAPER CFT-09-007
CMS Paper
arXiv:0911.4842v2 [physics.ins-det] 15 Jan 2010
2009/11/21
CMS Data Processing Workflows during an Extended
Cosmic Ray Run
The CMS Collaboration∗
Abstract
The CMS Collaboration conducted a month-long data taking exercise, the Cosmic
Run At Four Tesla, during October-November 2008, with the goal of commissioning
the experiment for extended operation. With all installed detector systems participating, CMS recorded 270 million cosmic ray events with the solenoid at a magnetic
field strength of 3.8 T. This paper describes the data flow from the detector through
the various online and offline computing systems, as well as the workflows used for
recording the data, for aligning and calibrating the detector, and for analysis of the
data.
∗ See Appendix A for the list of collaboration members
1 Introduction
The primary goal of the Compact Muon Solenoid (CMS) experiment [1] is to explore physics at
the TeV energy scale, exploiting the collisions delivered by the Large Hadron Collider (LHC) [2].
The central feature of the CMS apparatus is a superconducting solenoid, of 6 m internal diameter. Within the field volume are the silicon pixel and strip tracker, the crystal electromagnetic
calorimeter (ECAL) and the brass-scintillator hadronic calorimeter (HCAL). Muons are measured in drift tube chambers (DT), resistive plate chambers (RPC), and cathode strip chambers
(CSC) embedded in the steel return yoke. A detailed description of the experimental apparatus
can be found elsewhere [1].
A key element to the success of the experiment is the adequate design, implementation and
smooth operation of the data processing workflows from the detector to the end user analysis.
The month-long data taking exercise known as the Cosmic Run At Four Tesla (CRAFT) [3]
represented a major test for these workflows. This paper describes the technical details of the
data flow from the detector to the final analysis. It explains the data acquisition system and the
various online and offline computing systems, and describes the software and the workflows
used in the data taking chain.
Section 2 describes the online data taking environment and Section 3 the high-level trigger
chain including the binary raw detector output content. The computing infrastructure used
to handle the recorded data is detailed in Section 4, and the software and its special setup for
reconstructing cosmic ray events in Section 5. This is followed by the description of the data
quality monitoring and the various validation steps performed during data taking in Section 6.
The recorded data have been used to derive alignment and calibration constants, which are described in Section 7. The management and distribution of the constants via the CMS conditions
database system are addressed in Section 8, while the analysis of the recorded cosmic ray muon
data is described in Section 9.
2 Online system
The CMS trigger and data acquisition (DAQ) system is designed to collect and analyse the
detector information at the LHC bunch-crossing frequency of 40 MHz. The rate of events to be
recorded for offline processing and analysis is a few hundred Hz. The first-level trigger
(L1) is designed to reduce the incoming data rate to a maximum of 100 kHz, by processing
fast trigger information coming from the calorimeters and the muon chambers, and selecting
events with interesting signatures. The DAQ system must sustain a maximum input rate of 100
kHz, or a data flow of about 100 GB/s, coming from approximately 650 data sources from the
different detector components. The DAQ system then reduces this rate by a factor of 1000 using
a high-level trigger (HLT, Section 3), a software filtering system running on a large processor
farm.
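These figures imply an average event size of roughly 1 MB and an HLT rejection factor of about 1000; a short back-of-envelope check (the 1 MB event size is inferred here from the quoted numbers, not stated explicitly above):

```python
# Back-of-envelope check of the trigger and DAQ figures quoted above.
l1_output_rate_hz = 100e3       # maximum L1 accept rate
daq_throughput_bytes_s = 100e9  # ~100 GB/s into the event builder
hlt_output_rate_hz = 100.0      # order of a hundred Hz written to storage

avg_event_size = daq_throughput_bytes_s / l1_output_rate_hz
reduction_factor = l1_output_rate_hz / hlt_output_rate_hz

print(f"average event size   ~ {avg_event_size / 1e6:.0f} MB")   # ~1 MB
print(f"HLT reduction factor ~ {reduction_factor:.0f}")          # ~1000
```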
The architecture of the CMS DAQ system, described in detail elsewhere [1, 4], is shown schematically in Fig. 1. The sub-detector front-end systems store data continuously in 40 MHz-pipelined
buffers. Synchronous L1 accept signals are distributed to the front-ends via the timing, trigger,
and control system. When an accept signal is received, the corresponding data are extracted
from the buffers in the front-ends and pushed into the DAQ system through the links in the
readout systems. The various readout fragments coming from different parts of the apparatus are subsequently assembled into complete events (event building) in two stages inside the
high-performance builder network. Firstly, the front-end fragments are assembled into larger
groups called super fragments, which are delivered to readout units organised in eight independent sets (DAQ slices). All super fragments belonging to the same event are given to the same slice; the slices are fed with events in a round-robin fashion. In each slice, individual events are assigned to event buffers (builder units) by an event manager, and each builder unit assembles single events after having obtained all its super fragments from the readout units in the slice. The builder unit hands over complete events to the filter systems running individual filter units upon request. The filter unit runs the HLT algorithms to select events to be accepted for storage, and eventually hands over accepted events to the computing services. In the end, storage managers, one for each slice, stream event data to disk and transfer complete data files to the CMS Tier-0 (Section 4). More details about the HLT and data logging are discussed in the following section.

Figure 1: Simplified schematic view of the CMS Data Acquisition System architecture. Shown are the key building blocks for a single slice of the DAQ system.
3 High level trigger and data streams
The CMS high-level trigger algorithms are executed in a farm comprising 720 computing nodes,
the event filter farm, which runs the HLT reconstruction and selection algorithm sequence on
individual events in parallel. The products of the HLT execution (e.g. reconstructed physics
objects, tracks, etc.) can be added to the event before it is sent to storage, thus facilitating later
debugging and analysis of the HLT performance.
The HLT reconstruction uses the same framework as the offline reconstruction [5]. The HLT
configuration (menu) is delivered to the individual processes by the run control system [6].
HLT configurations are managed by a configuration system designed around a relational
database abstraction of the individual components (reconstruction modules, filters, etc.) and
their parameters [7]. An HLT menu consists of a set of trigger paths, each consisting of a sequence of reconstruction and selection modules. Each path is normally designed to select a
specific physics signature (e.g. inclusive muon events). Calibration and other conditions data
are retrieved from the online database and distributed to the HLT processes by a hierarchy of
cache servers, connecting to a redundant FroNTier server [8] that provides uncomplicated web
access to databases. It is used by the CMS software, on all tiers of the distributed-computing
infrastructure of CMS, to retrieve calibration and alignment constants (Section 4).
Events accepted by the HLT are delivered to the storage manager system (SM) via the same
switched network used for event building. The SM consists of 16 independent processes running on independent computers, connected through a fibre-channel switch to eight disk
arrays, for a total of 320 TB of disk space. The SM is capable of an aggregate maximum throughput to disk of up to 2 GB/s while concurrently transferring complete data files to the Tier-0 at
up to 800 MB/s.
For CRAFT, CMS operated four slices of the DAQ system using 275 computing nodes and 320 TB
of available disk capacity for the SM system.
Routing of individual event data to files in the SM is driven by the definition of output streams,
which group events selected by specific HLT paths. Several different streams are normally defined to group together events according to their offline usage (e.g. primary “physics” stream,
“express” stream, calibration streams, etc.). The same path can feed multiple streams and
hence, in general, individual streams can overlap. Within a stream, sets of paths selecting
similar signatures (e.g. “inclusive muons”, etc.) can be further grouped into primary datasets
(PDs). A PD is defined as a subset of the stream consisting of the events satisfying a certain
group of paths selected by that stream. The PD definition is subsequently used by the Tier-0
repacking step to split the contents of a stream into its component PDs (Section 4). Overlaps
between streams affect the transfer bandwidth to the Tier-0 while overlaps between PDs primarily affect the disk and tape space consumption of the recorded data. Both the stream and
PD definition are intimately connected with the HLT menu, and hence the three are handled as
a unit. The same configuration management system is used to maintain and distribute them,
and a single identification key is used by the HLT, the SM, and the Tier-0 to retrieve the relevant
portion of the configuration from the database.
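To illustrate how HLT paths, streams, and primary datasets relate, the following sketch models the configuration as plain Python dictionaries and routes an event according to its fired paths. The path names and the "Express" stream are hypothetical, and the real CMS configuration lives in a relational database rather than in code:

```python
# Illustrative model of the HLT menu bookkeeping: paths feed streams, and each
# stream is split into primary datasets (PDs) defined as groups of its paths.
menu = {
    "Physics": {
        "Cosmics":     ["HLT_MuonPath1", "HLT_MuonPath2"],
        "Calo":        ["HLT_CaloPath1"],
        "MinimumBias": ["HLT_MinBiasPixel", "HLT_RandomTrigger"],
    },
    "Express": {
        "Express": ["HLT_MuonPath1"],    # the same path may feed several streams
    },
}

def route(fired_paths):
    """Return {stream: [primary datasets]} for the HLT paths fired by one event."""
    result = {}
    for stream, datasets in menu.items():
        selected = [pd for pd, paths in datasets.items() if set(paths) & set(fired_paths)]
        if selected:
            result[stream] = selected
    return result

print(route({"HLT_MuonPath1"}))   # {'Physics': ['Cosmics'], 'Express': ['Express']}
```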
3.1 Streams and primary datasets in CRAFT
For CRAFT, a primary physics stream and several alignment and calibration streams were defined (see Section 7), based on L1 and HLT trigger decisions. The physics stream was divided into three main physics primary datasets and several "technical" datasets, used for subdetector-specific commissioning analyses.
The three physics primary datasets were:
• Cosmics: all events satisfying at least one of the muon trigger paths. These events are mainly used in cosmic ray data analyses. The rate was around 300 Hz.
• Calo: all events satisfying all other L1 physics trigger paths. There was no explicit exclusion of the cosmic ray trigger paths in the Calo dataset, but the overlap was minimal. The Calo dataset was mainly populated by detector noise events and amounted to about 300 Hz.
• MinimumBias: all events selected either by a forward hadronic calorimeter technical trigger or by one requiring minimal activity in the pixel tracker. The MinimumBias dataset also contained a fraction of random triggers, useful for commissioning studies. Its rate was about 10 Hz.
The processing of the output streams and their splitting in primary datasets is described in the
next section.
4 Data Handling
The computing centre at CERN hosts the Tier-0 of the distributed computing system of CMS
[9]. The CMS computing system relies on a distributed infrastructure of Grid resources, services
and toolkits, to cope with computing requirements for storage, processing, and analysis. It is
based on building blocks provided by the Worldwide LHC Computing Grid project (WLCG)
[10]. The distributed computing centres available to CMS around the world are configured
in a tiered architecture (as proposed in the MONARC [11] working group), that behaves as a
single coherent system. The Tier-0 hosts the initial processing of data coming from the detector and corresponds to about 20% of all computing resources available to CMS. The Tier-1 level
takes care of subsequent processing and re-processing workflows (Section 4.3) and has approximately 40% of the CMS computing resources available, while the Tier-2 level hosts Monte Carlo
(MC) simulation and analysis and uses the remaining ∼ 40% of all CMS computing resources.
All streams defined by the online system (Section 2) and the HLT (Section 3) are written in a
binary data format, referred to as streamer files. A transfer system copies the streamer files from
the online systems at the detector site to the main CERN computing centre to be converted to
a ROOT-based event data format [5, 12], split into primary datasets and stored on tape. A first
reconstruction is performed and its output is stored in separate datasets. The event content
of the detector measurements is called the RAW data-tier and the output of the reconstruction
pass is called the RECO data-tier.
The primary datasets are distributed amongst seven Tier-1 sites available to CMS for custodial storage and further processing. They are located in France (T1_FR_IN2P3), Germany (T1_DE_FZK), Italy (T1_IT_CNAF), Spain (T1_ES_PIC), Taiwan (T1_TW_ASGC), the United Kingdom (T1_UK_RAL), and the United States (T1_US_FNAL).
In a final step, datasets stored at the Tier-1 sites are served to Tier-2 centres, where the final
analysis to extract physics results is performed.
4.1 Tier-0 workflows
The Tier-0 performs the conversion of the streamer files into the ROOT-based event data format
(repacking) and splits the streams into primary datasets (Section 3). This is followed by the
reconstruction of the primary datasets. In the case of CRAFT, only the three main physics
primary datasets were reconstructed at the Tier-0.
A Python-based processing system [13] with an ORACLE database schema for state tracking
(T0AST) is used to control a dedicated batch system queue (managed by LSF [14]) to split and
process the data. The input and output files are handled by the CERN Advanced STORage
manager (CASTOR) mass storage system [15]. Table 1 gives an overview of the volumes of
data produced from the central data-handling perspective during CRAFT. CMS collected over
2 billion events including technical events for monitoring and calibration purposes. During
data taking, CMS recorded events without magnetic field and with the solenoid at a magnetic
field strength of 3.8 T.
In subsequent steps, the output of the reconstruction is used to derive specialised alignment
and calibration datasets (Section 7) and data quality information is extracted and uploaded to
a web server (see Section 6).
For these two processing steps, the system normally used for MC production and central processing at the Tier-1 sites was used as a temporary solution ([16], Section 4.3). The processing of
newly recorded data was triggered periodically during data taking to produce alignment and
Table 1: Overview of data produced during the CRAFT run, from the central data-handling perspective.

Number of primary datasets produced | 11
Number of events recorded | 2 × 10^9
Number of events in Cosmics primary dataset | 370 × 10^6
Number of runs recorded | 239
Total data volume recorded and produced | 396 TB
Total data volume recorded and produced in Cosmics primary dataset | 133 TB
calibration datasets and upload quality information to a web server. These functionalities are
now integrated in the Python-based Tier-0 processing system.
The LSF queue at the Tier-0 was dimensioned for early data taking in Sept. 2008 and allowed for
a maximum of 2250 jobs running in parallel. As shown in Fig. 2, this capacity was never used
completely owing to the short reconstruction time of the low-occupancy cosmic ray events.
The average reconstruction time in CRAFT was about 0.75 seconds per event. This should be
compared to the reconstruction time of proton-proton collision events, which is estimated to
be about 5 seconds per event. This estimate was derived from events producing top quark
pairs, which do not represent the bulk of the expected events but resemble many of the physics
processes of interest for analysis.
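Combining the primary-dataset rates from Section 3.1 with the reconstruction times above gives a rough estimate of the number of batch slots actually needed, which illustrates why the queue was never saturated. The sketch below is an order-of-magnitude estimate only; the 300 Hz collision recording rate used for comparison is an assumption:

```python
# Rough estimate of concurrently busy Tier-0 batch slots needed for prompt
# reconstruction during CRAFT, compared with the size of the LSF queue.
input_rate_hz = 300 + 300 + 10   # Cosmics + Calo + MinimumBias rates (Section 3.1)
t_cosmic_s = 0.75                # average reconstruction time per event in CRAFT
t_collision_s = 5.0              # estimated time per proton-proton collision event
collision_rate_hz = 300          # assumed collision recording rate, for comparison only
queue_slots = 2250

print(f"slots needed to keep up with CRAFT data: ~{input_rate_hz * t_cosmic_s:.0f}")
print(f"slots needed at 5 s/event and 300 Hz:    ~{collision_rate_hz * t_collision_s:.0f}")
print(f"available LSF slots:                      {queue_slots}")
```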
Figure 2: Utilization of the Tier-0 LSF batch queue during CRAFT. The maximum of 2250 batch
slots were never fully utilized and all jobs started promptly after submission. (Taken from
monitoring sources).
The in-bound and out-bound network data rates to the Tier-0 CASTOR disk pool are shown in
Fig. 3. The reading rate peaked at 5 GB/s while the writing rate was always below 1 GB/s. The
available network bandwidth was able to support all Tier-0 cosmic ray data taking workflows.
Overall, the Tier-0 infrastructure performed stably and reliably, and was sufficiently provisioned for cosmic ray data taking.
6
4
Data Handling
Figure 3: In-bound and out-bound network data rates to the Tier-0 CASTOR disk pool at CERN.
The reading rate peaked at 5 GB/s while the writing rate was always below 1 GB/s. (Taken
from monitoring sources).
4.2
Data storage and transfers
The CMS computing model foresees at least two copies of all data on independent storage
media, for example on tape at two different sites. To fulfill this requirement, all output datasets
in ROOT format are stored on tape at the Tier-0. This copy, called the archival copy of the data,
is only stored for backup purposes and is not accessible for processing workflows. A further
copy is distributed amongst the Tier-1 sites for custodial storage on tape. This copy is called the
primary copy and access is provided for further processing on the Tier-1 level (Section 4.3) and
analysis on the Tier-2 level. As a safety measure during CRAFT, all streamer files were stored
temporarily on tape as well.
A data-transfer management system named PhEDEx (Physics Experiment Data Export) [17] is
used to handle the movement of data between these computing centres. Deployed at all CMS
sites, PhEDEx automates many low-level tasks, such as large-scale data replication and tape
migration, and guarantees consistency of the dataset copies. PhEDEx uses standard WLCG
transfer tools such as FTS [18] and SRM [19], which interface with the mass storage systems at
Tier-1 and Tier-2 centres. PhEDEx provides site managers and users with a centralised system
for making data movement requests and provides status and overview information.
During CRAFT, the recorded and processed primary datasets were distributed amongst the
Tier-1 sites according to available free tape space, taking into account the processing capacity and
reliability of the Tier-1 sites. For the Cosmics primary dataset, the average size per event for
the RAW data tier was 105 kB/event and for the RECO data tier 125 kB/event.
Figure 4 shows the transfer rate during CRAFT from the Tier-0 to the Tier-1 sites. The transfers
averaged 240 MB/s with rates exceeding 400 MB/s on several occasions.
During CRAFT, a total of 600 TB was transferred out of CERN to the Tier-1 sites. Figure 5 shows
the cumulative transfer volume per Tier-1 site.
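The quoted average rate and total export volume are mutually consistent given the roughly four-week duration of the run; a quick arithmetic cross-check (the 29-day window is an approximation read off the date range in Figs. 4 and 5):

```python
# Cross-check: the average Tier-0 -> Tier-1 transfer rate times the duration
# of CRAFT should roughly reproduce the ~600 TB total export quoted above.
avg_rate_mb_per_s = 240
duration_days = 29                      # approx. 15 Oct - 13 Nov 2008 (Figs. 4 and 5)
seconds = duration_days * 24 * 3600
total_tb = avg_rate_mb_per_s * 1e6 * seconds / 1e12
print(f"estimated export volume: {total_tb:.0f} TB")   # ~600 TB
```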
Overall, the transfer system performed very well and transferred all CRAFT data reliably to
the Tier-1 sites. One very large file of 200 GB failed to transfer to the US Tier-1 site. Because the file was about 20 times larger than the average file size in the dataset, a timeout occurred during each transfer attempt, causing a short delay in the analysis of the dataset. The file was later analysed and safeguards have since been put into place to prevent similar failures from occurring.
Figure 4: Transfer rates from Tier-0 to Tier-1 centres over the duration of CRAFT. The average
was about 240 MB/s. (Taken from monitoring sources).
Figure 5: Cumulative transfer volume from Tier-0 to Tier-1 centres over the duration of CRAFT.
(Taken from monitoring sources).
4.3 Tier-1 processing
The central processing at the Tier-1 sites was performed using the current MC production system [16]. The system is written in Python [20] and uses a MySQL database [21] to schedule
jobs by interfacing with different Grid middlewares [22–24]. However, compared to the Tier-0 system, it does not track the state of every processing step in detail. It was optimised for
production of Monte Carlo samples, for which 100% accountability is not a primary concern
because more events can be easily generated in the case of failures or infrastructure problems.
The requirements for the Tier-1 processing workflows are in fact very different compared to
MC production workflows. During processing on the Tier-1 level, an input RAW or RECO
dataset is processed to either prepare a new reconstruction pass with updated software and/or
conditions and alignment constants (re-reconstruction) or to extract events of interest from the
total dataset to reduce the amount of data to be analysed (skimming). Accounting of the processing of every single input event is of highest priority as all events have to be accounted for
in order to correctly calculate the luminosity for the processed LHC collisions.
Table 2: List of skims exercised during CRAFT, showing for each skim the parent primary
dataset, acceptance of the skim event selection and output event content combined from different data tiers. The RAW data-tier consists of all detector measurements while the RECO
data-tier contains all reconstructed properties of an event.
Skim name | Prim. Dataset | Acceptance | Event Content
SuperPointing | Cosmics | 0.27% | RAW-RECO
TrackerPointing | Cosmics | 4.50% | RAW-RECO
MultiMuon | Cosmics | 0.44% | RECO
PtMinSelector | Cosmics | 0.68% | RECO
CSC Skim | Cosmics | 3.84% | RAW-RECO
CSC Skim BFieldStudies | Cosmics | 0.04% | RAW-RECO
HCALHighEnergy | Cosmics/Calo/MinBias | 0.09% (on Cosmics) | RAW-RECO
ECALSkim | Cosmics/Calo | 0.40% (on Cosmics) | RECO
In the near future, the Monte Carlo production system currently in use will be completely redesigned
based on the state tracking technology of the Tier-0 processing system. Guaranteeing 100%
accountability, the new system is planned to be put into operation in spring 2010.
In CRAFT, the physics primary datasets were skimmed to reduce the amount of data to be processed by physics analyses, while keeping the full physics content. Some skims combined the RAW and
RECO event content in the same dataset to simplify debugging of the software and low-level
analysis of the events. In addition, many skims were used for re-reconstruction to validate new
algorithms or new derived alignment and calibrations conditions thus reducing the number of
full reprocessings.
The list of skims exercised during CRAFT is shown in Table 2 by listing the parent primary
dataset and acceptance of the skim event selection as well as the output event content of each
skim.
The SuperPointing and TrackerPointing skims preferentially selected events containing muons
whose propagation loosely traversed the Pixel and Strip tracker regions, respectively. The MultiMuon skim contained events with more than four reconstructed muons for muon shower
analyses, while the PtMinSelector skim retained only events with high-energy muons (pT > 50 GeV/c).
The CSC Skim skim selected events with activity in the CSC endcap muon detectors. In addition, good segments from CSC hits were selected in the CSC Skim BFieldStudies skim, which
were used in a measurement of the magnetic field in the endcap yoke regions.
The HCAL skim selected events with high energy deposits in the HCAL, and the ECAL skim selected events with high energy deposits in the ECAL due to either showering muons or noise activity.
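Assuming the acceptances of Table 2 apply uniformly to the roughly 370 million events of the Cosmics primary dataset (Table 1), the approximate sizes of the Cosmics-based skims follow directly; the sketch below is only an estimate:

```python
# Approximate event yields of the Cosmics-based skims, assuming the Table 2
# acceptances apply uniformly to the ~370 million events of the Cosmics dataset.
cosmics_events = 370e6
acceptance = {
    "SuperPointing": 0.0027,
    "TrackerPointing": 0.0450,
    "MultiMuon": 0.0044,
    "PtMinSelector": 0.0068,
    "CSC Skim": 0.0384,
    "CSC Skim BFieldStudies": 0.0004,
}
for name, fraction in acceptance.items():
    print(f"{name:24s} ~ {cosmics_events * fraction / 1e6:5.1f} M events")
```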
Two re-reconstruction passes of all data taken in CRAFT were made after the data taking period
ended. The first pass was started in November 2008 and finished in January 2009. It suffered
significantly from infrastructure and technical problems. After improvements, a second pass
was performed from the 12th to the 25th of February 2009. Both re-reconstruction passes produced the associated alignment and calibration datasets, as was done for the Tier-0 processing.
The major issue observed during the re-reconstruction was the lack of complete accountability
of the reprocessing system. This complicated the task of identifying job failures, which had to
be done manually. The net result was that a small fraction (a few percent) of the final processing was incomplete.
Table 3: Summary of the different reconstruction algorithms used during CRAFT. The central
column indicates the main algorithm executed as input to subsequent steps. The right column
shows the alternative algorithms performed in parallel. See Section 5.1 for details.
Component | Default Code/Configuration | Alternative versions
Tracker local | standard | none
ECAL | pulse fit based | standard weight based
HCAL | threshold for cosmic ray events | none
DT | standard | no-drift (coarse hits), t0 fitting
CSC | standard | none
RPC | standard | none
Tracking | dedicated seeding and navigation | Road Search (cosmic version); Cosmic Track Finder (no pattern reco); track splitting (up/down)
Muon | dedicated cosmic muon reconstruction | barrel/endcap only; different DT local reco; LHC vs. Cosmic navigation and fit; single-leg vs. two-leg
Jet and MET | standard | none
Electron/Photon | subset of standard | none
B tagging | not run | none
Particle Flow and Tau reconstruction | not run | none
This will be addressed in the future by the new Tier-1 processing system, developed for LHC
collision running.
5 Reconstruction Software
The main goals of event reconstruction in the context of CRAFT were to provide a reliable cosmic ray event reconstruction in order to support detector performance, calibration, and alignment studies, and to test as much as possible the software components to be used in
proton-proton collision events with the LHC.
In order to accomplish these objectives, a dedicated configuration of the reconstruction software tools was prepared and optimised for cosmic ray muon events. While some part of the
reconstruction code developed for proton-proton collisions could largely be re-used, many
key elements needed additional code development to support the reconstruction of cosmic ray
muon events. The code developed for proton-proton collision events is nevertheless used and
tested wherever possible. In several cases, two or more reconstruction algorithms performing
similar tasks have been run in parallel to gain experience (Table 3).
In the following section we briefly describe the major changes in reconstruction code during
the CRAFT data taking period.
5.1 Dedicated reconstruction algorithms for cosmic ray events
Local reconstruction in the electromagnetic calorimeter
In parallel with the standard ECAL local reconstruction, an additional algorithm has been used
to provide a better measurement of the energy deposited by a cosmic ray muon [25]. The
standard algorithm is optimised for particles reaching the ECAL crystals within a very narrow
time window, as appropriate for LHC collisions, while in case of cosmic ray muons the actual
time of arrival is spread over the 25 ns of the bunch clock interval. The modified algorithm
performs a fit of the pulse shape, sampled tenfold by the ECAL hardware, and thus provides
a precise measurement of both the energy and the arrival time of the cosmic ray muon. While
particles from LHC interactions release energy in only a few crystals, cosmic ray muons cross
the ECAL surface at a wide range of incidence angles, which can spread the energy deposit
over a sizable number of crystals. A dedicated version of the clustering algorithm has been
used to collect this energy most efficiently.
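A minimal sketch of such an amplitude-and-time fit to ten uniformly spaced samples is given below; the analytic pulse shape, the noise level, and the numerical values are purely illustrative and are not the actual CMS ECAL parameterisation:

```python
# Illustrative least-squares fit of amplitude and arrival time to ten samples
# spaced by 25 ns, using a generic pulse shape (not the actual CMS ECAL one).
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amplitude, t0, tau=40.0):
    """Generic unipolar pulse: zero before t0, then a rise and an exponential decay."""
    x = np.clip((t - t0) / tau, 0.0, None)
    return amplitude * x * np.exp(1.0 - x)

t_samples = 25.0 * np.arange(10)                  # ten samples, 25 ns apart
rng = np.random.default_rng(1)
samples = pulse(t_samples, 120.0, 60.0) + rng.normal(0.0, 2.0, 10)  # hypothetical signal

(amp_fit, t0_fit), _ = curve_fit(pulse, t_samples, samples, p0=[100.0, 50.0])
print(f"fitted amplitude = {amp_fit:.1f}, fitted arrival time = {t0_fit:.1f} ns")
```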
Track reconstruction within the Tracker
The track reconstruction for CRAFT is largely based on the methods already employed for the
Magnet Test and Cosmic Challenge (MTCC) in 2006 [26, 27], and the Tracker Integration Facility (TIF) sector test [28] in Spring 2007. The main differences compared to standard tracking
for collisions are in the seeding and in the navigation steps. Seeding combines hits from several neighbouring layers to generate the starting point for the track pattern recognition and is
mainly based on the pixel system in case of LHC collision events; this has been modified to be
able to reconstruct those trajectories not crossing the very small pixel volume. The modified
seeding mainly uses those layers of the silicon strip tracker that provide a three dimensional
position measurement. Navigation, on the other hand, provides the set of the paths the particle
can possibly have taken between one tracking layer and another; these sets differ considerably
between cosmic ray muons and particles originating from the central interaction point.
For diagnostic purposes, the concept of top-bottom track splitting has been introduced, in
which the incoming and outgoing part of the cosmic ray muon trajectory, with respect to the
point of closest approach to the beam line, are reconstructed as separate tracks. Comparison
of the parameters of these two legs serves as a powerful instrument for alignment and tracking
performance studies [29].
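One simple way to quantify the agreement between the two legs, used here only as an illustration with invented numbers, is the normalised difference (pull) of a track parameter measured independently in the two halves:

```python
# Illustration of comparing a track parameter measured on the two legs of a
# split cosmic ray track; the numbers below are invented for the example.
import math

def pull(top, sigma_top, bottom, sigma_bottom):
    """Normalised top-bottom difference; close to a unit Gaussian if the errors are realistic."""
    return (top - bottom) / math.sqrt(sigma_top**2 + sigma_bottom**2)

# Hypothetical transverse momentum measurements of the two legs (GeV/c).
print(f"pT pull = {pull(52.3, 1.1, 50.8, 1.2):.2f}")   # ~0.9: legs agree within errors
```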
Reconstruction of muons
This section describes the reconstruction of muon trajectories in the CMS muon system. When
possible, the muon system part of the trajectory is combined with a track reconstructed within
the tracker, resulting in a global muon track.
Several different flavours of muon reconstruction are used combining different configurations
of the various components. As described in Ref. [30] it is possible to perform dedicated cosmic
ray muon reconstruction as a single track (referred to as single leg mode), or split the trajectory
into an incoming and outgoing track (two legs). Alternatively, the standard reconstruction for
collision events can be used, in which optionally the reconstruction can be restricted to either
the barrel or the endcap regions.
Muon reconstruction also depends on the local reconstruction of the DT, RPC, and CSC subdetectors, for which different variants are available. For reconstruction of the DT track segments, two options address the fact that cosmic ray muons, contrary to those from collisions, arrive at an arbitrary time not correlated with the bunch clock. The t0-corrected segment
reconstruction treats the arrival time t0 as an additional free parameter in the fit, while the
no-drift variant does not use drift time information at all, resulting in relatively coarse point
resolution.
The final reconstructed muon object combines all the information available for a reconstructed
muon, including standalone muon reconstruction (using only DT, RPC, CSC), global muon reconstruction (matching the track from standalone muon reconstruction with the silicon tracker
track), and tracker-based reconstruction (matching the tracker tracks with muon segments).
5.2 Deployment of software updates during prompt reconstruction
In view of the large-scale commissioning nature of CRAFT, prompt reconstruction is one of the
workflows for which a low latency of code corrections is important. Fast deployment of bug
fixes may generally become necessary when new major releases have been deployed for the
first time, when running conditions change drastically, or in the early stage of a data taking
period. Most of the problems encountered during CRAFT were related to unexpected detector
inputs and lack of corresponding protections in the code to handle them properly.
The procedure for deploying an update was handled by several different shift roles with different responsibilities. The prompt reconstruction operator reported job failures. These failures
were investigated by the offline run manager who identified the code to be fixed and contacted
the appropriate experts. A minimal bug fix was usually provided within a few hours; if no
correction could be achieved on this time scale, the corresponding reconstruction feature was
disabled to restore Tier-0 operations. This update was provided either as a new configuration
or as a new software release.
The CRAFT experience has been a driving element for CMS to introduce a patch-release system
to reduce the time required for deployment of new code. It is also seen as an advantage to
be able to pause the Tier-0 processing for about 24 hours if necessary to take actions. These
two points, combined with the fast feedback that an additional express stream processing will
provide, should allow minimising the Tier-0 inefficiency due to reconstruction software failures
in future data taking.
5.3 Evolution of Tier-1 reprocessing
The two reprocessing cycles executed at Tier-1 centres allowed the reconstruction to be rerun
with updated detector conditions (see Section 7.4 for details) and to improve the reconstruction
code. In addition, these reprocessing steps were used to tailor slightly the content of the reconstruction output files according to requests from the commissioning and alignment groups. The
biggest changes concerned the muon reconstruction and the track reconstruction in the tracker.
6 Data Quality Monitoring and Prompt Feedback
Data quality monitoring is critically important for ensuring good detector and operation efficiency, and for the reliable certification of the recorded data for physics analyses. The CMS-wide DQM system comprises:
• tools for creating, filling, transporting, and archiving of histograms and scalar monitor elements, with standardised algorithms for performing automated quality and
validity tests on distributions;
• online monitoring systems for the detector, trigger, and DAQ hardware status and
data throughput;
• offline monitoring systems for reconstruction and for validating calibration results,
software releases, and simulated data;
• visualisation of the monitoring results;
• certification of datasets for physics analyses.
The main features of the DQM system, as operated during the CRAFT data taking period, are
shown in Fig. 6. A detailed experience report can be found elsewhere [31]. DQM for data
taking is performed in two different stages, with very small latency during the data taking
(online) and after prompt reconstruction of the events (offline).
6.1
Online monitoring
The online DQM system consists of a number of consumer applications, labelled as DQM in
Fig. 6, usually one per subsystem, which receive event data through a storage manager event
server and fill histograms at an event rate of 10–15 Hz.
In addition, a small number of histograms is filled in the HLT filter units, which process events
at up to 100 kHz. These histograms are shipped out to DQM consumer applications periodically. Identical histograms are summed up across different filter units in the storage manager.
All the histogram data, including alarm states based on quality test results, are made available
to a central DQM graphical user interface server (GUI) for visualisation in real time [32], and
are stored in a ROOT file periodically during the run. At the end of the run the final archived
results are uploaded to a large disk pool. Eventually, the files are merged and backed up to
tape.
All DQM data processing components and the event display start and stop automatically under centralised CMS run control [33]. The web servers for the DQM GUI [32] and web-based conditions monitoring (WBM) [34] are long-lived server systems which are independent of the run control.

Figure 6: Sketch of the DQM system, consisting of branches for online and offline monitoring.
6.2 Offline monitoring
The offline DQM system accumulates monitoring data from several workflows in CMS, namely
Tier-0 prompt reconstruction, re-reconstruction at the Tier-1s and the validation of the alignment and calibration results, the software releases, and all the simulated data.
CMS has standardised the monitoring of the event data processing into a two-step workflow:
1. The histogram monitor elements are created and filled with CMS event data information.
The histograms are stored along with the processed events into the normal output event
data files. When the CMS data processing systems merge output files, the histograms are
automatically summed to form the first partial result.
2. At the end of the data processing the histograms are extracted from the job output data
files and summed together across entire runs to yield full event statistics. The final histograms are then used to calculate efficiencies and are checked for quality, by making
comparisons with reference distributions. The histograms, certification results, and quality test results are saved into a ROOT file, which is then uploaded to a central DQM GUI
web server. In the web server, the files are merged and backed up to tape; recent data are
kept cached on disk for several months.
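A minimal sketch of this two-step harvesting, with plain arrays standing in for the ROOT histograms actually used and a simple χ² comparison standing in for the standardised quality tests, could look as follows:

```python
# Step 1: partial histograms from individual jobs are summed bin by bin when
# output files are merged. Step 2: the run-level sum is compared to a reference.
import numpy as np

job_histograms = [
    np.array([ 98, 203, 305, 204,  99]),   # partial result from job 1
    np.array([102, 198, 295, 199, 101]),   # partial result from job 2
]
run_histogram = np.sum(job_histograms, axis=0)       # full-run statistics

reference = np.array([200, 400, 600, 400, 200])      # reference distribution
expected = reference * run_histogram.sum() / reference.sum()
chi2_per_bin = float(np.mean((run_histogram - expected) ** 2 / expected))

verdict = "OK" if chi2_per_bin < 2.0 else "ALARM"
print("run histogram:", run_histogram)
print(f"chi2 per bin vs reference: {chi2_per_bin:.2f} -> {verdict}")
```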
Online and offline DQM GUI web servers provide a common interface, and are linked together
as one entity, giving the entire world-wide collaboration access to inspection and analysis of all
DQM data at one central location.
6.3 Data certification
CMS uses a database with a front-end web application, the run registry, as the central workflow
tracking and bookkeeping tool to manage the creation of the final physics dataset certification
result. The run registry is both a user interface managing the workflow and a persistent storage
of the information.
The work to evaluate the detector and physics object data quality is organised in shifts. The
shift persons follow instructions specifically tailored to catch problems. The observations are
entered in the run registry database where they are available to detector and physics object
groups, as well as the whole collaboration, for inspection and confirmation. Final certification results are produced at regular sign-off meetings, typically once per week, before they are
delivered to the experiment by storage in the data bookkeeping system (DBS) [35]. The information in DBS is associated with the primary datasets and is input to the creation of analysis
datasets by analysis groups.
Online shifts take place continuously during detector operation at the CMS detector site. Offline DQM shifts are carried out during daytime at the CMS centre [36] on the main CERN site. The
shift activities are also supported by regular remote shifts, two shifts per day at Fermilab and
one shift per day at DESY, at the local CMS centres [37].
Table 4: Cosmic ray triggered events collected during CRAFT in the Cosmics primary dataset
in periods when the magnetic field was at the nominal value of 3.8 T with the listed detector
system (or combination of systems) operating nominally and passing offline quality criteria.
The minimum configuration required for data taking was that at least the DT barrel muon
chambers and the strip tracker passed the quality criteria. The other subdetectors were allowed
to go out of data taking for tests.
Quality flag | Events (millions)
(none) | 315
Trigger | 240
Pixel Tracker | 290
Strip Tracker | 270
ECAL | 230
HCAL | 290
RPC | 270
CSC | 275
DT | 310
DT+Strip | 270
All | 130

6.4 Prompt data analysis and feedback
The central DQM shift activity is complemented by the prompt feedback groups, one for each
subsystem, which consist of subsystem experts located at the CMS centre. These groups analyse the prompt reconstruction output and integrate the data quality information in a timely
way. The CMS CERN Analysis Facility (CAF) [38], providing large CPU power and fast access
to the data stored on a local CASTOR disk pool [15], was heavily used for such analyses.
The run-by-run DQM results were used to monitor the time evolution of the detector behaviour.
Any observed change was carefully checked and tracked. As an example, Fig. 7 shows the evolution of the relative number of hits on tracks in the tracker inner barrel detector as a function
of the run number. The step in the distribution is due to improved alignment parameter errors
applied to the later data. During reprocessing, the improved parameters were applied to all
data, thus removing the step.
Table 4 shows the number of good events in the Cosmics primary dataset, based on the quality assignment described above. The breakdown for each subsystem is given when operating
nominally and passing the offline selection criteria. Most of the subsystems individually declared more than 85% of the recorded data in the Cosmics primary dataset as good. Having declared a detector component as good does not entail having detected a cosmic ray muon within its fiducial volume. Figure 8 shows the accumulated number of cosmic ray triggered
events as a function of run number with the magnet at its operating central field of 3.8 T, where
the minimal configuration of the silicon strip tracker and the DT muon system delivering data
certified for further offline analysis was required. It was not required to keep the other systems
in the configuration. A total of 270 million such events were collected.
7 Alignment and Calibration
This section describes the workflows used to compute and improve the alignment and calibration constants. While some calibrations are already performed online at the CMS detector site,
this section will focus on the workflows performed at the Tier-0 site and at the CAF.

Figure 7: Evolution of the relative number of hits on muon tracks in the tracker inner barrel detector. The step in the distribution is due to improved alignment parameter errors applied to the later data.

Figure 8: Integrated statistics vs. run collected during CRAFT in the Cosmics dataset for runs with good quality flags from the drift tubes and the silicon strip tracker. Only runs with magnetic field of 3.8 T have been considered.
The basic offline workflow for alignment and calibration in CRAFT was a slightly simplified
version of the full model for collisions, and it is illustrated in Fig. 9. Commissioning experience
from this workflow in the context of a challenge with simulated events has been reported elsewhere [39]. Event information relevant for alignment and calibration was streamed from the
CMS detector site via the standard physics event stream (Section 3), and via a special calibration
stream and streams with special event content, labeled “AlCaRaw” (described below), dedicated to particular calibration procedures. Events from these streams passed the conversion
to the ROOT-based event data format at the Tier-0 (Section 4.1) and in the case of the physics
stream entered the prompt reconstruction process. The reconstructed data were then skimmed
to create a series of “AlCaReco” datasets that were transferred to the CAF to be used as input to
alignment and calibration algorithms. The AlCaReco datasets are designed to contain only the
minimal amount of information required by the associated alignment and calibration workflows. The skims producing them performed both event selection, starting from a selection
based on HLT bits, and reduction of event content. The alignment and calibration workflows,
performed at the CAF, used the AlCaReco datasets to generate alignment and calibration constants that are validated and uploaded to the conditions database. Re-reconstruction at the
Tier-1 sites, using the new constants, also generated new AlCaReco datasets that were used in
turn as input to the next series of improvements on alignment and calibration constants.
7.1 AlCaRaw streams
Some calibration procedures in CMS require a very high event rate of typically a few kHz in
order to achieve the targeted precision in the time scale of a few days. These events would
saturate the available bandwidth between the CMS cavern and the Tier-0 site if the full event
content were transferred. This concerns in particular the φ symmetry calibration procedures
for ECAL and HCAL, and the ECAL calibration with neutral pion decays. The solution is the
creation of special data streams called AlCaRaw already within dedicated high-rate triggers
at the HLT farm, which contain only the minimal information needed for these workflows.
These AlCaRaw streams have been successfully generated for a significant part of the CRAFT
run, accumulating over 160 million events (for each of ECAL and HCAL) for the φ-symmetry streams. Detailed information about the AlCaRaw streams produced in CRAFT can be found in Ref. [40].

Figure 9: Offline workflow for alignment and calibration used during CRAFT.
7.2 AlCaReco skims
During the global runs that CMS performed in 2008, the number of AlCaReco skims produced
as part of prompt Tier-0 processing has been steadily increased. The set of AlCaReco datasets
that have been produced in CRAFT is listed in Table 5. This list also contains datasets that are
not meaningful for calibration with cosmic muon data and have been included only for commissioning the production system. As a result, nine AlCaReco skims have been created in parallel, which is comparable with the maximum number anticipated for a given PD during LHC
collisions, thus constituting an important scaling test for the CMS alignment and calibration
framework. The number of events given in the table, which corresponds to the output of the
prompt processing, reflects the selection mechanism. For example, the MuAlStandaloneCosmics dataset did not require tracker information and thus selected a large part of the overall
CRAFT event sample. The TkAlCosmics0T dataset, which was originally designed for runs
without field, required a trajectory reconstructed in the tracker and thus selected slightly more
than one percent of the standalone muon sample. The TkAlCosmicsHLT skim required the
corresponding HLT trigger bit and selected only particles with a transverse momentum cut
above 4 GeV/c, which resulted in a slightly smaller sample. Low noise thresholds allowed the
population of the HcalCalDiJets sample.
7.3 Alignment and calibration workflows, validation and sign-off of constants
All workflows deriving alignment and calibration constants have been performed at the CAF
based on the AlCaReco datasets. The derived constants have been uploaded to the CMS conditions database. The management of conditions within the database is explained in more detail in Section 8. Standardised validation procedures have been applied to certify the correctness of the constants. The validation results have been reviewed in a formalised sign-off procedure, to ensure the quality of database constants that are used for any central processing. Special care has been taken regarding the interdependencies of the various workflows.

Table 5: AlCaReco datasets produced in CRAFT.

Dataset | Number of events | Purpose
TkAlCosmicsHLT | 4.3 M | Tracker alignment
TkAlCosmics0T | 4.9 M | Tracker alignment (no pT cut)
MuAlStandaloneCosmics | 288 M | Muon standalone alignment
MuAlBeamHaloOverlaps | 3.3 M | Muon endcap alignment
MuAlGlobalCosmics | 5.5 M | Muon system alignment w.r.t. tracker
HcalCalHOCosmics | 313 M | HCAL HO calibration
HcalCalDiJets | 67 M | HCAL calibration
MuAlCalIsolatedMu | 52 M | Muon system alignment, DT calibration
RpcCalHLT | 241 M | DT calibration, RPC monitoring
7.4 Conditions used for reprocessing campaigns
The first comprehensive alignment and calibration campaign started immediately after the end
of data taking. The set of constants used for the first reprocessing of the full CRAFT dataset
included the following: tracker alignment and alignment error estimates, strip tracker gain calibration, bad strip and bad fiber maps, pixel tracker gain and pedestal constants, internal track-based alignment constants for the barrel muon DT chambers [41], global longitudinal positions
and tilt angles from the optical alignment systems for most of the endcap muon chambers [42],
muon DT inter channel synchronization, time pedestal and noise calibration constants [43],
gain and pedestal calibration constants for both ECAL and HCAL. All constants were ready,
validated, signed-off and included in the official set of conditions on 20 November 2008, about
two weeks after the end of data taking.
The second pass of alignment and calibration was performed after the results of the first reprocessing became available around the middle of January 2009. Tracker gain calibration was
updated and calibration of the Lorentz angle was added. Tracker alignment was further improved [44], benefiting also from the Lorentz angle calibration, and from an increased number
of pixel hits available due to the updated alignment error estimates. The muon chambers were
aligned relative to the tracker with global tracks [41]; in addition, the optical alignment of the
endcap system was extended [42]. Drift-tube calibration was updated following an improved
reconstruction in the inner chambers of the wheels closest to the endcaps in the presence of the
magnetic field [43]. HCAL pedestals and gains were improved, and inter-calibration constants
for the ECAL endcaps were updated based on laboratory measurements combined with information from laser data taken during CRAFT [45]. These constants were used in the second
reprocessing of the CRAFT data.
8 Conditions
The CMS conditions database system relies on three databases for storing non-event data:
1. OMDS (Online Master Database System) is in the online network at the detector site; it
stores the data needed for the configuration and proper settings of the detector (configuration data), and the conditions data produced directly from the front-end electronics. For
example, the Detector Control System (DCS) information is stored with the ORACLE interface provided by PVSS [46].
2. ORCON (Offline Reconstruction Condition DB Online subset) is also located at the detector site. It stores all the condition data, including calibration and alignment constants, that
are needed for the reconstruction of physics quantities in the HLT, as well as for detector
performance studies. These are a small subset of all the online constants. These data are
written using the POOL-ORA [47] technology and are retrieved by the HLT programs as
C++ objects.
3. ORCOFF (Offline Reconstruction Condition DB Offline subset) is located at the CERN
computing centre. It contains a copy of the information in ORCON, kept in sync through
ORACLE streaming [48]. Data are retrieved by the reconstruction algorithms as C++
objects.
In order to guarantee consistency of the data in ORCON and ORCOFF, it is one of the CMS policies to write any condition data needed for offline purposes to the ORCON database. ORACLE
streaming provides the transfer from ORCON to ORCOFF.
8.1 Interval of validity for conditions data and the global tag
All conditions data are organised by condition database tags. A tag points to one or more instances of a given type of condition data (e.g. ECAL pedestals), each of which has an associated
interval of validity (IOV). The IOV is a range of events which is contiguous in time, for which
that version of the condition data is valid. This range is normally defined in terms of run numbers, but can also be defined in terms of absolute time. While some conditions are only valid
for the specific run for which they are measured (e.g. beamspot, pedestals), other conditions
can be valid for any run (e.g. calorimeter intercalibration constants). Each payload object in
ORCON/ORCOFF is unambiguously indexed by its IOV and a tag.
The full consistent set of conditions which needs to be accessed by the HLT and offline reconstruction software is defined in a global tag, which consists of one tag for each type of condition
data. For a given event, the reconstruction algorithms query the corresponding conditions data
by means of the global tag.
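Conceptually, each tag holds a run-ordered list of payloads with their IOVs, and the global tag selects one tag per record; the sketch below models this lookup with plain Python structures (the tag names and payloads are invented, and the real payloads are POOL-ORA C++ objects stored in ORACLE):

```python
# Conceptual model of conditions tags, intervals of validity and a global tag.
import bisect

# Each tag holds (first_valid_run, payload) pairs ordered by run number; a payload
# is valid from its first run until the next entry begins.
tags = {
    "EcalPedestals_v2": [(1, "peds_A"), (66600, "peds_B"), (68000, "peds_C")],
    "TrackerAlignment_craft": [(1, "align_0T"), (66604, "align_3.8T")],
}

global_tag = {                      # one tag per type of condition data
    "EcalPedestals": "EcalPedestals_v2",
    "TrackerAlignment": "TrackerAlignment_craft",
}

def get_condition(record, run):
    """Return the payload of `record` valid for `run`, resolved through the global tag."""
    iovs = tags[global_tag[record]]
    starts = [first_run for first_run, _ in iovs]
    return iovs[bisect.bisect_right(starts, run) - 1][1]

print(get_condition("EcalPedestals", 67000))      # -> peds_B
print(get_condition("TrackerAlignment", 66700))   # -> align_3.8T
```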
8.2 Population of the conditions database
The flow of conditions data is illustrated in Fig. 10. Conditions data that are produced online
are initially stored in OMDS. The subset of online conditions that are required for the HLT and
offline reconstruction is extracted and sent to ORCON. This data transfer is operated using a
framework named PopCon (Populator of Condition [49]). PopCon encapsulates the relational
data as POOL-ORA objects and adds meta-data information (the tag to which the object belongs
and the IOV), so that the data is correctly indexed for reading by the HLT and offline software.
Moreover, PopCon has the additional functionality of logging specific information about any
transaction writing to the ORCON database.
Further conditions are produced by the alignment and calibration workflows operated offline,
as described in Section 7; these are directly uploaded to the ORCON database, again using
PopCon.
Figure 10: Conditions databases architecture.
Finally, all data in ORCON are transferred to ORCOFF, which is the database used for all offline
processing and analysis, via ORACLE streaming.
For massively parallel read-access, the ORCON and ORCOFF databases are interfaced with a
cache system referred to as “FroNTier,” which in the case of ORCOFF is the mechanism used
to distribute conditions data to the Tier-1 and Tier-2 centres outside CERN. Caching servers are
used to cache requested objects to avoid repeated access to the same data, thus significantly improving the performance and greatly reducing the load on the central database servers. Further
details can be found in Ref. [8].
8.3 Database population during CRAFT
During the CRAFT run, the majority of conditions data were transferred to the offline database
using the PopCon application. A central procedure, based on an automatic uploader via a
dedicated machine in the online network, was successfully deployed during 2008 [49].
A set of automatic jobs was set up for each sub-detector, in order to both populate the ORCON
database and monitor any transaction to it. Each automatic job is associated with a “watchdog”
tool that monitors its status. A dedicated web interface was set up on a CMS web server in
order to monitor all database transactions. PopCon was used by almost all sub-detectors and
an average of one hundred PopCon applications per day were run during CRAFT.
During the entire duration of CRAFT the total amount of conditions data written to ORCON
was about 1 TB. ORCON-ORCOFF streaming and the FroNTier caching mechanism operated
smoothly throughout CRAFT.
9 Analysis Model and Tool
In this section, the model and the tool to analyse the recorded and reconstructed data are described. CMS uses a distributed data-analysis model [9] mostly based on the WLCG Grid
infrastructure. It also supports low-latency access to data on the CAF for prompt analysis and
calibration. The CMS analysis model is data-location driven, i.e. the user analysis runs where
data are located. The related workflow is mainly characterised by the following steps: interactive code development using small data samples; job preparation and configuration to run over
higher statistics (hence to access the whole dataset or a significant part of it); and interactive
analysis of the obtained results. With the increasing complexity of the computing infrastructure, the implementation of such a workflow became more and more difficult for the end user.
In order to provide physicists with efficient access to the distributed data while hiding the
underlying complexity, CMS developed and deployed a dedicated tool named CMS Remote
Analysis Builder (CRAB) [50], which is described in the following.
9.1 CRAB architecture
CRAB is the official CMS tool for distributed analysis. The system, which guarantees interoperability with the various grid flavours and batch submission systems, has evolved into
a client-server architecture. The client provides the user interface and is a command line
Python [20] application which mainly takes care of the interaction with the local environment and packages the private user libraries and code, in order to replicate the same local configuration remotely. The server is the intermediate service responsible for automating the user analysis workflow with resubmissions, error handling, and output retrieval, thus leaving to the user only the preparation of the configuration file. The server also notifies the user of the output availability. The server architecture is made of a set of independent components communicating
asynchronously through a shared messaging service and cooperating to carry out the analysis
workflow.
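The component-based server pattern described above can be sketched as follows (Python, illustrative only and not the actual CRAB server code): independent components exchange messages through a shared queue, so that a failed job triggers a resubmission request without user intervention.

import queue
import threading

messages = queue.Queue()               # stand-in for the shared messaging service

def tracker():
    """Watches job status messages and requests resubmission on failure."""
    while True:
        msg = messages.get()
        if msg is None:                # shutdown signal
            break
        if msg["status"] == "failed" and msg["retries"] < 3:
            # A submitter component (not shown) would consume this request.
            messages.put({"status": "submit", "job": msg["job"],
                          "retries": msg["retries"] + 1})
        elif msg["status"] == "done":
            print("output ready for job", msg["job"])

worker = threading.Thread(target=tracker)
worker.start()
messages.put({"status": "failed", "job": 42, "retries": 0})   # reported by another component
messages.put({"status": "done", "job": 42})
messages.put(None)
worker.join()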
Figure 11: CRAFT job distributions as a function of time. Left: daily distribution of analysis jobs submitted using CRAB and accessing CRAFT data. Grid (dark shading, red) and
CAF (light shading, yellow) activities are shown. (Taken from monitoring sources). Right:
CRAFT jobs submitted only at CAF (with and without CRAB). The upper line shows the cumulative number of jobs, the lower line shows the number of jobs submitted each week. The
time window extends well beyond the end of CRAFT data taking to cover the extensive period
of analysis.
Figure 12: Cumulative plot of number of different users accessing CRAFT data as a function
of time. Left: users using CRAB to submit Grid (dark shading, red) and CAF (light shading,
yellow) jobs. (Taken from monitoring sources). Right: number of users submitting jobs only at
CAF (with and without CRAB). The lower line shows the number of users per week, the upper
line the integrated number over a long period. The time window extends well beyond the end
of CRAFT data taking to cover the extensive period of analysis.
9.2 CRAFT analysis activity
The CRAFT data have been analysed both at CERN (using the local batch system at the CAF),
and on the Grid, making use of distributed resources (Tier-2 sites). While access to data at Tier-2 sites has been performed exclusively through CRAB, the CAF queues have been used to run both CRAB
and non-CRAB jobs. The large fraction of non-CRAB jobs executed at the CAF is partially due
to calibration and alignment workflows, which are for the time being not integrated within
the CRAB framework. The collaboration is currently evaluating the best strategy for a proper
integration of such workflows.
9.3 Analysed data volume
From October 2008 to the beginning of May 2009 more than 2 million analysis jobs accessed
CRAFT data, including both CRAB and non-CRAB jobs. The quoted value takes into account
both CAF and Grid activity (Fig. 11). Figure 12 shows the cumulative number of distinct users who performed CRAFT data analysis in the considered time window. The shapes, combined with the daily job distributions, give a clear indication that the user community grew continuously. Over the same time interval, more than 200 distinct users in total are estimated to have performed CRAFT analysis activities. As shown in Fig. 13, the overall efficiency of CRAFT analysis jobs is approximately 60%, while local submissions on the CAF were 85% efficient. The main source of failure for Grid CRAFT jobs is remote stage-out problems, which will be addressed in the future by a new workload management infrastructure. In general, about 10% of jobs fail due to problems within the user code. No relevant bottlenecks were experienced by the system during CRAFT.
10 Summary
Data taking with cosmic ray muons during the CRAFT exercise in 2008, which lasted about
a month, has provided a wealth of experience in operating the workflows from recording to
analysing the data. The online system and the high level trigger have been operated continuously, and besides stress-testing the general reliability, major functionalities have been exercised. These include the definition of streams and primary datasets, and the interaction with
associated HLT menus, for which efficient online-offline interplay is essential.
Figure 13: Success rate of CRAFT analysis jobs submitted using CRAB. Left: jobs submitted only at CAF. Right: jobs submitted through the Grid.
Data handling has been confronted with almost the full qualitative complexity expected for
collisions. Most of the Tier-0 related processing has been handled with the final software infrastructure and performed very well. The setup for collisions will still require the ramp-up of
the express stream infrastructure for prompt calibration and monitoring, and inclusion of alignment and calibration skims into the Tier-0 processing system. Data distribution via the PhEDEx
system performed very well overall. Re-reconstruction at the Tier-1 sites was performed with
adequate turn-around time, but showed the need for a system with full accountability, which
will be introduced by spring 2010.
Event reconstruction has used various algorithms dedicated to cosmic ray muons, but in addition used CRAFT to commission the methodology for collisions. A comprehensive set of reconstructed objects has been provided to support the analysis of the CRAFT data. The workflow
for fast deployment of code corrections presented some organisational challenges, and while
solutions were generally available quickly, several improvements for future operation were
implemented. Data quality monitoring was performed both at the online and offline levels,
and regular DQM shifts were run continuously. Remote CMS centres fully participated in this
process. Further certification and validation of the data were performed by prompt feedback
groups who analysed the output of the prompt reconstruction, and discovered time-dependent
developments which were correlated to intentional changes in the detector conditions.
Essentially all alignment and calibration constants accessible with cosmic ray muon data taking have been determined during CRAFT, thus putting the corresponding framework through
a very comprehensive test. The associated organisational challenges, involving a large number of concurrent workflows whose interdependencies had to be properly respected, were successfully addressed. Several reprocessing campaigns with successively improved constants have been
performed, which provided a very high data-quality sample for cosmic ray analysis. The conditions database, which is a sophisticated network of commercial relational database management servers with proxies and mechanisms to provide distributed access, proved to be a solid
basis for all conditions-related operations.
The large sample of cosmic ray data also provided a realistic test of the distributed analysis system. Limitations in job execution efficiency were traced to remote file staging issues, which will
be addressed by future improvements in the workload management system. Overall, CRAFT
has shown that CMS has highly reliable methods at its disposal to make data samples available
with short latency for analysis at remote centres.
In conclusion, CRAFT has demonstrated the proper functioning of the overall CMS workflow
machinery to a very high degree. While the challenge has been instrumental in identifying
individual areas which need some further improvements, the overall system is well designed
and is expected to scale smoothly to data taking with LHC collisions.
Acknowledgements
We thank the technical and administrative staff at CERN and other CMS Institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ,
and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia);
Academy of Finland, ME, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG,
and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India);
IPM (Iran); SFI (Ireland); INFN (Italy); NRF (Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); PAEC (Pakistan); SCSR (Poland); FCT (Portugal); JINR
(Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MST and MAE (Russia); MSTDS (Serbia);
MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK
and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA). Individuals have received
support from the Marie-Curie IEF program (European Union); the Leventis Foundation; the A.
P. Sloan Foundation; and the Alexander von Humboldt Foundation.
References
[1] R. Adolphi et al., “The CMS experiment at the CERN LHC”, JINST 3 (2008) S08004.
doi:10.1088/1748-0221/3/08/S08004.
[2] L. Evans and P. Bryant, eds., “LHC Machine”, JINST 3 (2008) S08001.
doi:10.1088/1748-0221/3/08/S08001.
[3] CMS Collaboration, “The CMS Cosmic Run at Four Tesla”, submitted to JINST (2009).
[4] P. Sphicas, ed., “CMS: The TriDAS project. Technical design report, Vol. 2: Data
acquisition and high-level trigger”, CERN-LHCC-2002-026.
[5] C. Jones et al., “The new CMS event data model and framework”, in Proceedings for
Computing in High-Energy Physics (CHEP ’06), Mumbai, India. Feb., 2006.
[6] G. Bauer et al., “The run control and monitoring system of the CMS experiment”, PoS
ACAT (2007) 026. doi:10.1088/1742-6596/119/2/022010.
[7] E. Meschi, “High level trigger configuration and handling of trigger tables in the CMS
filter farm”, J. Phys. Conf. Ser. 119 (2008) 022011.
doi:10.1088/1742-6596/119/2/022011.
[8] B. J. Blumenfeld, D. Dykstra, L. Lueking et al., “CMS conditions data access using
FroNTier”, J. Phys. Conf. Ser. 119 (2008) 072007.
doi:10.1088/1742-6596/119/7/072007.
[9] CMS Collaboration, “CMS: The computing project. Technical design report”,
CERN-LHCC-2005-023.
[10] “Worldwide LHC Computing Grid (WLCG)”.
http://lcg.web.cern.ch/LCG/public/default.htm.
[11] M. Aderholz et al., “Models of networked analysis at regional centres for LHC
experiments (MONARC). Phase 2 report”, CERN/LCB 2000-001.
[12] R. Brun and F. Rademakers, “ROOT: An object oriented data analysis framework”, Nucl.
Instrum. Meth. A389 (1997) 81–86. See also http://root.cern.ch.
doi:10.1016/S0168-9002(97)00048-X.
[13] D. Mason, “Remote Operation of the global CMS Data and Workflows”. talk given at the
Computing in High-Energy Physics Conference (CHEP ’09), Prague, Czech Republic,
March, 2009.
[14] “CERN Batch Services (LSF)”. http://batch.web.cern.ch/batch.
[15] “CERN Advanced STORage Manager 2 (CASTOR2)”.
http://castor.web.cern.ch/castor/.
[16] S. Wakefield et al., “Large Scale Job Management and Experience in Recent Data
Challenges within the LHC CMS experiment”, in Proceedings for XII Advanced Computing
and Analysis Techniques in Physics Research, Erice, Italy. Nov., 2008.
[17] R. Egeland et al., “Data transfer infrastructure for CMS data taking”, in Proceedings for XII
Advanced Computing and Analysis Techniques in Physics Research, Erice, Italy. Nov., 2008.
[18] “The gLite File Transfer Service”.
http://egee-jra1-dm.web.cern.ch/egee-jra1-dm/FTS/.
[19] “Storage Resource Management (SRM) Working Group”.
https://sdm.lbl.gov/srm-wg/.
[20] “Python Programming Language”. http://www.python.org.
[21] “MySQL”. http://www.mysql.com.
[22] “Enabling Grids for E-sciencE (EGEE)”. http://www.eu-egee.org.
[23] “Nordugrid”. http://www.nordugrid.org.
[24] “Open Science Grid”. http://www.opensciencegrid.org.
[25] CMS Collaboration, “Time Reconstruction and Performance of the CMS Crystal
Electromagnetic Calorimeter”, submitted to JINST (2009).
[26] D. Benedetti et al., “Tracking and Alignment with the Silicon Strip Tracker at the CMS
Magnet Test Cosmic Challenge”, CMS Note 2007/030 (2007).
[27] T. Christiansen, “The CMS magnet test and cosmic challenge”, arXiv:0805.1882.
[28] W. Adams et al., “Track Reconstruction with Cosmic Ray Data at the Tracker Integration
Facility”, CMS Note 2009/003 (2009).
[29] CMS Collaboration, “Alignment of The CMS inner tracking system with cosmic ray
particles”, submitted to JINST (2009).
[30] CMS Collaboration, “Muon Reconstruction Performance”, submitted to JINST (2009).
[31] L. Tuura et al., “CMS data quality monitoring: systems and experiences”, in Proceedings
for Computing in High-Energy Physics (CHEP ’09), Prague, Czech Republic. March, 2009.
[32] L. Tuura et al., “CMS data quality monitoring web service”, in Proceedings for Computing
in High-Energy Physics (CHEP ’09), Prague, Czech Republic. March, 2009.
[33] G. Bauer et al., “The Run Control System of the CMS Experiment”, in Proceedings for
Computing in High-Energy Physics (CHEP ’07), Victoria B.C., Canada. Sept., 2007.
[34] W. Badgett et al., “CMS Online Web-Based Monitoring and Remote Operations”, in
Proceedings for Computing in High-Energy Physics (CHEP ’07), Victoria B.C., Canada. Sept.,
2007.
[35] A. Afaq et al., “The CMS dataset bookkeeping service”, J. Phys. Conf. Ser. 119 (2008)
072001. doi:10.1088/1742-6596/119/7/072001.
[36] E. Gottschalk et al., “CMS Centres Worldwide: a New Collaborative Infrastructure”, in
Proceedings for Computing in High-Energy Physics (CHEP ’09), Prague, Czech Republic.
March, 2009.
[37] E. Gottschalk et al., “Collaborating at a Distance: Operations Centres, Tools, and Trends”,
in Proceedings for Computing in High-Energy Physics (CHEP ’09), Prague, Czech Republic.
March, 2009.
[38] P. Kreuzer et al., “Building and Commissioning of the CMS CERN Analysis Facility
(CAF)”, in Proceedings for Computing in High-Energy Physics (CHEP ’09), Prague, Czech
Republic. March, 2009.
[39] D. Futyan et al., “The CMS Computing, Software and Analysis Challenge”, in Proceedings
for Computing in High-Energy Physics (CHEP ’09), Prague, Czech Republic. March, 2009.
[40] CMS Collaboration, “Performance of the CMS High Level Trigger”, submitted to JINST
(2009).
[41] CMS Collaboration, “Alignment of the CMS Muon System with cosmic ray and
beam-halo tracks”, submitted to JINST (2009).
[42] CMS Collaboration, “Aligning the CMS Muon Chambers with the hardware alignment
system during the CRAFT exercise”, submitted to JINST (2009).
[43] CMS Collaboration, “Results on the DT Calibration and Drift Velocity analysis with
CRAFT data”, submitted to JINST (2009).
[44] CMS Collaboration, “Alignment of the CMS Silicon Tracker During Commissioning with
Cosmic Ray Particles”, submitted to JINST (2009).
[45] CMS Collaboration, “Performance and Operation of the CMS Crystal Electromagnetic
Calorimeter”, submitted to JINST (2009).
[46] M. Gonzalez-Berges, “The joint controls project framework”,
arXiv:physics/0305128.
[47] Z. Xie et al., “Pool persistency framework for LHC: New developments and CMS
applications”, in Proceedings for 4th International Workshop on Frontier Science, Milan,
Bicocca. Sept., 2005.
[48] R. Urbano, “Oracle Database 2 Day + Data Replication and Integration Guide, 11g”.
Oracle, release 1 (11.1) edition, 2008.
[49] M. De Gruttola et al., “First experience in operating the population of the condition
database of the CMS experiment”, in Proceedings for Computing in High-Energy Physics
(CHEP ’09), Prague, Czech Republic. March, 2009.
[50] D. Spiga et al., “The CMS Remote Analysis Builder (CRAB)”, Lect. Notes Comput. Sci. 4873
(2007) 580–586. doi:10.1007/978-3-540-77220-0_52.
A The CMS Collaboration
Yerevan Physics Institute, Yerevan, Armenia
S. Chatrchyan, V. Khachatryan, A.M. Sirunyan
Institut für Hochenergiephysik der OeAW, Wien, Austria
W. Adam, B. Arnold, H. Bergauer, T. Bergauer, M. Dragicevic, M. Eichberger, J. Erö, M. Friedl,
R. Frühwirth, V.M. Ghete, J. Hammer1 , S. Hänsel, M. Hoch, N. Hörmann, J. Hrubec, M. Jeitler,
G. Kasieczka, K. Kastner, M. Krammer, D. Liko, I. Magrans de Abril, I. Mikulec, F. Mittermayr,
B. Neuherz, M. Oberegger, M. Padrta, M. Pernicka, H. Rohringer, S. Schmid, R. Schöfbeck,
T. Schreiner, R. Stark, H. Steininger, J. Strauss, A. Taurok, F. Teischinger, T. Themel, D. Uhl,
P. Wagner, W. Waltenberger, G. Walzel, E. Widl, C.-E. Wulz
National Centre for Particle and High Energy Physics, Minsk, Belarus
V. Chekhovsky, O. Dvornikov, I. Emeliantchik, A. Litomin, V. Makarenko, I. Marfin,
V. Mossolov, N. Shumeiko, A. Solin, R. Stefanovitch, J. Suarez Gonzalez, A. Tikhonov
Research Institute for Nuclear Problems, Minsk, Belarus
A. Fedorov, A. Karneyeu, M. Korzhik, V. Panov, R. Zuyeuski
Research Institute of Applied Physical Problems, Minsk, Belarus
P. Kuchinsky
Universiteit Antwerpen, Antwerpen, Belgium
W. Beaumont, L. Benucci, M. Cardaci, E.A. De Wolf, E. Delmeire, D. Druzhkin, M. Hashemi,
X. Janssen, T. Maes, L. Mucibello, S. Ochesanu, R. Rougny, M. Selvaggi, H. Van Haevermaet,
P. Van Mechelen, N. Van Remortel
Vrije Universiteit Brussel, Brussel, Belgium
V. Adler, S. Beauceron, S. Blyweert, J. D’Hondt, S. De Weirdt, O. Devroede, J. Heyninck, A. Kalogeropoulos, J. Maes, M. Maes, M.U. Mozer, S. Tavernier, W. Van Doninck1 , P. Van Mulders,
I. Villella
Université Libre de Bruxelles, Bruxelles, Belgium
O. Bouhali, E.C. Chabert, O. Charaf, B. Clerbaux, G. De Lentdecker, V. Dero, S. Elgammal,
A.P.R. Gay, G.H. Hammad, P.E. Marage, S. Rugovac, C. Vander Velde, P. Vanlaer, J. Wickens
Ghent University, Ghent, Belgium
M. Grunewald, B. Klein, A. Marinov, D. Ryckbosch, F. Thyssen, M. Tytgat, L. Vanelderen,
P. Verwilligen
Université Catholique de Louvain, Louvain-la-Neuve, Belgium
S. Basegmez, G. Bruno, J. Caudron, C. Delaere, P. Demin, D. Favart, A. Giammanco,
G. Grégoire, V. Lemaitre, O. Militaru, S. Ovyn, K. Piotrzkowski1 , L. Quertenmont, N. Schul
Université de Mons, Mons, Belgium
N. Beliy, E. Daubie
Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil
G.A. Alves, M.E. Pol, M.H.G. Souza
Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
W. Carvalho, D. De Jesus Damiao, C. De Oliveira Martins, S. Fonseca De Souza, L. Mundim,
V. Oguri, A. Santoro, S.M. Silva Do Amaral, A. Sznajder
Instituto de Fisica Teorica, Universidade Estadual Paulista, Sao Paulo, Brazil
T.R. Fernandez Perez Tomei, M.A. Ferreira Dias, E. M. Gregores2 , S.F. Novaes
Institute for Nuclear Research and Nuclear Energy, Sofia, Bulgaria
K. Abadjiev1 , T. Anguelov, J. Damgov, N. Darmenov1 , L. Dimitrov, V. Genchev1 , P. Iaydjiev,
S. Piperov, S. Stoykova, G. Sultanov, R. Trayanov, I. Vankov
University of Sofia, Sofia, Bulgaria
A. Dimitrov, M. Dyulendarova, V. Kozhuharov, L. Litov, E. Marinova, M. Mateev, B. Pavlov,
P. Petkov, Z. Toteva1
Institute of High Energy Physics, Beijing, China
G.M. Chen, H.S. Chen, W. Guan, C.H. Jiang, D. Liang, B. Liu, X. Meng, J. Tao, J. Wang, Z. Wang,
Z. Xue, Z. Zhang
State Key Lab. of Nucl. Phys. and Tech., Peking University, Beijing, China
Y. Ban, J. Cai, Y. Ge, S. Guo, Z. Hu, Y. Mao, S.J. Qian, H. Teng, B. Zhu
Universidad de Los Andes, Bogota, Colombia
C. Avila, M. Baquero Ruiz, C.A. Carrillo Montoya, A. Gomez, B. Gomez Moreno, A.A. Ocampo
Rios, A.F. Osorio Oliveros, D. Reyes Romero, J.C. Sanabria
Technical University of Split, Split, Croatia
N. Godinovic, K. Lelas, R. Plestina, D. Polic, I. Puljak
University of Split, Split, Croatia
Z. Antunovic, M. Dzelalija
Institute Rudjer Boskovic, Zagreb, Croatia
V. Brigljevic, S. Duric, K. Kadija, S. Morovic
University of Cyprus, Nicosia, Cyprus
R. Fereos, M. Galanti, J. Mousa, A. Papadakis, F. Ptochos, P.A. Razis, D. Tsiakkouri, Z. Zinonos
National Institute of Chemical Physics and Biophysics, Tallinn, Estonia
A. Hektor, M. Kadastik, K. Kannike, M. Müntel, M. Raidal, L. Rebane
Helsinki Institute of Physics, Helsinki, Finland
E. Anttila, S. Czellar, J. Härkönen, A. Heikkinen, V. Karimäki, R. Kinnunen, J. Klem, M.J. Kortelainen, T. Lampén, K. Lassila-Perini, S. Lehti, T. Lindén, P. Luukka, T. Mäenpää, J. Nysten,
E. Tuominen, J. Tuominiemi, D. Ungaro, L. Wendland
Lappeenranta University of Technology, Lappeenranta, Finland
K. Banzuzi, A. Korpela, T. Tuuva
Laboratoire d’Annecy-le-Vieux de Physique des Particules, IN2P3-CNRS, Annecy-le-Vieux,
France
P. Nedelec, D. Sillou
DSM/IRFU, CEA/Saclay, Gif-sur-Yvette, France
M. Besancon, R. Chipaux, M. Dejardin, D. Denegri, J. Descamps, B. Fabbro, J.L. Faure, F. Ferri,
S. Ganjour, F.X. Gentit, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry, M.C. Lemaire,
E. Locci, J. Malcles, M. Marionneau, L. Millischer, J. Rander, A. Rosowsky, D. Rousseau,
M. Titov, P. Verrecchia
Laboratoire Leprince-Ringuet, Ecole Polytechnique, IN2P3-CNRS, Palaiseau, France
S. Baffioni, L. Bianchini, M. Bluj3 , P. Busson, C. Charlot, L. Dobrzynski, R. Granier de Cassagnac, M. Haguenauer, P. Miné, P. Paganini, Y. Sirois, C. Thiebaux, A. Zabi
Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg, Université de Haute
Alsace Mulhouse, CNRS/IN2P3, Strasbourg, France
J.-L. Agram4 , A. Besson, D. Bloch, D. Bodin, J.-M. Brom, E. Conte4 , F. Drouhin4 , J.-C. Fontaine4 ,
D. Gelé, U. Goerlach, L. Gross, P. Juillot, A.-C. Le Bihan, Y. Patois, J. Speck, P. Van Hove
Université de Lyon, Université Claude Bernard Lyon 1, CNRS-IN2P3, Institut de Physique
Nucléaire de Lyon, Villeurbanne, France
C. Baty, M. Bedjidian, J. Blaha, G. Boudoul, H. Brun, N. Chanon, R. Chierici, D. Contardo,
P. Depasse, T. Dupasquier, H. El Mamouni, F. Fassi5 , J. Fay, S. Gascon, B. Ille, T. Kurca, T. Le
Grand, M. Lethuillier, N. Lumb, L. Mirabito, S. Perries, M. Vander Donckt, P. Verdier
E. Andronikashvili Institute of Physics, Academy of Science, Tbilisi, Georgia
N. Djaoshvili, N. Roinishvili, V. Roinishvili
Institute of High Energy Physics and Informatization, Tbilisi State University, Tbilisi,
Georgia
N. Amaglobeli
RWTH Aachen University, I. Physikalisches Institut, Aachen, Germany
R. Adolphi, G. Anagnostou, R. Brauer, W. Braunschweig, M. Edelhoff, H. Esser, L. Feld,
W. Karpinski, A. Khomich, K. Klein, N. Mohr, A. Ostaptchouk, D. Pandoulas, G. Pierschel,
F. Raupach, S. Schael, A. Schultz von Dratzig, G. Schwering, D. Sprenger, M. Thomas, M. Weber,
B. Wittmer, M. Wlochal
RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany
O. Actis, G. Altenhöfer, W. Bender, P. Biallass, M. Erdmann, G. Fetchenhauer1 , J. Frangenheim,
T. Hebbeker, G. Hilgers, A. Hinzmann, K. Hoepfner, C. Hof, M. Kirsch, T. Klimkovich,
P. Kreuzer1 , D. Lanske† , M. Merschmeyer, A. Meyer, B. Philipps, H. Pieta, H. Reithler,
S.A. Schmitz, L. Sonnenschein, M. Sowa, J. Steggemann, H. Szczesny, D. Teyssier, C. Zeidler
RWTH Aachen University, III. Physikalisches Institut B, Aachen, Germany
M. Bontenackels, M. Davids, M. Duda, G. Flügge, H. Geenen, M. Giffels, W. Haj Ahmad, T. Hermanns, D. Heydhausen, S. Kalinin, T. Kress, A. Linn, A. Nowack, L. Perchalla, M. Poettgens,
O. Pooth, P. Sauerland, A. Stahl, D. Tornier, M.H. Zoeller
Deutsches Elektronen-Synchrotron, Hamburg, Germany
M. Aldaya Martin, U. Behrens, K. Borras, A. Campbell, E. Castro, D. Dammann, G. Eckerlin,
A. Flossdorf, G. Flucke, A. Geiser, D. Hatton, J. Hauk, H. Jung, M. Kasemann, I. Katkov,
C. Kleinwort, H. Kluge, A. Knutsson, E. Kuznetsova, W. Lange, W. Lohmann, R. Mankel1 ,
M. Marienfeld, A.B. Meyer, S. Miglioranzi, J. Mnich, M. Ohlerich, J. Olzem, A. Parenti,
C. Rosemann, R. Schmidt, T. Schoerner-Sadenius, D. Volyanskyy, C. Wissing, W.D. Zeuner1
University of Hamburg, Hamburg, Germany
C. Autermann, F. Bechtel, J. Draeger, D. Eckstein, U. Gebbert, K. Kaschube, G. Kaussen,
R. Klanner, B. Mura, S. Naumann-Emme, F. Nowak, U. Pein, C. Sander, P. Schleper, T. Schum,
H. Stadie, G. Steinbrück, J. Thomsen, R. Wolf
Institut für Experimentelle Kernphysik, Karlsruhe, Germany
J. Bauer, P. Blüm, V. Buege, A. Cakir, T. Chwalek, W. De Boer, A. Dierlamm, G. Dirkes,
M. Feindt, U. Felzmann, M. Frey, A. Furgeri, J. Gruschke, C. Hackstein, F. Hartmann1 ,
S. Heier, M. Heinrich, H. Held, D. Hirschbuehl, K.H. Hoffmann, S. Honc, C. Jung, T. Kuhr,
T. Liamsuwan, D. Martschei, S. Mueller, Th. Müller, M.B. Neuland, M. Niegel, O. Oberst,
A. Oehler, J. Ott, T. Peiffer, D. Piparo, G. Quast, K. Rabbertz, F. Ratnikov, N. Ratnikova, M. Renz,
C. Saout1 , G. Sartisohn, A. Scheurer, P. Schieferdecker, F.-P. Schilling, G. Schott, H.J. Simonis,
F.M. Stober, P. Sturm, D. Troendle, A. Trunov, W. Wagner, J. Wagner-Kuhr, M. Zeise, V. Zhukov6 ,
E.B. Ziebarth
Institute of Nuclear Physics ”Demokritos”, Aghia Paraskevi, Greece
G. Daskalakis, T. Geralis, K. Karafasoulis, A. Kyriakis, D. Loukas, A. Markou, C. Markou,
C. Mavrommatis, E. Petrakou, A. Zachariadou
University of Athens, Athens, Greece
L. Gouskos, P. Katsas, A. Panagiotou1
University of Ioánnina, Ioánnina, Greece
I. Evangelou, P. Kokkas, N. Manthos, I. Papadopoulos, V. Patras, F.A. Triantis
KFKI Research Institute for Particle and Nuclear Physics, Budapest, Hungary
G. Bencze1 , L. Boldizsar, G. Debreczeni, C. Hajdu1 , S. Hernath, P. Hidas, D. Horvath7 , K. Krajczar, A. Laszlo, G. Patay, F. Sikler, N. Toth, G. Vesztergombi
Institute of Nuclear Research ATOMKI, Debrecen, Hungary
N. Beni, G. Christian, J. Imrek, J. Molnar, D. Novak, J. Palinkas, G. Szekely, Z. Szillasi1 ,
K. Tokesi, V. Veszpremi
University of Debrecen, Debrecen, Hungary
A. Kapusi, G. Marian, P. Raics, Z. Szabo, Z.L. Trocsanyi, B. Ujvari, G. Zilizi
Panjab University, Chandigarh, India
S. Bansal, H.S. Bawa, S.B. Beri, V. Bhatnagar, M. Jindal, M. Kaur, R. Kaur, J.M. Kohli,
M.Z. Mehta, N. Nishu, L.K. Saini, A. Sharma, A. Singh, J.B. Singh, S.P. Singh
University of Delhi, Delhi, India
S. Ahuja, S. Arora, S. Bhattacharya8 , S. Chauhan, B.C. Choudhary, P. Gupta, S. Jain, S. Jain,
M. Jha, A. Kumar, K. Ranjan, R.K. Shivpuri, A.K. Srivastava
Bhabha Atomic Research Centre, Mumbai, India
R.K. Choudhury, D. Dutta, S. Kailas, S.K. Kataria, A.K. Mohanty, L.M. Pant, P. Shukla, A. Topkar
Tata Institute of Fundamental Research - EHEP, Mumbai, India
T. Aziz, M. Guchait9 , A. Gurtu, M. Maity10 , D. Majumder, G. Majumder, K. Mazumdar,
A. Nayak, A. Saha, K. Sudhakar
Tata Institute of Fundamental Research - HECR, Mumbai, India
S. Banerjee, S. Dugad, N.K. Mondal
Institute for Studies in Theoretical Physics & Mathematics (IPM), Tehran, Iran
H. Arfaei, H. Bakhshiansohi, A. Fahim, A. Jafari, M. Mohammadi Najafabadi, A. Moshaii,
S. Paktinat Mehdiabadi, S. Rouhani, B. Safarzadeh, M. Zeinali
University College Dublin, Dublin, Ireland
M. Felcini
INFN Sezione di Bari a , Università di Bari b , Politecnico di Bari c , Bari, Italy
M. Abbresciaa,b , L. Barbonea , F. Chiumaruloa , A. Clementea , A. Colaleoa , D. Creanzaa,c ,
G. Cuscelaa , N. De Filippisa , M. De Palmaa,b , G. De Robertisa , G. Donvitoa , F. Fedelea , L. Fiorea ,
M. Francoa , G. Iasellia,c , N. Lacalamitaa , F. Loddoa , L. Lusitoa,b , G. Maggia,c , M. Maggia ,
N. Mannaa,b , B. Marangellia,b , S. Mya,c , S. Natalia,b , S. Nuzzoa,b , G. Papagnia , S. Piccolomoa ,
G.A. Pierroa , C. Pintoa , A. Pompilia,b , G. Pugliesea,c , R. Rajana , A. Ranieria , F. Romanoa,c ,
G. Rosellia,b , G. Selvaggia,b , Y. Shindea , L. Silvestrisa , S. Tupputia,b , G. Zitoa
INFN Sezione di Bologna a , Universita di Bologna b , Bologna, Italy
G. Abbiendia , W. Bacchia,b , A.C. Benvenutia , M. Boldinia , D. Bonacorsia , S. BraibantGiacomellia,b , V.D. Cafaroa , S.S. Caiazzaa , P. Capiluppia,b , A. Castroa,b , F.R. Cavalloa ,
G. Codispotia,b , M. Cuffiania,b , I. D’Antonea , G.M. Dallavallea,1 , F. Fabbria , A. Fanfania,b ,
D. Fasanellaa , P. Giacomellia , V. Giordanoa , M. Giuntaa,1 , C. Grandia , M. Guerzonia ,
S. Marcellinia , G. Masettia,b , A. Montanaria , F.L. Navarriaa,b , F. Odoricia , G. Pellegrinia ,
A. Perrottaa , A.M. Rossia,b , T. Rovellia,b , G. Sirolia,b , G. Torromeoa , R. Travaglinia,b
INFN Sezione di Catania a , Universita di Catania b , Catania, Italy
S. Albergoa,b , S. Costaa,b , R. Potenzaa,b , A. Tricomia,b , C. Tuvea
INFN Sezione di Firenze a , Universita di Firenze b , Firenze, Italy
G. Barbaglia , G. Broccoloa,b , V. Ciullia,b , C. Civininia , R. D’Alessandroa,b , E. Focardia,b ,
S. Frosalia,b , E. Galloa , C. Gentaa,b , G. Landia,b , P. Lenzia,b,1 , M. Meschinia , S. Paolettia ,
G. Sguazzonia , A. Tropianoa
INFN Laboratori Nazionali di Frascati, Frascati, Italy
L. Benussi, M. Bertani, S. Bianco, S. Colafranceschi11 , D. Colonna11 , F. Fabbri, M. Giardoni,
L. Passamonti, D. Piccolo, D. Pierluigi, B. Ponzio, A. Russo
INFN Sezione di Genova, Genova, Italy
P. Fabbricatore, R. Musenich
INFN Sezione di Milano-Bicocca a , Universita di Milano-Bicocca b , Milano, Italy
A. Benagliaa , M. Callonia , G.B. Ceratia,b,1 , P. D’Angeloa , F. De Guioa , F.M. Farinaa , A. Ghezzia ,
P. Govonia,b , M. Malbertia,b,1 , S. Malvezzia , A. Martellia , D. Menascea , V. Miccioa,b , L. Moronia ,
P. Negria,b , M. Paganonia,b , D. Pedrinia , A. Pulliaa,b , S. Ragazzia,b , N. Redaellia , S. Salaa ,
R. Salernoa,b , T. Tabarelli de Fatisa,b , V. Tancinia,b , S. Taronia,b
INFN Sezione di Napoli a , Universita di Napoli ”Federico II” b , Napoli, Italy
S. Buontempoa , N. Cavalloa , A. Cimminoa,b,1 , M. De Gruttolaa,b,1 , F. Fabozzia,12 , A.O.M. Iorioa ,
L. Listaa , D. Lomidzea , P. Nolia,b , P. Paoluccia , C. Sciaccaa,b
INFN Sezione di Padova a , Università di Padova b , Padova, Italy
P. Azzia,1 , N. Bacchettaa , L. Barcellana , P. Bellana,b,1 , M. Bellatoa , M. Benettonia , M. Biasottoa,13 ,
D. Biselloa,b , E. Borsatoa,b , A. Brancaa , R. Carlina,b , L. Castellania , P. Checchiaa , E. Contia ,
F. Dal Corsoa , M. De Mattiaa,b , T. Dorigoa , U. Dossellia , F. Fanzagoa , F. Gasparinia,b ,
U. Gasparinia,b , P. Giubilatoa,b , F. Gonellaa , A. Greselea,14 , M. Gulminia,13 , A. Kaminskiya,b ,
S. Lacapraraa,13 , I. Lazzizzeraa,14 , M. Margonia,b , G. Marona,13 , S. Mattiazzoa,b , M. Mazzucatoa ,
M. Meneghellia , A.T. Meneguzzoa,b , M. Michelottoa , F. Montecassianoa , M. Nespoloa ,
M. Passaseoa , M. Pegoraroa , L. Perrozzia , N. Pozzobona,b , P. Ronchesea,b , F. Simonettoa,b ,
N. Tonioloa , E. Torassaa , M. Tosia,b , A. Triossia , S. Vaninia,b , S. Venturaa , P. Zottoa,b ,
G. Zumerlea,b
INFN Sezione di Pavia a , Universita di Pavia b , Pavia, Italy
P. Baessoa,b , U. Berzanoa , S. Bricolaa , M.M. Necchia,b , D. Paganoa,b , S.P. Rattia,b , C. Riccardia,b ,
P. Torrea,b , A. Vicinia , P. Vituloa,b , C. Viviania,b
INFN Sezione di Perugia a , Universita di Perugia b , Perugia, Italy
D. Aisaa , S. Aisaa , E. Babuccia , M. Biasinia,b , G.M. Bileia , B. Caponeria,b , B. Checcuccia , N. Dinua ,
L. Fanòa , L. Farnesinia , P. Laricciaa,b , A. Lucaronia,b , G. Mantovania,b , A. Nappia,b , A. Pilusoa ,
V. Postolachea , A. Santocchiaa,b , L. Servolia , D. Tonoiua , A. Vedaeea , R. Volpea,b
INFN Sezione di Pisa a , Universita di Pisa b , Scuola Normale Superiore di Pisa c , Pisa, Italy
P. Azzurria,c , G. Bagliesia , J. Bernardinia,b , L. Berrettaa , T. Boccalia , A. Boccia,c , L. Borrelloa,c ,
F. Bosia , F. Calzolaria , R. Castaldia , R. Dell’Orsoa , F. Fioria,b , L. Foàa,c , S. Gennaia,c , A. Giassia ,
A. Kraana , F. Ligabuea,c , T. Lomtadzea , F. Mariania , L. Martinia , M. Massaa , A. Messineoa,b ,
A. Moggia , F. Pallaa , F. Palmonaria , G. Petragnania , G. Petrucciania,c , F. Raffaellia , S. Sarkara ,
G. Segneria , A.T. Serbana , P. Spagnoloa,1 , R. Tenchinia,1 , S. Tolainia , G. Tonellia,b,1 , A. Venturia ,
P.G. Verdinia
INFN Sezione di Roma a , Universita di Roma ”La Sapienza” b , Roma, Italy
S. Baccaroa,15 , L. Baronea,b , A. Bartolonia , F. Cavallaria,1 , I. Dafineia , D. Del Rea,b , E. Di
Marcoa,b , M. Diemoza , D. Francia,b , E. Longoa,b , G. Organtinia,b , A. Palmaa,b , F. Pandolfia,b ,
R. Paramattia,1 , F. Pellegrinoa , S. Rahatloua,b , C. Rovellia
INFN Sezione di Torino a , Università di Torino b , Università del Piemonte Orientale (Novara) c , Torino, Italy
G. Alampia , N. Amapanea,b , R. Arcidiaconoa,b , S. Argiroa,b , M. Arneodoa,c , C. Biinoa ,
M.A. Borgiaa,b , C. Bottaa,b , N. Cartigliaa , R. Castelloa,b , G. Cerminaraa,b , M. Costaa,b ,
D. Dattolaa , G. Dellacasaa , N. Demariaa , G. Dugheraa , F. Dumitrachea , A. Grazianoa,b ,
C. Mariottia , M. Maronea,b , S. Masellia , E. Migliorea,b , G. Milaa,b , V. Monacoa,b , M. Musicha,b ,
M. Nervoa,b , M.M. Obertinoa,c , S. Oggeroa,b , R. Paneroa , N. Pastronea , M. Pelliccionia,b ,
A. Romeroa,b , M. Ruspaa,c , R. Sacchia,b , A. Solanoa,b , A. Staianoa , P.P. Trapania,b,1 , D. Trocinoa,b ,
A. Vilela Pereiraa,b , L. Viscaa,b , A. Zampieria
INFN Sezione di Trieste a , Universita di Trieste b , Trieste, Italy
F. Ambroglinia,b , S. Belfortea , F. Cossuttia , G. Della Riccaa,b , B. Gobboa , A. Penzoa
Kyungpook National University, Daegu, Korea
S. Chang, J. Chung, D.H. Kim, G.N. Kim, D.J. Kong, H. Park, D.C. Son
Wonkwang University, Iksan, Korea
S.Y. Bahk
Chonnam National University, Kwangju, Korea
S. Song
Konkuk University, Seoul, Korea
S.Y. Jung
Korea University, Seoul, Korea
B. Hong, H. Kim, J.H. Kim, K.S. Lee, D.H. Moon, S.K. Park, H.B. Rhee, K.S. Sim
Seoul National University, Seoul, Korea
J. Kim
University of Seoul, Seoul, Korea
M. Choi, G. Hahn, I.C. Park
Sungkyunkwan University, Suwon, Korea
S. Choi, Y. Choi, J. Goh, H. Jeong, T.J. Kim, J. Lee, S. Lee
Vilnius University, Vilnius, Lithuania
M. Janulis, D. Martisiute, P. Petrov, T. Sabonis
Centro de Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico
H. Castilla Valdez1 , A. Sánchez Hernández
Universidad Iberoamericana, Mexico City, Mexico
S. Carrillo Moreno
Universidad Autónoma de San Luis Potosí, San Luis Potosí, Mexico
A. Morelos Pineda
University of Auckland, Auckland, New Zealand
P. Allfrey, R.N.C. Gray, D. Krofcheck
University of Canterbury, Christchurch, New Zealand
N. Bernardino Rodrigues, P.H. Butler, T. Signal, J.C. Williams
National Centre for Physics, Quaid-I-Azam University, Islamabad, Pakistan
M. Ahmad, I. Ahmed, W. Ahmed, M.I. Asghar, M.I.M. Awan, H.R. Hoorani, I. Hussain,
W.A. Khan, T. Khurshid, S. Muhammad, S. Qazi, H. Shahzad
Institute of Experimental Physics, Warsaw, Poland
M. Cwiok, R. Dabrowski, W. Dominik, K. Doroba, M. Konecki, J. Krolikowski, K. Pozniak16 ,
R. Romaniuk, W. Zabolotny16 , P. Zych
Soltan Institute for Nuclear Studies, Warsaw, Poland
T. Frueboes, R. Gokieli, L. Goscilo, M. Górski, M. Kazana, K. Nawrocki, M. Szleper, G. Wrochna,
P. Zalewski
Laboratório de Instrumentação e Física Experimental de Partículas, Lisboa, Portugal
N. Almeida, L. Antunes Pedro, P. Bargassa, A. David, P. Faccioli, P.G. Ferreira Parracho,
M. Freitas Ferreira, M. Gallinaro, M. Guerra Jordao, P. Martins, G. Mini, P. Musella, J. Pela,
L. Raposo, P.Q. Ribeiro, S. Sampaio, J. Seixas, J. Silva, P. Silva, D. Soares, M. Sousa, J. Varela,
H.K. Wöhri
Joint Institute for Nuclear Research, Dubna, Russia
I. Altsybeev, I. Belotelov, P. Bunin, Y. Ershov, I. Filozova, M. Finger, M. Finger Jr., A. Golunov,
I. Golutvin, N. Gorbounov, V. Kalagin, A. Kamenev, V. Karjavin, V. Konoplyanikov, V. Korenkov, G. Kozlov, A. Kurenkov, A. Lanev, A. Makankin, V.V. Mitsyn, P. Moisenz, E. Nikonov,
D. Oleynik, V. Palichik, V. Perelygin, A. Petrosyan, R. Semenov, S. Shmatov, V. Smirnov,
D. Smolin, E. Tikhonenko, S. Vasil’ev, A. Vishnevskiy, A. Volodko, A. Zarubin, V. Zhiltsov
Petersburg Nuclear Physics Institute, Gatchina (St Petersburg), Russia
N. Bondar, L. Chtchipounov, A. Denisov, Y. Gavrikov, G. Gavrilov, V. Golovtsov, Y. Ivanov,
V. Kim, V. Kozlov, P. Levchenko, G. Obrant, E. Orishchin, A. Petrunin, Y. Shcheglov, A. Shchetkovskiy, V. Sknar, I. Smirnov, V. Sulimov, V. Tarakanov, L. Uvarov, S. Vavilov, G. Velichko,
S. Volkov, A. Vorobyev
Institute for Nuclear Research, Moscow, Russia
Yu. Andreev, A. Anisimov, P. Antipov, A. Dermenev, S. Gninenko, N. Golubev, M. Kirsanov,
N. Krasnikov, V. Matveev, A. Pashenkov, V.E. Postoev, A. Solovey, A. Solovey, A. Toropin,
S. Troitsky
Institute for Theoretical and Experimental Physics, Moscow, Russia
A. Baud, V. Epshteyn, V. Gavrilov, N. Ilina, V. Kaftanov† , V. Kolosov, M. Kossov1 , A. Krokhotin,
S. Kuleshov, A. Oulianov, G. Safronov, S. Semenov, I. Shreyber, V. Stolin, E. Vlasov, A. Zhokin
Moscow State University, Moscow, Russia
E. Boos, M. Dubinin17 , L. Dudko, A. Ershov, A. Gribushin, V. Klyukhin, O. Kodolova, I. Lokhtin,
S. Petrushanko, L. Sarycheva, V. Savrin, A. Snigirev, I. Vardanyan
P.N. Lebedev Physical Institute, Moscow, Russia
I. Dremin, M. Kirakosyan, N. Konovalova, S.V. Rusakov, A. Vinogradov
State Research Center of Russian Federation, Institute for High Energy Physics, Protvino,
Russia
S. Akimenko, A. Artamonov, I. Azhgirey, S. Bitioukov, V. Burtovoy, V. Grishin1 , V. Kachanov,
D. Konstantinov, V. Krychkine, A. Levine, I. Lobov, V. Lukanin, Y. Mel’nik, V. Petrov, R. Ryutin,
S. Slabospitsky, A. Sobol, A. Sytine, L. Tourtchanovitch, S. Troshin, N. Tyurin, A. Uzunian,
A. Volkov
Vinca Institute of Nuclear Sciences, Belgrade, Serbia
P. Adzic, M. Djordjevic, D. Jovanovic18 , D. Krpic18 , D. Maletic, J. Puzovic18 , N. Smiljkovic
Centro de Investigaciones Energéticas Medioambientales y Tecnológicas (CIEMAT),
Madrid, Spain
M. Aguilar-Benitez, J. Alberdi, J. Alcaraz Maestre, P. Arce, J.M. Barcala, C. Battilana, C. Burgos
Lazaro, J. Caballero Bejar, E. Calvo, M. Cardenas Montes, M. Cepeda, M. Cerrada, M. Chamizo
Llatas, F. Clemente, N. Colino, M. Daniel, B. De La Cruz, A. Delgado Peris, C. Diez Pardos,
C. Fernandez Bedoya, J.P. Fernández Ramos, A. Ferrando, J. Flix, M.C. Fouz, P. Garcia-Abia,
A.C. Garcia-Bonilla, O. Gonzalez Lopez, S. Goy Lopez, J.M. Hernandez, M.I. Josa, J. Marin,
G. Merino, J. Molina, A. Molinero, J.J. Navarrete, J.C. Oller, J. Puerta Pelayo, L. Romero,
J. Santaolalla, C. Villanueva Munoz, C. Willmott, C. Yuste
Universidad Autónoma de Madrid, Madrid, Spain
C. Albajar, M. Blanco Otano, J.F. de Trocóniz, A. Garcia Raboso, J.O. Lopez Berengueres
Universidad de Oviedo, Oviedo, Spain
J. Cuevas, J. Fernandez Menendez, I. Gonzalez Caballero, L. Lloret Iglesias, H. Naves Sordo,
J.M. Vizan Garcia
Instituto de Fı́sica de Cantabria (IFCA), CSIC-Universidad de Cantabria, Santander, Spain
I.J. Cabrillo, A. Calderon, S.H. Chuang, I. Diaz Merino, C. Diez Gonzalez, J. Duarte Campderros, M. Fernandez, G. Gomez, J. Gonzalez Sanchez, R. Gonzalez Suarez, C. Jorda, P. Lobelle
Pardo, A. Lopez Virto, J. Marco, R. Marco, C. Martinez Rivero, P. Martinez Ruiz del Arbol,
F. Matorras, T. Rodrigo, A. Ruiz Jimeno, L. Scodellaro, M. Sobron Sanudo, I. Vila, R. Vilar
Cortabitarte
CERN, European Organization for Nuclear Research, Geneva, Switzerland
D. Abbaneo, E. Albert, M. Alidra, S. Ashby, E. Auffray, J. Baechler, P. Baillon, A.H. Ball,
S.L. Bally, D. Barney, F. Beaudette19 , R. Bellan, D. Benedetti, G. Benelli, C. Bernet, P. Bloch,
S. Bolognesi, M. Bona, J. Bos, N. Bourgeois, T. Bourrel, H. Breuker, K. Bunkowski, D. Campi,
T. Camporesi, E. Cano, A. Cattai, J.P. Chatelain, M. Chauvey, T. Christiansen, J.A. Coarasa
Perez, A. Conde Garcia, R. Covarelli, B. Curé, A. De Roeck, V. Delachenal, D. Deyrail, S. Di
Vincenzo20 , S. Dos Santos, T. Dupont, L.M. Edera, A. Elliott-Peisert, M. Eppard, M. Favre,
N. Frank, W. Funk, A. Gaddi, M. Gastal, M. Gateau, H. Gerwig, D. Gigi, K. Gill, D. Giordano,
J.P. Girod, F. Glege, R. Gomez-Reino Garrido, R. Goudard, S. Gowdy, R. Guida, L. Guiducci,
J. Gutleber, M. Hansen, C. Hartl, J. Harvey, B. Hegner, H.F. Hoffmann, A. Holzner, A. Honma,
M. Huhtinen, V. Innocente, P. Janot, G. Le Godec, P. Lecoq, C. Leonidopoulos, R. Loos,
C. Lourenço, A. Lyonnet, A. Macpherson, N. Magini, J.D. Maillefaud, G. Maire, T. Mäki,
L. Malgeri, M. Mannelli, L. Masetti, F. Meijers, P. Meridiani, S. Mersi, E. Meschi, A. Meynet
Cordonnier, R. Moser, M. Mulders, J. Mulon, M. Noy, A. Oh, G. Olesen, A. Onnela, T. Orimoto,
L. Orsini, E. Perez, G. Perinic, J.F. Pernot, P. Petagna, P. Petiot, A. Petrilli, A. Pfeiffer, M. Pierini,
M. Pimiä, R. Pintus, B. Pirollet, H. Postema, A. Racz, S. Ravat, S.B. Rew, J. Rodrigues Antunes,
G. Rolandi21 , M. Rovere, V. Ryjov, H. Sakulin, D. Samyn, H. Sauce, C. Schäfer, W.D. Schlatter,
M. Schröder, C. Schwick, A. Sciaba, I. Segoni, A. Sharma, N. Siegrist, P. Siegrist, N. Sinanis,
T. Sobrier, P. Sphicas22 , D. Spiga, M. Spiropulu17 , F. Stöckli, P. Traczyk, P. Tropea, J. Troska,
A. Tsirou, L. Veillet, G.I. Veres, M. Voutilainen, P. Wertelaers, M. Zanetti
Paul Scherrer Institut, Villigen, Switzerland
W. Bertl, K. Deiters, W. Erdmann, K. Gabathuler, R. Horisberger, Q. Ingram, H.C. Kaestli,
S. König, D. Kotlinski, U. Langenegger, F. Meier, D. Renker, T. Rohe, J. Sibille23 ,
A. Starodumov24
Institute for Particle Physics, ETH Zurich, Zurich, Switzerland
B. Betev, L. Caminada25 , Z. Chen, S. Cittolin, D.R. Da Silva Di Calafiori, S. Dambach25 ,
G. Dissertori, M. Dittmar, C. Eggel25 , J. Eugster, G. Faber, K. Freudenreich, C. Grab, A. Hervé,
W. Hintz, P. Lecomte, P.D. Luckey, W. Lustermann, C. Marchica25 , P. Milenovic26 , F. Moortgat, A. Nardulli, F. Nessi-Tedaldi, L. Pape, F. Pauss, T. Punz, A. Rizzi, F.J. Ronga, L. Sala,
A.K. Sanchez, M.-C. Sawley, V. Sordini, B. Stieger, L. Tauscher† , A. Thea, K. Theofilatos,
D. Treille, P. Trüb25 , M. Weber, L. Wehrli, J. Weng, S. Zelepoukine27
Universität Zürich, Zurich, Switzerland
C. Amsler, V. Chiochia, S. De Visscher, C. Regenfus, P. Robmann, T. Rommerskirchen,
A. Schmidt, D. Tsirigkas, L. Wilke
National Central University, Chung-Li, Taiwan
Y.H. Chang, E.A. Chen, W.T. Chen, A. Go, C.M. Kuo, S.W. Li, W. Lin
National Taiwan University (NTU), Taipei, Taiwan
P. Bartalini, P. Chang, Y. Chao, K.F. Chen, W.-S. Hou, Y. Hsiung, Y.J. Lei, S.W. Lin, R.-S. Lu,
J. Schümann, J.G. Shiu, Y.M. Tzeng, K. Ueno, Y. Velikzhanin, C.C. Wang, M. Wang
Cukurova University, Adana, Turkey
A. Adiguzel, A. Ayhan, A. Azman Gokce, M.N. Bakirci, S. Cerci, I. Dumanoglu, E. Eskut,
S. Girgis, E. Gurpinar, I. Hos, T. Karaman, T. Karaman, A. Kayis Topaksu, P. Kurt, G. Önengüt,
G. Önengüt Gökbulut, K. Ozdemir, S. Ozturk, A. Polatöz, K. Sogut28 , B. Tali, H. Topakli,
D. Uzun, L.N. Vergili, M. Vergili
Middle East Technical University, Physics Department, Ankara, Turkey
I.V. Akin, T. Aliev, S. Bilmis, M. Deniz, H. Gamsizkan, A.M. Guler, K. Öcalan, M. Serin, R. Sever,
U.E. Surat, M. Zeyrek
Bogaziçi University, Department of Physics, Istanbul, Turkey
M. Deliomeroglu, D. Demir29 , E. Gülmez, A. Halu, B. Isildak, M. Kaya30 , O. Kaya30 , S. Ozkorucuklu31 , N. Sonmez32
National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov, Ukraine
L. Levchuk, S. Lukyanenko, D. Soroka, S. Zub
University of Bristol, Bristol, United Kingdom
F. Bostock, J.J. Brooke, T.L. Cheng, D. Cussans, R. Frazier, J. Goldstein, N. Grant,
M. Hansen, G.P. Heath, H.F. Heath, C. Hill, B. Huckvale, J. Jackson, C.K. Mackay, S. Metson,
D.M. Newbold33 , K. Nirunpong, V.J. Smith, J. Velthuis, R. Walton
Rutherford Appleton Laboratory, Didcot, United Kingdom
K.W. Bell, C. Brew, R.M. Brown, B. Camanzi, D.J.A. Cockerill, J.A. Coughlan, N.I. Geddes,
K. Harder, S. Harper, B.W. Kennedy, P. Murray, C.H. Shepherd-Themistocleous, I.R. Tomalin,
J.H. Williams† , W.J. Womersley, S.D. Worm
Imperial College, University of London, London, United Kingdom
R. Bainbridge, G. Ball, J. Ballin, R. Beuselinck, O. Buchmuller, D. Colling, N. Cripps, G. Davies,
M. Della Negra, C. Foudas, J. Fulcher, D. Futyan, G. Hall, J. Hays, G. Iles, G. Karapostoli, B.C. MacEvoy, A.-M. Magnan, J. Marrouche, J. Nash, A. Nikitenko24 , A. Papageorgiou,
M. Pesaresi, K. Petridis, M. Pioppi34 , D.M. Raymond, N. Rompotis, A. Rose, M.J. Ryan,
C. Seez, P. Sharp, G. Sidiropoulos1 , M. Stettler, M. Stoye, M. Takahashi, A. Tapper, C. Timlin,
S. Tourneur, M. Vazquez Acosta, T. Virdee1 , S. Wakefield, D. Wardrope, T. Whyntie, M. Wingham
Brunel University, Uxbridge, United Kingdom
J.E. Cole, I. Goitom, P.R. Hobson, A. Khan, P. Kyberd, D. Leslie, C. Munro, I.D. Reid,
C. Siamitros, R. Taylor, L. Teodorescu, I. Yaselli
Boston University, Boston, USA
T. Bose, M. Carleton, E. Hazen, A.H. Heering, A. Heister, J. St. John, P. Lawson, D. Lazic,
D. Osborne, J. Rohlf, L. Sulak, S. Wu
Brown University, Providence, USA
J. Andrea, A. Avetisyan, S. Bhattacharya, J.P. Chou, D. Cutts, S. Esen, G. Kukartsev, G. Landsberg, M. Narain, D. Nguyen, T. Speer, K.V. Tsang
University of California, Davis, Davis, USA
R. Breedon, M. Calderon De La Barca Sanchez, M. Case, D. Cebra, M. Chertok, J. Conway,
P.T. Cox, J. Dolen, R. Erbacher, E. Friis, W. Ko, A. Kopecky, R. Lander, A. Lister, H. Liu,
S. Maruyama, T. Miceli, M. Nikolic, D. Pellett, J. Robles, M. Searle, J. Smith, M. Squires, J. Stilley,
M. Tripathi, R. Vasquez Sierra, C. Veelken
University of California, Los Angeles, Los Angeles, USA
V. Andreev, K. Arisaka, D. Cline, R. Cousins, S. Erhan1 , J. Hauser, M. Ignatenko, C. Jarvis,
J. Mumford, C. Plager, G. Rakness, P. Schlein† , J. Tucker, V. Valuev, R. Wallny, X. Yang
University of California, Riverside, Riverside, USA
J. Babb, M. Bose, A. Chandra, R. Clare, J.A. Ellison, J.W. Gary, G. Hanson, G.Y. Jeng, S.C. Kao,
F. Liu, H. Liu, A. Luthra, H. Nguyen, G. Pasztor35 , A. Satpathy, B.C. Shen† , R. Stringer, J. Sturdy,
V. Sytnik, R. Wilken, S. Wimpenny
University of California, San Diego, La Jolla, USA
J.G. Branson, E. Dusinberre, D. Evans, F. Golf, R. Kelley, M. Lebourgeois, J. Letts, E. Lipeles,
B. Mangano, J. Muelmenstaedt, M. Norman, S. Padhi, A. Petrucci, H. Pi, M. Pieri, R. Ranieri,
M. Sani, V. Sharma, S. Simon, F. Würthwein, A. Yagil
University of California, Santa Barbara, Santa Barbara, USA
C. Campagnari, M. D’Alfonso, T. Danielson, J. Garberson, J. Incandela, C. Justus, P. Kalavase,
S.A. Koay, D. Kovalskyi, V. Krutelyov, J. Lamb, S. Lowette, V. Pavlunin, F. Rebassoo, J. Ribnik,
J. Richman, R. Rossin, D. Stuart, W. To, J.R. Vlimant, M. Witherell
California Institute of Technology, Pasadena, USA
A. Apresyan, A. Bornheim, J. Bunn, M. Chiorboli, M. Gataullin, D. Kcira, V. Litvine, Y. Ma,
H.B. Newman, C. Rogan, V. Timciuc, J. Veverka, R. Wilkinson, Y. Yang, L. Zhang, K. Zhu,
R.Y. Zhu
Carnegie Mellon University, Pittsburgh, USA
B. Akgun, R. Carroll, T. Ferguson, D.W. Jang, S.Y. Jun, M. Paulini, J. Russ, N. Terentyev,
H. Vogel, I. Vorobiev
University of Colorado at Boulder, Boulder, USA
J.P. Cumalat, M.E. Dinardo, B.R. Drell, W.T. Ford, B. Heyburn, E. Luiggi Lopez, U. Nauenberg,
K. Stenson, K. Ulmer, S.R. Wagner, S.L. Zang
Cornell University, Ithaca, USA
L. Agostino, J. Alexander, F. Blekman, D. Cassel, A. Chatterjee, S. Das, L.K. Gibbons, B. Heltsley,
W. Hopkins, A. Khukhunaishvili, B. Kreis, V. Kuznetsov, J.R. Patterson, D. Puigh, A. Ryd, X. Shi,
S. Stroiney, W. Sun, W.D. Teo, J. Thom, J. Vaughan, Y. Weng, P. Wittich
Fairfield University, Fairfield, USA
C.P. Beetz, G. Cirino, C. Sanzeni, D. Winn
Fermi National Accelerator Laboratory, Batavia, USA
S. Abdullin, M.A. Afaq1 , M. Albrow, B. Ananthan, G. Apollinari, M. Atac, W. Badgett, L. Bagby,
J.A. Bakken, B. Baldin, S. Banerjee, K. Banicz, L.A.T. Bauerdick, A. Beretvas, J. Berryhill,
P.C. Bhat, K. Biery, M. Binkley, I. Bloch, F. Borcherding, A.M. Brett, K. Burkett, J.N. Butler,
V. Chetluru, H.W.K. Cheung, F. Chlebana, I. Churin, S. Cihangir, M. Crawford, W. Dagenhart,
M. Demarteau, G. Derylo, D. Dykstra, D.P. Eartly, J.E. Elias, V.D. Elvira, D. Evans, L. Feng,
M. Fischler, I. Fisk, S. Foulkes, J. Freeman, P. Gartung, E. Gottschalk, T. Grassi, D. Green,
Y. Guo, O. Gutsche, A. Hahn, J. Hanlon, R.M. Harris, B. Holzman, J. Howell, D. Hufnagel,
E. James, H. Jensen, M. Johnson, C.D. Jones, U. Joshi, E. Juska, J. Kaiser, B. Klima, S. Kossiakov,
K. Kousouris, S. Kwan, C.M. Lei, P. Limon, J.A. Lopez Perez, S. Los, L. Lueking, G. Lukhanin,
S. Lusin1 , J. Lykken, K. Maeshima, J.M. Marraffino, D. Mason, P. McBride, T. Miao, K. Mishra,
S. Moccia, R. Mommsen, S. Mrenna, A.S. Muhammad, C. Newman-Holmes, C. Noeding,
V. O’Dell, O. Prokofyev, R. Rivera, C.H. Rivetta, A. Ronzhin, P. Rossman, S. Ryu, V. Sekhri,
E. Sexton-Kennedy, I. Sfiligoi, S. Sharma, T.M. Shaw, D. Shpakov, E. Skup, R.P. Smith† , A. Soha,
W.J. Spalding, L. Spiegel, I. Suzuki, P. Tan, W. Tanenbaum, S. Tkaczyk1 , R. Trentadue1 , L. Uplegger, E.W. Vaandering, R. Vidal, J. Whitmore, E. Wicklund, W. Wu, J. Yarba, F. Yumiceva,
J.C. Yun
University of Florida, Gainesville, USA
D. Acosta, P. Avery, V. Barashko, D. Bourilkov, M. Chen, G.P. Di Giovanni, D. Dobur,
A. Drozdetskiy, R.D. Field, Y. Fu, I.K. Furic, J. Gartner, D. Holmes, B. Kim, S. Klimenko,
J. Konigsberg, A. Korytov, K. Kotov, A. Kropivnitskaya, T. Kypreos, A. Madorsky, K. Matchev,
G. Mitselmakher, Y. Pakhotin, J. Piedra Gomez, C. Prescott, V. Rapsevicius, R. Remington,
M. Schmitt, B. Scurlock, D. Wang, J. Yelton
Florida International University, Miami, USA
C. Ceron, V. Gaultney, L. Kramer, L.M. Lebolo, S. Linn, P. Markowitz, G. Martinez, J.L. Rodriguez
Florida State University, Tallahassee, USA
T. Adams, A. Askew, H. Baer, M. Bertoldi, J. Chen, W.G.D. Dharmaratna, S.V. Gleyzer, J. Haas,
S. Hagopian, V. Hagopian, M. Jenkins, K.F. Johnson, E. Prettner, H. Prosper, S. Sekmen
Florida Institute of Technology, Melbourne, USA
M.M. Baarmand, S. Guragain, M. Hohlmann, H. Kalakhety, H. Mermerkaya, R. Ralich, I. Vodopiyanov
University of Illinois at Chicago (UIC), Chicago, USA
B. Abelev, M.R. Adams, I.M. Anghel, L. Apanasevich, V.E. Bazterra, R.R. Betts, J. Callner,
M.A. Castro, R. Cavanaugh, C. Dragoiu, E.J. Garcia-Solis, C.E. Gerber, D.J. Hofman, S. Khalatian, C. Mironov, E. Shabalina, A. Smoron, N. Varelas
The University of Iowa, Iowa City, USA
U. Akgun, E.A. Albayrak, A.S. Ayan, B. Bilki, R. Briggs, K. Cankocak36 , K. Chung, W. Clarida,
P. Debbins, F. Duru, F.D. Ingram, C.K. Lae, E. McCliment, J.-P. Merlo, A. Mestvirishvili,
M.J. Miller, A. Moeller, J. Nachtman, C.R. Newsom, E. Norbeck, J. Olson, Y. Onel, F. Ozok,
J. Parsons, I. Schmidt, S. Sen, J. Wetzel, T. Yetkin, K. Yi
Johns Hopkins University, Baltimore, USA
B.A. Barnett, B. Blumenfeld, A. Bonato, C.Y. Chien, D. Fehling, G. Giurgiu, A.V. Gritsan,
Z.J. Guo, P. Maksimovic, S. Rappoccio, M. Swartz, N.V. Tran, Y. Zhang
The University of Kansas, Lawrence, USA
P. Baringer, A. Bean, O. Grachov, M. Murray, V. Radicci, S. Sanders, J.S. Wood, V. Zhukova
Kansas State University, Manhattan, USA
D. Bandurin, T. Bolton, K. Kaadze, A. Liu, Y. Maravin, D. Onoprienko, I. Svintradze, Z. Wan
Lawrence Livermore National Laboratory, Livermore, USA
J. Gronberg, J. Hollar, D. Lange, D. Wright
University of Maryland, College Park, USA
D. Baden, R. Bard, M. Boutemeur, S.C. Eno, D. Ferencek, N.J. Hadley, R.G. Kellogg, M. Kirn,
S. Kunori, K. Rossato, P. Rumerio, F. Santanastasio, A. Skuja, J. Temple, M.B. Tonjes, S.C. Tonwar, T. Toole, E. Twedt
Massachusetts Institute of Technology, Cambridge, USA
B. Alver, G. Bauer, J. Bendavid, W. Busza, E. Butz, I.A. Cali, M. Chan, D. D’Enterria, P. Everaerts,
G. Gomez Ceballos, K.A. Hahn, P. Harris, S. Jaditz, Y. Kim, M. Klute, Y.-J. Lee, W. Li, C. Loizides,
T. Ma, M. Miller, S. Nahn, C. Paus, C. Roland, G. Roland, M. Rudolph, G. Stephans, K. Sumorok,
K. Sung, S. Vaurynovich, E.A. Wenger, B. Wyslouch, S. Xie, Y. Yilmaz, A.S. Yoon
University of Minnesota, Minneapolis, USA
D. Bailleux, S.I. Cooper, P. Cushman, B. Dahmes, A. De Benedetti, A. Dolgopolov, P.R. Dudero,
R. Egeland, G. Franzoni, J. Haupt, A. Inyakin37 , K. Klapoetke, Y. Kubota, J. Mans, N. Mirman,
D. Petyt, V. Rekovic, R. Rusack, M. Schroeder, A. Singovsky, J. Zhang
University of Mississippi, University, USA
L.M. Cremaldi, R. Godang, R. Kroeger, L. Perera, R. Rahmat, D.A. Sanders, P. Sonnek, D. Summers
University of Nebraska-Lincoln, Lincoln, USA
K. Bloom, B. Bockelman, S. Bose, J. Butt, D.R. Claes, A. Dominguez, M. Eads, J. Keller, T. Kelly,
I. Kravchenko, J. Lazo-Flores, C. Lundstedt, H. Malbouisson, S. Malik, G.R. Snow
State University of New York at Buffalo, Buffalo, USA
U. Baur, I. Iashvili, A. Kharchilava, A. Kumar, K. Smith, M. Strang
Northeastern University, Boston, USA
G. Alverson, E. Barberis, O. Boeriu, G. Eulisse, G. Govi, T. McCauley, Y. Musienko38 , S. Muzaffar, I. Osborne, T. Paul, S. Reucroft, J. Swain, L. Taylor, L. Tuura
Northwestern University, Evanston, USA
A. Anastassov, B. Gobbi, A. Kubik, R.A. Ofierzynski, A. Pozdnyakov, M. Schmitt, S. Stoynev,
M. Velasco, S. Won
University of Notre Dame, Notre Dame, USA
L. Antonelli, D. Berry, M. Hildreth, C. Jessop, D.J. Karmgard, T. Kolberg, K. Lannon, S. Lynch,
N. Marinelli, D.M. Morse, R. Ruchti, J. Slaunwhite, J. Warchol, M. Wayne
The Ohio State University, Columbus, USA
B. Bylsma, L.S. Durkin, J. Gilmore39 , J. Gu, P. Killewald, T.Y. Ling, G. Williams
Princeton University, Princeton, USA
N. Adam, E. Berry, P. Elmer, A. Garmash, D. Gerbaudo, V. Halyo, A. Hunt, J. Jones, E. Laird,
D. Marlow, T. Medvedeva, M. Mooney, J. Olsen, P. Piroué, D. Stickland, C. Tully, J.S. Werner,
T. Wildish, Z. Xie, A. Zuranski
University of Puerto Rico, Mayaguez, USA
J.G. Acosta, M. Bonnett Del Alamo, X.T. Huang, A. Lopez, H. Mendez, S. Oliveros, J.E. Ramirez
Vargas, N. Santacruz, A. Zatzerklyany
Purdue University, West Lafayette, USA
E. Alagoz, E. Antillon, V.E. Barnes, G. Bolla, D. Bortoletto, A. Everett, A.F. Garfinkel, Z. Gecse,
L. Gutay, N. Ippolito, M. Jones, O. Koybasi, A.T. Laasanen, N. Leonardo, C. Liu, V. Maroussov,
P. Merkel, D.H. Miller, N. Neumeister, A. Sedov, I. Shipsey, H.D. Yoo, Y. Zheng
Purdue University Calumet, Hammond, USA
P. Jindal, N. Parashar
Rice University, Houston, USA
V. Cuplov, K.M. Ecklund, F.J.M. Geurts, J.H. Liu, D. Maronde, M. Matveev, B.P. Padley,
R. Redjimi, J. Roberts, L. Sabbatini, A. Tumanov
University of Rochester, Rochester, USA
B. Betchart, A. Bodek, H. Budd, Y.S. Chung, P. de Barbaro, R. Demina, H. Flacher, Y. Gotra,
A. Harel, S. Korjenevski, D.C. Miner, D. Orbaker, G. Petrillo, D. Vishnevskiy, M. Zielinski
The Rockefeller University, New York, USA
A. Bhatti, L. Demortier, K. Goulianos, K. Hatakeyama, G. Lungu, C. Mesropian, M. Yan
Rutgers, the State University of New Jersey, Piscataway, USA
O. Atramentov, E. Bartz, Y. Gershtein, E. Halkiadakis, D. Hits, A. Lath, K. Rose, S. Schnetzer,
S. Somalwar, R. Stone, S. Thomas, T.L. Watts
University of Tennessee, Knoxville, USA
G. Cerizza, M. Hollingsworth, S. Spanier, Z.C. Yang, A. York
Texas A&M University, College Station, USA
J. Asaadi, A. Aurisano, R. Eusebi, A. Golyash, A. Gurrola, T. Kamon, C.N. Nguyen, J. Pivarski,
A. Safonov, S. Sengupta, D. Toback, M. Weinberger
Texas Tech University, Lubbock, USA
N. Akchurin, L. Berntzon, K. Gumus, C. Jeong, H. Kim, S.W. Lee, S. Popescu, Y. Roh, A. Sill,
I. Volobouev, E. Washington, R. Wigmans, E. Yazgan
Vanderbilt University, Nashville, USA
D. Engh, C. Florez, W. Johns, S. Pathak, P. Sheldon
University of Virginia, Charlottesville, USA
D. Andelin, M.W. Arenton, M. Balazs, S. Boutle, M. Buehler, S. Conetti, B. Cox, R. Hirosky,
A. Ledovskoy, C. Neu, D. Phillips II, M. Ronquest, R. Yohay
Wayne State University, Detroit, USA
S. Gollapinni, K. Gunthoti, R. Harr, P.E. Karchin, M. Mattson, A. Sakharov
University of Wisconsin, Madison, USA
M. Anderson, M. Bachtis, J.N. Bellinger, D. Carlsmith, I. Crotty1 , S. Dasu, S. Dutta, J. Efron,
F. Feyzi, K. Flood, L. Gray, K.S. Grogg, M. Grothe, R. Hall-Wilton1 , M. Jaworski, P. Klabbers,
J. Klukas, A. Lanaro, C. Lazaridis, J. Leonard, R. Loveless, M. Magrans de Abril, A. Mohapatra,
G. Ott, G. Polese, D. Reeder, A. Savin, W.H. Smith, A. Sourkov40 , J. Swanson, M. Weinberg,
D. Wenman, M. Wensveen, A. White
†: Deceased
1: Also at CERN, European Organization for Nuclear Research, Geneva, Switzerland
2: Also at Universidade Federal do ABC, Santo Andre, Brazil
3: Also at Soltan Institute for Nuclear Studies, Warsaw, Poland
4: Also at Université de Haute-Alsace, Mulhouse, France
5: Also at Centre de Calcul de l’Institut National de Physique Nucleaire et de Physique des
Particules (IN2P3), Villeurbanne, France
6: Also at Moscow State University, Moscow, Russia
7: Also at Institute of Nuclear Research ATOMKI, Debrecen, Hungary
8: Also at University of California, San Diego, La Jolla, USA
9: Also at Tata Institute of Fundamental Research - HECR, Mumbai, India
10: Also at University of Visva-Bharati, Santiniketan, India
11: Also at Facolta’ Ingegneria Universita’ di Roma ”La Sapienza”, Roma, Italy
12: Also at Università della Basilicata, Potenza, Italy
13: Also at Laboratori Nazionali di Legnaro dell’ INFN, Legnaro, Italy
14: Also at Università di Trento, Trento, Italy
15: Also at ENEA - Casaccia Research Center, S. Maria di Galeria, Italy
16: Also at Warsaw University of Technology, Institute of Electronic Systems, Warsaw, Poland
17: Also at California Institute of Technology, Pasadena, USA
18: Also at Faculty of Physics of University of Belgrade, Belgrade, Serbia
19: Also at Laboratoire Leprince-Ringuet, Ecole Polytechnique, IN2P3-CNRS, Palaiseau, France
20: Also at Alstom Contracting, Geneve, Switzerland
21: Also at Scuola Normale e Sezione dell’ INFN, Pisa, Italy
22: Also at University of Athens, Athens, Greece
23: Also at The University of Kansas, Lawrence, USA
24: Also at Institute for Theoretical and Experimental Physics, Moscow, Russia
25: Also at Paul Scherrer Institut, Villigen, Switzerland
26: Also at Vinca Institute of Nuclear Sciences, Belgrade, Serbia
27: Also at University of Wisconsin, Madison, USA
28: Also at Mersin University, Mersin, Turkey
29: Also at Izmir Institute of Technology, Izmir, Turkey
30: Also at Kafkas University, Kars, Turkey
31: Also at Suleyman Demirel University, Isparta, Turkey
32: Also at Ege University, Izmir, Turkey
33: Also at Rutherford Appleton Laboratory, Didcot, United Kingdom
34: Also at INFN Sezione di Perugia; Universita di Perugia, Perugia, Italy
35: Also at KFKI Research Institute for Particle and Nuclear Physics, Budapest, Hungary
36: Also at Istanbul Technical University, Istanbul, Turkey
37: Also at University of Minnesota, Minneapolis, USA
38: Also at Institute for Nuclear Research, Moscow, Russia
39: Also at Texas A&M University, College Station, USA
40: Also at State Research Center of Russian Federation, Institute for High Energy Physics,
Protvino, Russia