Papers by Segev Wasserkrug
Events are the main input of event-based systems. Some events are generated externally and flow across distributed systems, while other events and their content need to be inferred by the event-based system itself. Such inference involves a clear trade-off between inferring events with certainty, using full and complete information, and the need to provide quick notification of newly revealed events. Timely event inference is therefore hampered by the gap between the actual occurrences of events, to which the system must respond, and the ability of event-based systems to accurately infer these events. This gap results in uncertainty and may be attributed to unreliable data sources (e.g., an inaccurate sensor reading), unreliable networks (e.g., packet drops at routers), the use of fuzzy terminology in reports (e.g., "normal temperature"), or the inability to determine with certainty whether a phenomenon has occurred (e.g., declaring an epidemic). In this chapter we present the state of the art in event processing over uncertain data. We provide a classification of uncertainty in event-based systems, define a model for event processing over uncertain data, and propose algorithmic solutions for handling uncertainty. We also define, for demonstration purposes, a simple pattern language that supports uncertainty, and detail open issues and challenges in this research area.
There exist many problem-agnostic frameworks and algorithms for parallel simulation. However, creating parallel simulation models that take advantage of characteristics specific to either the problem domain or a specific model can yield significant performance benefits. This article provides an overview of general frameworks and algorithms for parallelizing simulation execution, and also demonstrates two ways in which assumptions underlying the implementations of epidemiological models can be used to enable such parallelization in an efficient manner. These examples are based on planning and development activities for agent-based models carried out as part of the NIH's MIDAS (Models of Infectious Disease Agent Study) family of grants.
In recent years, there has been an increased need for the use of active systems – systems that include substantial processing which should be triggered by events. In many cases, however, there is an information gap between the actual occurrences of events to which such a system must respond, and the data generated by monitoring tools regarding these events. For example, some events, by their very nature, may not be signaled by any monitoring tools, or the inaccuracy of monitoring tools may incorrectly reflect the information associated with events. The result is that in many cases, there is uncertainty in the active system associated with event occurrence. In this paper, we provide a taxonomy of the sources of this uncertainty. Furthermore, we provide a formal way to represent this uncertainty, which is the first step towards addressing the aforementioned information gap.
This paper presents initial research into a framework (specification and execution model) for inference, prediction, and decision making with uncertain events in active systems. This work is motivated by the observation that in many cases, there is a gap between the reported events that are used as a direct input to an active system, and the actual events upon which an active system must act. This paper motivates the work, surveys other efforts in this area, and presents preliminary ideas for both the specification and the execution model.
Enterprises today wish to manage their IT resources so as to optimize business objectives, such as income, rather than IT metrics, such as response times. Therefore, we introduce a new paradigm, which focuses on such business-objective-oriented resource management. Additionally, we define a general simulation-based autonomous process enabling such optimizations, and describe a case study demonstrating the usefulness of such a process.
Emergency Departments (EDs) require advanced support systems for monitoring and controlling their processes: clinical, operational, and financial. A prerequisite for such a system is comprehensive operational information (e.g., queueing times, busy resources), reliably portraying and predicting ED status as it evolves in time. To this end, simulation comes to the rescue, through a two-step procedure that is hereby proposed for supporting real-time ED control. In the first step, an ED manager infers the ED's current state, based on historical data and simulation: data is fed into the simulator (e.g., via location-tracking systems, such as RFID tags), and the simulator then completes unobservable state-components. In the second step, and based on the inferred present state, simulation supports control by predicting future ED scenarios. To this end, we estimate time-varying resource requirements via a novel simulation-based technique that utilizes the notion of offered load. This two-step design is motivated by the fact that presently in EDs, available information systems provide only partial, and often inaccurate, information regarding the current operational state. Therefore, an additional capability required by the command-and-control solution is a model-based estimation of the current state.
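The offered-load notion mentioned in this abstract can be illustrated with a toy sketch. This is not the paper's simulation-based estimator; it is the classical lagged infinite-server approximation, with entirely hypothetical arrival-rate and length-of-stay numbers:

```python
import math

def offered_load(arrival_rate, mean_los, t):
    """Lagged offered-load approximation R(t) ~= lambda(t - E[S]) * E[S].

    A classical heuristic for time-varying systems with roughly exponential
    lengths of stay: lag the arrival rate by one mean length of stay, then
    multiply by the mean stay to get expected concurrent demand.
    """
    return arrival_rate(t - mean_los) * mean_los

# Hypothetical sinusoidal ED arrival rate (patients/hour) and mean stay (hours).
lam = lambda t: 6.0 + 4.0 * math.sin(2.0 * math.pi * t / 24.0)
mean_los = 3.0

for hour in (8, 14, 20):
    print(f"hour {hour:2d}: offered load ~ {offered_load(lam, mean_los, hour):.1f} beds")
```

With a constant arrival rate the formula reduces to the familiar stationary offered load, arrival rate times mean stay.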
Efficiency is critical to the profitability of software maintenance and support organizations. Managing such organizations effectively requires suitable measures of efficiency that are sensitive enough to detect significant changes, and accurate and timely in detecting them. Mean time to close problem reports is the most commonly used efficiency measure, but its suitability has not been evaluated carefully. We performed such an evaluation by mining and analyzing many years of support data on multiple IBM products. Our preliminary results suggest that the mean is less sensitive and accurate than percentiles in cases that are particularly important in the maintenance and support domain. Using percentiles, we also identified statistical techniques to detect efficiency trends and evaluated their accuracy. Although preliminary, these results may have significant ramifications for effectively measuring and improving software maintenance and support processes.
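As a minimal illustration of why the mean can mislead where a percentile does not, consider hypothetical days-to-close samples (made-up numbers, not the IBM data analyzed in the paper): one long-tail outlier shifts the mean sharply while the median is unmoved.

```python
import statistics

def percentile(data, p):
    """Percentile via linear interpolation between sorted order statistics."""
    xs = sorted(data)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical days-to-close: mostly quick closes, a few long tails.
before = [1, 2, 2, 3, 3, 4, 5, 6, 30, 60]
after  = [1, 2, 2, 3, 3, 4, 5, 6, 30, 200]  # a single outlier grew

print("mean before/after:", statistics.mean(before), statistics.mean(after))
print("p50  before/after:", percentile(before, 50), percentile(after, 50))
```

The mean jumps from 11.6 to 25.6 days although typical closes are unchanged; the 50th percentile stays at 3.5 in both samples.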
International Journal of Services Operations and Informatics, 2008
... Segev is currently the technical lead for the SWOPS workforce management tool, and is an IBM research sub-strategist in the area of Services Analytics and Optimisation. 1 Introduction: The beginning of the 21st century marks the era in which the structure of business begins to ...
IBM Journal of Research and Development, 2011
Asset-intensive businesses across industries rely on physical assets to deliver services to their customers, and effective asset management is critical to the businesses. Today, businesses may make use of enterprise asset-management (EAM) solutions for many asset-related processes, ranging from the core asset-management functions to maintenance, inventory, contracts, warranties, procurement, and customer-service management. While EAM solutions have transformed the operational aspects of asset management through data capture and process automation, the decision-making process with respect to assets still heavily relies on institutional knowledge and anecdotal insights. Analytics-driven asset management is an approach that makes use of advanced analytics and optimization technologies to transform the vast amounts of data from asset management, metering, and sensor systems into actionable insight, foresight, and prescriptions that can guide decisions involving strategic and tactical assets, as well as customer and business models.
Emergency Departments (EDs) are highly dynamic environments comprising complex multi-dimensional patient-care processes. In recent decades, there has been increased pressure to improve ED services, while taking into account various aspects such as clinical quality, operational efficiency, and cost performance. Unfortunately, the information systems in today’s EDs cannot access the data required to provide a holistic view of the ED in a complete and timely fashion. What does exist is a set of disjoint information systems that provide some of the required data, without any additional structured tools to manage the ED processes. We present a concept for the design of an IT system that provides advanced management functionality to the ED. The system is composed of three major layers: data collection, analytics, and the user interface. The data collection layer integrates the IT systems that already exist in the ED and newly introduced systems such as sensor-based patient tracking. The analytics component combines methods and algorithms that turn the data into valuable knowledge. An advanced user interface serves as a tool to help make intelligent decisions based on that knowledge. We also describe several scenarios that demonstrate the use and impact of such a system on ED management. Such a system can be implemented in gradual stages, enabling incremental and ongoing improvements in managing the ED care processes. The multi-disciplinary vision presented here is based on the authors’ extensive experience and their collective records of accomplishment in emergency departments, business optimization, and the development of IT systems.
Outsourcing the IT support of an enterprise requires that third-level IT support is provided as a service by the outsourcer. Although there is a large body of existing work regarding demand forecasting and shift schedule creation for various domains such as call centers, very little work exists for third-level IT support. Moreover, there is a significant difference between such support and other types of services.
IEEE Transactions on Knowledge and Data Engineering, 2012
There is a growing need for systems that react automatically to events. While some events are generated externally and deliver data across distributed systems, others need to be derived by the system itself based on available information. Event derivation is hampered by uncertainty attributed to causes such as unreliable data sources or the inability to determine with certainty whether an event has actually occurred, given available information. Two main challenges exist when designing a solution for event derivation under uncertainty. First, event derivation should scale under heavy loads of incoming events. Second, the associated probabilities must be correctly captured and represented. We present a solution to both problems by introducing a novel generic and formal mechanism and framework for managing event derivation under uncertainty. We also provide empirical evidence demonstrating the scalability and accuracy of our approach.
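A toy sketch of event derivation under uncertainty, far simpler than the formal mechanism the abstract describes: assuming the input events are independent and the derivation rule fires only when all inputs actually occurred, the derived event's probability is a simple product. All probabilities below are made up for illustration.

```python
def derive(prob_inputs, rule_confidence):
    """Probability that a derived event occurred, assuming independent input
    events and a rule that fires only when every input event occurred."""
    p = rule_confidence
    for pi in prob_inputs:
        p *= pi  # multiply in each uncertain input's occurrence probability
    return p

# Hypothetical scenario: two sensors each report an event with some certainty,
# and the derivation rule itself is trusted 95% of the time.
p_intrusion = derive([0.9, 0.8], rule_confidence=0.95)
print(f"P(derived intrusion event) = {p_intrusion:.3f}")  # 0.684
```

Real derivation frameworks must also handle correlated inputs, temporal patterns, and load, which is where the scalability challenge the abstract mentions comes in.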
The creation of IT simulation models for uses such as capacity planning and optimization is becoming more and more widespread. Traditionally, the creation of such models required deep modeling and/or programming expertise, thus severely limiting their extensive use. Moreover, many modern intelligent tools now require simulation models in order to carry out their function. For these tools to be widely deployable, the derivation of simulation models must be made possible without requiring excessive technical knowledge. Hence we introduce a general methodology that enables an almost automatic deployment of IT simulation models, based on three fundamental principles: modeling only at the required level of detail; modeling standard components using pre-prepared models; and automatically deriving the application-specific model details. The technical details underlying this approach are presented. In addition, a case study, showing the application of this methodology to an eCommerce site, demonstrates the applicability of this approach.
Component business modeling (CBM) serves as a powerful analytical framework for reasoning about the business as a set of business components that collaborate through the provision and consumption of business services. This paper proposes and illustrates a method to calculate the relative importance of the entities that make up a componentized enterprise architecture. The proposed method includes a formal definition
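The paper's formal definition is not reproduced in this abstract. Purely as a loose illustration of computing relative importance from provision/consumption relationships, here is a PageRank-style sketch over a hypothetical component graph; this is an assumed technique and component naming, not necessarily the paper's method:

```python
def importance(consumes, iters=50, damping=0.85):
    """Eigenvector-style importance over a service-consumption graph.

    consumes[a] lists the components whose services component a consumes;
    importance flows from consumers to the providers they depend on.
    """
    nodes = set(consumes) | {p for ps in consumes.values() for p in ps}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for consumer, providers in consumes.items():
            if providers:
                share = damping * rank[consumer] / len(providers)
                for p in providers:
                    new[p] += share
        # components that consume nothing redistribute their mass evenly
        dangling = damping * sum(rank[n] for n in nodes if not consumes.get(n))
        for n in nodes:
            new[n] += dangling / len(nodes)
        rank = new
    return rank

# Hypothetical componentized architecture: Sales consumes Billing and CRM, etc.
graph = {"Sales": ["Billing", "CRM"], "CRM": ["Billing"], "Billing": []}
r = importance(graph)
print(sorted(r, key=r.get, reverse=True))  # the most-consumed component ranks first
```

The intuition matches the abstract: a component whose services many others consume accumulates more importance than a pure consumer.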
Queueing Systems - Theory and Applications, 2009
We consider a multi-server queue with K priority classes. In this system, customers of the P highest priorities (P < K) can preempt customers with lower priorities, ejecting them from service and sending them back into the queue. Service times are assumed exponential with the same mean for all classes. The Laplace–Stieltjes transforms of waiting times are calculated explicitly and the Laplace–Stieltjes transforms of sojourn times are provided in an implicit form via a system of functional equations. In both cases, moments of any order can be easily calculated. Specifically, we provide formulae for the steady-state means and the second moments of waiting times for all priority classes. We also study some approximations of sojourn-time distributions via their moments. In a practical part of our paper, we discuss the use of mixed priorities for different types of Service Level Agreements, including an example based on a real scheduling problem of IT support teams.
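The multi-server transforms derived in the paper are not reproduced here. For intuition about how priority levels separate mean waits, a sketch of the classical single-server non-preemptive analogue (Cobham's formula), with illustrative arrival rates:

```python
def cobham_waits(lams, mu):
    """Mean waiting times by class for a non-preemptive M/M/1 priority queue
    (Cobham's classical formula): class 0 is the highest priority, and all
    classes share the same exponential service rate mu."""
    rho = [l / mu for l in lams]       # per-class offered loads
    w0 = sum(rho) / mu                 # mean residual work seen on arrival
    waits, sigma = [], 0.0             # sigma accumulates higher-class load
    for r in rho:
        waits.append(w0 / ((1 - sigma) * (1 - sigma - r)))
        sigma += r
    return waits

# Illustrative rates: three classes, total utilization 0.7.
w = cobham_waits([0.2, 0.3, 0.2], mu=1.0)
print([round(x, 3) for x in w])  # higher-priority classes wait less
```

Even at moderate load, the lowest class waits several times longer than the highest, which is why the paper's mixed-priority designs matter for differentiated Service Level Agreements.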
International Journal of Services Operations and Informatics, 2008
IT support can be divided into first-level, second-level and third-level support. Although there is a large body of existing work regarding demand forecasting and shift schedule creation for various domains such as call centres, very little work exists for second- and third-level IT support. Moreover, there is a significant difference between such support and other types of services. As a result, current best practices for scheduling such work are not based on demand, but rather on primitive rules of thumb. Due to the increasing number of people providing such support, theory and practice are sorely needed for scheduling second- and third-level support shifts according to actual demand. In this work, we present an end-to-end methodology for forecasting and scheduling this type of work. We also present a case study in which this methodology demonstrated significant potential savings in terms of manpower resources. ('Creating operational shift schedules for third-level IT support: challenges, models and case study', Int. J. Services Operations and Informatics, Vol. 3, Nos. 3/4.)
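The abstract does not detail its scheduling models. As a generic sketch of demand-driven staffing (not the paper's methodology), the classical square-root safety-staffing rule turns a forecast offered load into a headcount; the ticket rates and handling time below are hypothetical:

```python
import math

def sqrt_staffing(offered_load, beta=1.0):
    """Square-root safety staffing: s = R + beta * sqrt(R), rounded up.

    R is the offered load (arrival rate times mean handling time) and beta
    trades off staffing cost against delay; beta = 1.0 is an arbitrary choice.
    """
    return math.ceil(offered_load + beta * math.sqrt(offered_load))

# Hypothetical forecasts: tickets/hour and mean handling time in hours.
for lam, es in [(10, 0.5), (40, 0.5)]:
    r = lam * es
    print(f"offered load {r:4.1f} -> staff {sqrt_staffing(r)}")
```

The rule captures the economies of scale the paper's case study exploits: quadrupling demand far less than quadruples the safety margin above the offered load.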