Papers by Simone Diniz Junqueira Barbosa
The Fourth Industrial Revolution (4IR) is characterized by a fusion of technologies, which is blurring the lines between the physical, digital, and biological spheres. In this context, two fundamental characteristics emerge: transparency and privacy. On the one hand, transparency can be seen as the quality that allows participants of a community to know which particular processes are being applied, by which agents, and on which data items. It is generally regarded as a means to enable checks and balances within this community, so as to provide a basis for trust among its participants. Privacy, on the other hand, essentially refers to the right of individuals to control how information about them is used by others. The issue of public transparency versus individual privacy has long been discussed, and within already existing 4IR scenarios it has become clear that the free flow of information fostered by transparency efforts poses serious conflicts with privacy assurance. In order to deal with the myriad of often conflicting cross-cutting concerns, Internet applications and systems must incorporate adequate mechanisms to ensure compliance with both ethical and legal principles. In this paper, we use the OurPrivacy Framework as a conceptual framework to precisely characterize where in the design process decisions must be made to handle both transparency and privacy concerns.
Online help systems are typically used (if at all) as a last resort in interactive breakdown situations. In this paper, we present a semiotic engineering method for building online help that uses fairly well-known design models. We discuss the benefits of having designers explicitly communicate their design vision to users and the need or opportunity to foster new cultural attitudes towards online help. We show how, as a direct communication channel from designers to users, it opens new possibilities for interaction that can be ...
Interacting with Computers, Apr 1, 2001
End user programming (EUP) environments are difficult to evaluate empirically. Most users do not engage in programming, and those who do are often discouraged by the complexity of programming tasks. Often the difficulties arise from the programming languages in which users are expected to express themselves. But there are other difficulties associated with designing extensions and adjustments to artifacts that have been originally designed by others. This paper characterizes EUP as a semiotic design process, and presents two principles that can be used to illustrate the distinctions between the various kinds of techniques and approaches proposed in this field. The principles support a preliminary theoretical model of EUP and should thus facilitate the definition and interpretation of empirical evaluation studies. They also define some specific semiotic qualifications that more usable and applicable EUP languages could be expected to have.
Knowledge Based Systems, Dec 1, 2001
Designing software involves good perception, good reasoning, and a talent to express oneself effectively through programming and interactive languages. Semiotic theories can help HCI designers increase their power to perceive, reason and communicate. By presenting some of the results we have reached with semiotic engineering over the last few years, we suggest that the main contributions of semiotic theory in supporting HCI design are: to provide designers with new perceptions on the process and product of HCI design; to bind together all the stages of software development and use, giving them a unique homogeneous treatment; and to pose innovative questions that extend the frontiers of HCI investigations.
Enterprise Information Systems, 2019
Dealing with average-sized event logs is a challenging task in process mining, whose purpose is to extract value from event log data created by a wide variety of systems. An event log consists of a sequence of events for every case that was handled by the system. Discovery algorithms proposed in the literature work well in specific cases, but they usually fail in generic ones. Furthermore, there is no evidence that existing strategies can handle logs with a large number of variants. We lack a generic approach that allows experts to explore event log data and decompose the information into a series of smaller problems, to identify not only outliers but also relations between the analyzed cases. In this chapter we propose a visual approach for filtering processes based on a low-dimensionality representation of cases, a dissimilarity function based on both case attributes and case paths, and the use of entropy and silhouette to evaluate the uncertainty and quality, respectively, of each subset of cases. For each subset of cases, it is possible to reconstruct and evaluate each process model. These contributions can be combined in an interactive tool to support process discovery. To demonstrate our tool, we use the event log from the BPI Challenge 2017.
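As a rough illustration of the two measures mentioned above, the minimal Python sketch below computes a silhouette score for a partition of cases embedded in 2-D and the Shannon entropy of path variants within a subset. The embedding, labels, and paths are invented; the paper's actual dissimilarity function combines case attributes and case paths and is not reproduced here.

```python
# Minimal sketch: scoring subsets of cases by silhouette (quality)
# and by entropy over path variants (uncertainty). All data below is
# illustrative, not from the BPI Challenge 2017 log.
from collections import Counter
import math

import numpy as np
from sklearn.metrics import silhouette_score


def variant_entropy(case_paths):
    """Shannon entropy (bits) of the distribution of path variants."""
    counts = Counter(tuple(p) for p in case_paths)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# Hypothetical 2-D embedding of cases (e.g., from a projection of the
# dissimilarity matrix) and labels assigning each case to a subset.
embedding = np.array([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.75]])
labels = np.array([0, 0, 1, 1])

quality = silhouette_score(embedding, labels)  # higher = better-separated subsets
paths = [("A", "B", "C"), ("A", "B", "C"), ("A", "C"), ("A", "B")]
uncertainty = variant_entropy(paths)           # higher = more heterogeneous subset
print(f"silhouette={quality:.2f}, entropy={uncertainty:.2f} bits")
```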
Product-Focused Software Process Improvement, 2020
2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2020
Petrobras is Brazil’s largest publicly-held company, operating in the oil, natural gas, and energy industry. Internal efforts enabled Petrobras to identify Digital Transformation (DT) opportunities to further promote their operational excellence. Addressing these opportunities typically involves Research and Development (R&D) uncertainties that could lead traditional R&D cooperation terms to be negotiated over years, yet there are time-to-market constraints for fast-paced deliveries to experiment with solution options. With this in mind, they partnered with PUC-Rio to establish a new DT initiative. [Goal] The goal of this paper is to present the Lean R&D approach, tailored within the new initiative to meet the aforementioned DT needs. [Method] We designed Lean R&D by integrating the following building blocks: (i) Lean Inceptions, to allow stakeholders to jointly outline a Minimal Viable Product (MVP); (ii) parallel technical feasibility assessment and conception phases, allowing the team to ‘fail fast’; (iii) scrum-based development management; and (iv) strategically aligned continuous experimentation to test business hypotheses. We report on the first experiences of applying Lean R&D in practice. [Results] Lean R&D enabled addressing research-related uncertainties early and delivering valuable MVPs efficiently within fast-paced four-month cycles. [Conclusions] In our first experiences, Lean R&D proved suitable for supporting DT initiatives. However, more formal case studies are needed. The business strategy alignment and the continuous support of a highly qualified research team were considered key success factors.
Information Systems, 2020
The increasing amount of valuable, unstructured textual information makes extracting value from those texts a major challenge. We need to use NLP (Natural Language Processing) techniques, most of which rely on manually annotating a large corpus of text for their development and evaluation. Creating a large annotated corpus is laborious and requires suitable computational support. There are many annotation tools available, but their main weaknesses are the absence of data management features for quality control and the need for a commercial license. As the quality of the data used to train an NLP model directly affects the quality of the results, quality control of the annotations is essential. In this paper, we introduce ERAS, a novel web-based text annotation tool developed to facilitate and manage the process of text annotation. ERAS includes not only the key features of current mainstream annotation systems but also other features necessary to improve the curation process, such as inter-annotator agreement, self-agreement, and annotation log visualization, for annotation quality control. ERAS also implements a series of features to improve the customization of the user’s annotation workflow, such as random document selection, re-annotation stages, and warm-up annotations. We conducted two empirical studies to evaluate the tool’s support for text annotation, and the results suggest that the tool not only meets the basic needs of the annotation task but also has some important advantages over the other tools evaluated in the studies. ERAS is freely available at https://github.com/grosmanjs/eras.
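To make the agreement idea concrete, here is a short sketch of one standard inter-annotator agreement measure, Cohen's kappa, computed with scikit-learn; the label sequences are invented, and ERAS's actual computation may differ.

```python
# Illustrative inter-annotator agreement via Cohen's kappa, one of
# the standard measures behind quality-control features like those
# ERAS provides. The annotations below are made up.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["PER", "O", "ORG", "O", "PER", "O"]
annotator_b = ["PER", "O", "ORG", "PER", "PER", "O"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```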
Computers & Graphics, 2019
Inspecting the outputs of classification algorithms is becoming progressively difficult due to the increasing scale and complexity of both the data and the algorithms. This has led to research efforts to develop new techniques to interpret the behavior of these algorithms and to facilitate the understanding of their results. A common classification approach is the “ensemble of classifiers”, where a set of classifiers c ∈ C is trained on the input data set and the final classification is computed by “voting”, i.e., ranking their results. One of the issues with this approach, however, is that instead of having only one classifier to analyze, there are now |C|, each with its own characteristics. Thus, there is a demand for methods that provide insights into the results of an ensemble of classifiers and at the same time allow a detailed analysis of each classifier in the ensemble. Our work draws on dimensionality reduction techniques to provide visual tools to interpret the results of an ensemble of classifiers, while also giving insights into how each classifier contributes to the final results. Our approach also presents a measure of classification uncertainty by highlighting regions where there is divergence among the classifiers in the ensemble, allowing analysts to focus their analysis on these regions. We tested our approach using the Digits MNIST and Fashion MNIST data sets. Through visualizations that range from overview maps of classifier behavior to instance-based views, we show how our approach can assist in interpreting why a specific decision (classification) was made.
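As a minimal sketch of the voting-and-uncertainty idea (not the paper's implementation), the code below trains a small ensemble on the scikit-learn digits data, takes a majority vote per instance, and uses vote entropy as a simple divergence measure; the classifier choices are illustrative.

```python
# Sketch: majority voting over an ensemble and per-instance vote
# entropy as a divergence measure, akin to the classification
# uncertainty the paper visualizes. Classifier choices are arbitrary.
import numpy as np
from scipy.stats import entropy
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = [
    KNeighborsClassifier(),
    RandomForestClassifier(random_state=0),
    LogisticRegression(max_iter=2000),
]
votes = np.stack([clf.fit(X_tr, y_tr).predict(X_te) for clf in ensemble])

# Per-instance distribution of votes over the 10 digit classes.
counts = np.apply_along_axis(lambda v: np.bincount(v, minlength=10), 0, votes)
prediction = counts.argmax(axis=0)             # majority vote
uncertainty = entropy(counts, base=2, axis=0)  # 0 = unanimous ensemble

print("most uncertain instances:", np.argsort(uncertainty)[-5:])
```

Instances with high vote entropy are exactly the ones a visual tool would highlight for closer, instance-level inspection.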
Artificial Intelligence and Law, 2019
Appellate Court Modifications Extraction consists of, given an Appellate Court decision, identifying the modifications proposed by the upper Court to the lower Court judge's decision. In this work, we propose a system to extract Appellate Court Modifications for Portuguese. Information extraction for legal texts has been previously addressed using different techniques and for several languages. Our proposal differs from previous work in two ways: (1) our corpus is composed of Brazilian Appellate Court decisions, in which we look for a set of modifications provided by the Court; and (2) to automatically extract the modifications, we use a traditional Machine Learning approach and a Deep Learning approach, both as alternative solutions and as a combined solution. We tackle the Appellate Court Modifications Extraction task, experimenting with a wide variety of methods. In order to train and evaluate the system, we have built the KauaneJunior corpus, using public data disclosed by the Appellate State Court of Rio de Janeiro jurisprudence database. Our best method, a Bidirectional Long Short-Term Memory network combined with Conditional Random Fields, obtained an F1 score of 94.79%.
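For context, entity-level F1 for sequence-labeling extractors of this kind is commonly computed with the seqeval library, as in the hypothetical sketch below; the "MOD" BIO tags are invented for illustration and are not the corpus's actual label set.

```python
# Illustrative entity-level F1 evaluation of a sequence labeler,
# using the widely used seqeval library. The gold and predicted BIO
# tag sequences below are invented, not taken from KauaneJunior.
from seqeval.metrics import classification_report, f1_score

gold = [["O", "B-MOD", "I-MOD", "O", "O"],
        ["B-MOD", "I-MOD", "O", "O", "O"]]
pred = [["O", "B-MOD", "I-MOD", "O", "O"],
        ["B-MOD", "O", "O", "O", "O"]]

print(f"F1: {f1_score(gold, pred):.4f}")
print(classification_report(gold, pred))
```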
Lecture Notes in Computer Science, 2013
HCI Education in Brazil has come a long way. In 1999, the Brazilian Computer Society (SBC) included HCI in its reference curriculum for Computing courses. Since then, the community has discussed the perspectives of the area in our country. From 2010 to this day, we have held a series of workshops on HCI Education, called WEIHC, as a permanent discussion forum within the Brazilian HCI conference, IHC. We report here the results of the WEIHC discussions and of two surveys, conducted in 2009 and 2012, to help us assess the status of HCI Education in Brazil. Despite the advances of the Brazilian HCI community, our surveys show that we still face some important challenges. We should curate existing teaching material to further enhance collaboration among professors, to increase the quality of our courses, and to broaden HCI awareness across all related departments.
Lecture Notes in Computer Science, 2005
This paper discusses the role of an enhanced extended lexicon as a shared communicative artifact during software design. We describe how it may act as an interlingua that captures the shared understanding of both stakeholders and designers. We argue for the need to address communicative concerns among design team members, as well as from designers to users through the user interface. We thus extend an existing lexicon language (LEL) to include communication-oriented concerns that user interface designers need to take into account when representing their solution to end users. We propose that the enhanced LEL may be used as a valuable resource in model-based design, in modeling the help system, and in engineering the user interface elements and widgets.
Computers in Entertainment, 2015
The method proposed here to determine, in a simplified but still plausible way, the behavior of the characters participating in a story is based on rules that associate a given situation with a list of different goals. In view of the rules whose situation holds at the current state, each character engages in a decision-making process in three steps: goal selection, plan selection, and commitment. The selection criteria reflect individual preferences originating, respectively, from drives, attitudes, and emotions. Four kinds of inter-character relations are considered, which may lead to goal and plan interferences. A prototype logic-programming tool was developed to run experiments.
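A minimal sketch of this three-step decision loop, re-expressed in Python (the original prototype used logic programming), with invented rules, plans, and preference scores:

```python
# Sketch of the goal-selection / plan-selection / commitment loop
# described above. The rule base, plan library, and preference
# scores are all illustrative, not the paper's actual knowledge base.
RULES = {  # situation -> candidate goals
    "threatened": ["flee", "fight"],
    "hungry": ["find_food"],
}
PLANS = {  # goal -> candidate plans (sequences of actions)
    "flee": [["run_home"], ["hide"]],
    "fight": [["draw_sword", "attack"]],
    "find_food": [["hunt"], ["beg"]],
}


def decide(character, situation, goal_pref, plan_pref):
    goals = RULES.get(situation, [])
    if not goals:
        return None
    # Step 1: goal selection, driven by the character's drives.
    goal = max(goals, key=lambda g: goal_pref.get(g, 0))
    # Step 2: plan selection, driven by attitudes (and emotions).
    plan = max(PLANS[goal], key=lambda p: plan_pref.get(tuple(p), 0))
    # Step 3: commitment: the character adopts the plan as an intention.
    return {"character": character, "goal": goal, "plan": plan}


print(decide("knight", "threatened",
             goal_pref={"fight": 2, "flee": 1},
             plan_pref={("draw_sword", "attack"): 1}))
```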
CHI '14 Extended Abstracts on Human Factors in Computing Systems, 2014
The objective of this course is to provide newcomers to Human-Computer Interaction (HCI) with an introduction and overview of the field. Attendees often include practitioners without a formal education in HCI, and those teaching HCI for the first time. This course includes content on theory, cognition, design, evaluation, and user diversity.
Interactions, 2011
Public policy increasingly plays a role in influencing the work that we do as HCI researchers, interaction designers, and practitioners. "Public policy" is a broad term that includes both government policy and policy within non-governmental organizations, such as standards bodies. The Interacting with Public Policy forum focuses on topics at the intersection of human-computer interaction and public policy.
Lecture Notes in Computer Science, 2009
Sketches are often used during user interface design and evaluation as both a design support tool and a communication tool. Despite recent efforts, computational support to user interface sketching has not yet reached its full potential. This paper reports a study comparing two evaluation techniques: paper prototyping and a simulation-based evaluation supported by the UISKEI tool.
2014 Brazilian Symposium on Software Engineering, 2014
Even though exception handling mechanisms have been proposed as a means to improve software robustness, empirical evidence suggests that exception handling code is still poorly implemented in industrial systems. Moreover, it is often claimed that the poor quality of exception handling code can be a source of faults in a software system. However, there is still a gap in the literature in terms of better understanding exceptional faults, i.e., faults whose causes relate to exception handling. In particular, there is still little empirical knowledge about the specific causes of exceptional faults in software systems. In this paper we start to fill this gap by presenting a categorization of the causes of exceptional faults observed in two mainstream open source projects. We observed ten different categories of exceptional faults, most of which have never been reported before in the literature. Our results indicate that current verification and validation mechanisms for exception handling code still do not properly address these categories of exceptional faults.
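As a generic illustration of what an exceptional fault can look like (not claimed to be one of the paper's ten categories, which the abstract does not list), consider the commonly cited anti-pattern of silently swallowing an exception:

```python
# Anti-pattern often cited as a source of exceptional faults:
# catching a broad exception and discarding it, which masks the
# failure from callers. The function and path are hypothetical.
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        pass  # fault: error swallowed; caller gets None with no diagnosis
```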
Interacting with Computers, 2004
This paper proposes the use of an interaction modeling language called MoLIC to graphically represent scenarios as an additional resource in software development. ...