Papers by Sandra Vilas Boas Jardim
SN Computer Science, Dec 1, 2023
Data science techniques have increased in popularity over the last decades due to their numerous applications when handling complex data, but also due to their high precision. In particular, Machine Learning (ML) and Deep Learning (DL) systems have been explored in many unique applications, owing to their high precision, flexible customization, and strong adaptability. Our research focuses on a previously described image detection system and analyses the application of a user feedback system to improve the accuracy of the comparison formula. Due to the non-traditional requirements of our system, we intended to assess the performance of multiple AI techniques and find the most suitable model to analyze our data and implement possible improvements. The study focuses on a set of test data, using the test results collected for one particular image cluster. We researched some of the previous solutions on similar topics and compared multiple ML methods to find the most suitable model for our results. Artificial neural networks and binary decision trees were among the better-performing models tested. Reinforcement and Deep Learning methods could be the focus of future studies, once more varied data are collected, with greater comparison weight diversity.
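The abstract mentions a user feedback system that improves the accuracy of a comparison formula. As a minimal sketch of that general idea (the function names, feature set, and perceptron-style update rule below are illustrative assumptions, not the paper's actual formula), feedback on whether two images match can be used to nudge the weights of a weighted similarity score:

```python
# Toy sketch (hypothetical, not the paper's actual method): adjust the
# feature weights of an image-comparison score from binary user feedback,
# using a simple perceptron-style update.

def similarity(weights, features):
    """Weighted sum of per-feature similarity scores in [0, 1]."""
    return sum(w * f for w, f in zip(weights, features))

def update_weights(weights, features, user_says_match, lr=0.1, threshold=0.5):
    """Nudge weights toward agreeing with the user's yes/no feedback."""
    predicted_match = similarity(weights, features) >= threshold
    if predicted_match == user_says_match:
        return weights                      # no correction needed
    sign = 1.0 if user_says_match else -1.0
    return [w + sign * lr * f for w, f in zip(weights, features)]

# Example: the system under-weights colour similarity; feedback corrects it.
weights = [0.2, 0.2, 0.2]                   # colour, shape, texture
features = [0.9, 0.1, 0.2]                  # a pair users consider a match
for _ in range(10):
    weights = update_weights(weights, features, user_says_match=True)
```

After a few corrective updates, the score for the user-confirmed pair crosses the match threshold and the weights stop changing.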
Sentiment analysis, also known as opinion mining, has become ubiquitous in our society, with applications in online searching, computer vision, image understanding, artificial intelligence and marketing communications. This paper describes an unsupervised, automatic, multilingual lexicon-based sentiment analysis algorithm with a dictionary-based approach. The developed algorithm was tested in the sentiment analysis of users' publications on a digital tourism platform. The results obtained demonstrate the efficiency of the solution, which presents high accuracy in the classification of publications in four different languages.
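The core of a dictionary-based lexicon approach like the one the abstract describes can be sketched in a few lines. This is a generic illustration under stated assumptions (the tiny lexicons and the sum-of-polarities rule are placeholders, not the paper's actual resources or scoring function):

```python
# Minimal sketch of a dictionary-based, multilingual lexicon approach.
# Lexicon entries and the scoring rule are illustrative assumptions.

LEXICONS = {
    "en": {"great": 1, "good": 1, "bad": -1, "terrible": -2},
    "pt": {"ótimo": 1, "bom": 1, "mau": -1, "péssimo": -2},
}

def classify(text, lang):
    """Sum the lexicon polarities of the tokens; the sign gives the label."""
    lexicon = LEXICONS.get(lang, {})
    score = sum(lexicon.get(tok, 0) for tok in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("The hotel was great and the food was good", "en"))  # positive
print(classify("serviço péssimo", "pt"))                            # negative
```

Extending to more languages then amounts to adding per-language dictionaries, which is what makes the approach unsupervised and easy to port across a multilingual platform.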
Procedia Computer Science, 2022
Procedia Computer Science, 2023
2021 16th Iberian Conference on Information Systems and Technologies (CISTI), 2021
A crucial aspect of game development is the ability to establish player immersion in a simulated, virtual world, presenting engaging content and maximizing the gaming experience. The demand for new, hand-customized content keeps increasing, in playable maps that are ever expanding in size, without compromising quality or differentiation. The usage of Procedural Content Generation (PCG) allows computers to generate game content and produce distinguishable, unique instances amongst the ones generated, allowing to better replicate reality and deliver intricate systems without the costs and time consumption of human intervention. As a key element for the appeal of many games, natural life exhibits both complex and distinguishable behaviors, from species to species as well as from individual to individual. Biodiversity is maintained by processes operating over broad spatial and temporal scales, and as such, simulating plant diversity is a process well suited to PCG algorithms. In this study, we describe a procedural flora evolution algorithm that can replicate a flora species' life cycle for a practical 3D game setting (Universe 51), using scientific knowledge to better reproduce such intricate behaviors. We based part of our approach on complex intelligence algorithms, as well as on some original processes targeted at our project's specificities. The evolution algorithm starts with the end results of a flora generation algorithm that can create millions of different flora species based on selected biological parameters. These data are then interpreted by the evolutionary algorithm, which uses these values to determine each species' survival chances in the surrounding environment. Our results were able to replicate and autonomously manage plant species' life cycles, under a dynamic environment with potentially varying conditions.
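One evolution step of the kind described, where generated biological parameters determine each species' survival chances in the surrounding environment, could be sketched as follows. The parameter names, scoring formula, and environment model are assumptions for illustration, not the algorithm actually used in Universe 51:

```python
# Illustrative sketch of one evolution step: each species' survival chance
# is scored against the local environment, and species persist with that
# probability. Parameter names and the scoring formula are assumptions.

import random

def survival_chance(species, env):
    """Closeness of the species' preferred conditions to the environment."""
    t = 1 - abs(species["ideal_temp"] - env["temp"]) / 50
    h = 1 - abs(species["ideal_humidity"] - env["humidity"])
    return max(0.0, min(1.0, 0.5 * t + 0.5 * h))

def evolve_step(population, env, rng):
    """Keep each species with probability equal to its survival chance."""
    return [s for s in population if rng.random() < survival_chance(s, env)]

rng = random.Random(51)
population = [
    {"name": "fern",   "ideal_temp": 18, "ideal_humidity": 0.8},
    {"name": "cactus", "ideal_temp": 35, "ideal_humidity": 0.1},
]
env = {"temp": 20, "humidity": 0.75}
survivors = evolve_step(population, env, rng)
```

Running the step repeatedly while varying `env` gives the dynamic-environment behaviour the abstract mentions: species drift in and out of viability as conditions change.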
2018 13th Iberian Conference on Information Systems and Technologies (CISTI), 2018
“Cuscar” is a Portuguese word meaning to gossip. So, when we talk about Cuscarias, we can link the idea to a gossip spot, or a place to look for information and create knowledge. The system concept is based on technology to support citizens' co-creation, providing information about a certain place (a Cuscaria), whether it is a tourist spot or a coffee shop where people come together. Anyone can create a Cuscaria but needs to contact the platform owners to mark the place with a beacon and a theme. To access the information available for each Cuscaria, and to share experiences and emotions related to it, it is necessary to travel to the place itself and access its physical identifier (beacon). Hence, anyone is able to find other people with the same interests, and at the end of the day this system and concept effectively promote a place of acquaintance where it is possible to find an interesting spot to socialize. It is also possible to apply analytics to the platform to check trends and to support local strategies.
Journal of Imaging
Graphical Search Engines are conceptually used in many development areas surrounding information retrieval systems that aim to provide a visual representation of results, typically associated with retrieving images relevant to one or more input images. Since the 1990s, efforts have been made to improve the result quality, be it through improved processing speeds or more efficient graphical processing techniques that generate accurate representations of images for comparison. While many systems achieve timely results by combining high-level features, they still struggle when dealing with large datasets and abstract images. Image datasets regarding industrial property are an example of a hurdle for typical image retrieval systems, where the dimensions and characteristics of images make adequate comparison a difficult task. In this paper, we introduce an image retrieval system based on a multi-phase implementation of different deep learning and image processing techniques, designed to ...
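The retrieval idea the abstract builds on (embed each image as a feature vector, then rank the dataset by similarity to the query) can be sketched generically. The toy vectors below stand in for deep features; the cosine ranking is a common choice, not necessarily the paper's multi-phase pipeline:

```python
# Sketch of the retrieval idea only: images are represented as feature
# vectors (hard-coded toys standing in for deep-learning features) and
# the dataset is ranked by cosine similarity to the query.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, index, top_k=2):
    """Return the top_k dataset ids most similar to the query."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [img_id for img_id, _ in ranked[:top_k]]

index = {"logo_a": [0.9, 0.1, 0.0],
         "logo_b": [0.1, 0.9, 0.2],
         "logo_c": [0.8, 0.2, 0.1]}
print(retrieve([1.0, 0.1, 0.0], index))   # logo_a and logo_c rank highest
```

In a real system the index holds vectors produced by a trained network, and approximate nearest-neighbour structures replace the linear scan at scale.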
Journal of Imaging
With a wide range of applications, image segmentation is a complex and difficult preprocessing step that plays an important role in automatic visual systems, whose accuracy impacts not only segmentation results but directly affects the effectiveness of the follow-up tasks. Despite the many advances achieved in the last decades, image segmentation remains a challenging problem, particularly the segmentation of color images, due to the diverse inhomogeneities of color, textures and shapes present in the descriptive features of the images. In trademark graphic image segmentation, beyond these difficulties, we must also take into account the high noise and low resolution which are often present. Trademark graphic images can also be very heterogeneous with regard to the elements that make them up, which can be overlapping and under varying lighting conditions. Due to the immense variation encountered in corporate logos and trademark graphic images, it is often difficult to select a s...
In this scientific article we explain what was planned and developed for the CityHack, an event which aims to promote the design and implementation of solutions for cities. Among several methodologies and processes for monitoring projects, the PMBOK was chosen because it was considered the most appropriate. This work presents an analysis of the PMBOK in order to evaluate its effectiveness and utility in the realization of the CityHack event, with the intent of knowing whether the PMBOK really was the right choice and whether it can be applied to and support other types of events. So far this guide to good practices has been used, and it can be concluded that it helps effectively in the planning of the activities related to the event. More significant conclusions are expected after the event. In the future we intend to continue to follow the planning, which is very important for CityHack.
Procedia Technology, 2013
One of the main advantages of using computational systems in health care comes from their ability to provide useful information for decision making to health professionals. Thus, their main purpose is to increase the quality and efficiency of healthcare delivery. In order to achieve these purposes, Health Information Systems must fulfill interoperability standards and meet quality, security, scalability, reliability and timeliness requirements in terms of data storage and processing. One of the main existing problems in this area is the fact that informatics applications do not share information, or share it at a very low level. When communication between different Health Information Systems exists, it is mainly achieved through proprietary integration solutions. This paper surveys the main advantages of Electronic Health Records and proposes some general guidelines for building them and promoting the integration of different information resources.
Ultrasound in Medicine & Biology, 2005
This paper describes a new method for segmentation of fetal anatomic structures from echographic images. More specifically, we estimate the contours of the femur and of cranial cross-sections of fetal bodies, which can thus be automatically measured. Contour estimation is formulated as a statistical estimation problem, where both the contour and the observation model parameters are unknown. The observation model (or likelihood function) relates, in probabilistic terms, the observed image with the underlying contour. This likelihood function is derived from a region-based statistical image model. The contour and the observation model parameters are estimated according to the maximum likelihood (ML) criterion, via deterministic iterative algorithms. Experiments reported in the paper, using synthetic and real images, attest to the adequacy and good performance of the proposed approach.
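A drastically simplified 1-D analogue can illustrate the region-based maximum likelihood idea: a signal with two regions of different means, where the boundary is the split maximising the Gaussian log-likelihood, equivalently minimising the summed within-region squared error. This toy stands in for the paper's 2-D parametric contour model, which it does not attempt to reproduce:

```python
# Toy 1-D analogue of region-based ML boundary estimation: the signal has
# two regions with different means, and the boundary is the split that
# minimises total within-region squared error (the ML estimate under a
# two-region Gaussian model with equal variances).

def best_boundary(signal):
    """Return the split index minimising total within-region squared error."""
    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    n = len(signal)
    return min(range(1, n), key=lambda k: sse(signal[:k]) + sse(signal[k:]))

# Noisy step signal: region means 1.0 then 5.0, true boundary at index 5.
signal = [1.1, 0.9, 1.0, 1.2, 0.8, 5.1, 4.9, 5.2, 5.0, 4.8]
print(best_boundary(signal))   # 5
```

In the paper's setting the "split" is a parametric 2-D contour and the region statistics are unknown, so contour and model parameters are estimated jointly by an iterative algorithm rather than by exhaustive search.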
This paper describes a new method for the automatic extraction and measurement of fetal anatomic structures from echographic images. More specifically, we estimate and measure the contours of the femur and of cranial cross-sections of fetal bodies. Contour estimation is formulated as a statistical estimation problem, where both the contour and the observation model parameters are unknown. The observation model (or likelihood function) relates, in probabilistic terms, the observed image with the underlying contour. This likelihood function is derived from a region-based image model. The contour and the parameters are estimated according to the maximum likelihood (ML) criterion, via unsupervised deterministic iterative algorithms. Experiments reported in the paper, using synthetic and real images, attest to the adequacy and good performance of the proposed approach.
Theory and Applications of Mathematics Computer Science, Apr 1, 2015
Many problems in signal processing and statistical inference are based on finding a sparse solution to an underdetermined linear system. The reference approach to this problem of finding sparse signal representations on overcomplete dictionaries leads to convex unconstrained optimization problems with a quadratic ℓ2 term, for the adjustment to the observed signal, and an ℓ1-norm penalty on the coefficient vector. This work focuses on the development and experimental analysis of an algorithm for the solution of ℓq-ℓp optimization problems, where p ∈ ]0, 1] and q ∈ [1, 2], of which ℓ2-ℓ1 is an instance. The developed algorithm belongs to the majorization-minimization class, where the solution of the problem is given by the minimization of a progression of majorizers of the original function. Each iteration corresponds to the solution of an ℓ2-ℓ1 problem, solved by the projected gradient algorithm. When tested on synthetic data and image reconstruction problems, the results show good performance of the implemented algorithm, both in compressed sensing and signal restoration scenarios.
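The ℓ2-ℓ1 instance mentioned above can be sketched with the classic iterative shrinkage/thresholding (ISTA) scheme: a gradient step on the quadratic term followed by soft-thresholding for the ℓ1 term. This is a generic illustration of the problem class, not the paper's ℓq-ℓp majorization-minimization algorithm itself:

```python
# ISTA sketch for min 0.5*||Ax - y||^2 + lam*||x||_1 on a small dense A:
# gradient step on the quadratic term, then soft-thresholding (the
# proximal operator of the l1 norm).

def soft_threshold(x, t):
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(A, y, lam, step, iters=500):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
                    for i in range(len(A))]
        grad = [sum(A[i][j] * residual[i] for i in range(len(A)))
                for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x

# Overcomplete toy dictionary (2 observations, 3 atoms): y is generated
# by the first atom only, so the recovered vector should be sparse.
A = [[1.0, 0.5, 0.3],
     [0.0, 0.5, 0.9]]
y = [2.0, 0.0]        # = 2 * first column of A
x = ista(A, y, lam=0.1, step=0.5)
```

For this instance the minimiser is approximately [1.9, 0, 0] (the true coefficient 2.0 shrunk by lam, with the other atoms zeroed), which is the sparsity-promoting behaviour the abstract describes.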
Encyclopedia of E-Health and Telemedicine, 2000
This paper describes a new method for automatic estimation of the contours of the femur and of the cranial cross-section in fetal ultrasound images. Our approach can be described as a region-based maximum likelihood formulation of parametric deformable contours. This formulation provides robustness against poor image quality and allows simultaneous estimation of the contour parameters together with other parameters of the model. Implementation is carried out by a deterministic iterative algorithm with minimal user intervention. Experimental results attest to the very good performance of the approach.