Papers by A. Fernando Ribeiro
2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
This paper describes the design and development of an autonomous robotic manipulator with four degrees of freedom. The manipulator is named RACHIE ("Robotic Arm for Collaboration with Humans in Industrial Environment"). The idea was to create a smaller version of the industrial manipulators available on the market. The mechanical and electronic components are presented, as well as the software algorithms implemented on the robot. The manipulator's primary goal is the detection and sorting of cans by color and defects. The robot can detect a human operator and deliver defective cans to him/her, collaborating in an industrial environment. To perform this task, the robot's vision system implements a machine learning algorithm, a Haar feature-based cascade classifier, to detect cans and humans. For the manipulator motion, the direct and inverse kinematics were derived and implemented, and their equations are described in this paper. The robot presents high reliability and robustness in the assigned task. It is low-cost, being a small version of commercial manipulators, which makes it well suited to smaller tasks.
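As a rough illustration of the detection step, the sketch below loads a Haar feature-based cascade classifier with OpenCV and runs it on a single camera frame. The classifier file name (cans_cascade.xml) and the detection parameters are assumptions for illustration only, not details taken from the paper.

```python
import cv2

# Load a trained Haar feature-based cascade classifier.
# "cans_cascade.xml" is a hypothetical file name; a real classifier
# would be trained on positive/negative images of the target cans.
cascade = cv2.CascadeClassifier("cans_cascade.xml")

# Grab one frame from the default camera.
capture = cv2.VideoCapture(0)
ok, frame = capture.read()
capture.release()

if ok:
    # Haar cascades operate on grayscale images.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors are tuning parameters, chosen here
    # only as plausible defaults.
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)
```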
2022 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2017
This paper describes the design and development of an autonomous robot for the Robot@Factory league at "Festival Nacional de Robótica 2016", held in Bragança, Portugal. It covers all the hardware and software components developed for the localization and operation of the robot according to the rules. The challenge consists of a table setup that recreates an industrial environment in which a robot has to transport boxes from an initial warehouse to a final warehouse. The destination of each box depends on its state, indicated by the box's LED color, and in some cases the robot has to leave a box temporarily at the so-called processing machines (intermediate stations). The most significant innovation of this robot prototype is the ability to carry up to three boxes simultaneously while being able to select which box to drop. The project was developed with great success, as the team reached 3rd place in the competition.
IEEE Robotics & Automation Magazine, 2022
Proceedings of the 14th International Conference on Agents and Artificial Intelligence, 2022
The authors would like to thank Mr. Abel, his wife and Mr. Sampaio for their important contributions to the success of this work. This work was supported by the Automation and Robotics Laboratory of the Algoritmi Research Center at the University of Minho in Guimarães. It is funded by FEDER through the Operational Competitiveness Programme (COMPETE) and by national funds through the Foundation for Science and Technology (FCT) in the scope of project FCOMP-01-0124-FEDER-022674.
2019 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2019
This paper presents an approach to autonomous mobile robot obstacle avoidance using Reinforcement Learning, more precisely Q-Learning. Reinforcement Learning in robotics has been a challenging topic for the past few years. The ability to equip a robot with a tool powerful enough to allow autonomous discovery of an optimal behavior through trial-and-error interactions with its environment has motivated numerous in-depth research projects. In this paper, two different Q-Learning approaches are presented, together with an extensive hyperparameter study. The algorithms were developed for a simplified simulation of the Bot'n Roll ONE A, and the simulated robot communicates with the control script via ROS. The robot must traverse three mazes of increasing complexity, similar to the ones presented at the RoboParty [1] educational event challenge. For both algorithms, an extensive hyperparameter search was carried out by running hundreds of simulations with different parameters. Both Q-Learning solutions develop distinct strategies to solve the three labyrinths, improving their learning ability, discovering different approaches to specific situations, and completing the task in complex environments.
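For reference, tabular Q-Learning in this kind of maze task typically follows the standard Bellman-style update. The sketch below is a minimal, hedged illustration with made-up state and action spaces and illustrative hyperparameters, not the exact formulation from the paper.

```python
import random
from collections import defaultdict

# Hypothetical discretization: a state could be a tuple of distance bins
# (left, front, right); actions are simple motion commands.
ACTIONS = ["forward", "turn_left", "turn_right"]

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # illustrative hyperparameters
q_table = defaultdict(float)            # Q[(state, action)] -> value

def choose_action(state):
    # Epsilon-greedy exploration.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-Learning update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (
        reward + gamma * best_next - q_table[(state, action)]
    )
```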
Robotics, 2021
The static stability of hexapods motivates their design for tasks in which stable locomotion is required, such as navigation across complex environments. This task is of high interest due to the possibility of replacing human beings in exploration, surveillance and rescue missions. For this application, the control system must adapt the actuation of the limbs according to their surroundings to ensure that the hexapod does not tumble during locomotion. The most traditional approach treats the limbs as robotic manipulators and relies on mechanical models to actuate them. However, the increasing interest in model-free methods for the control of these systems has led to the design of novel solutions. Through a systematic literature review, this paper provides an overview of the trends in this field of research and assesses how mature the design of autonomous and adaptable controllers for hexapods currently is.
Sensors (Basel, Switzerland), Jan 19, 2016
This paper presents a road surface scanning system based on a trichromatic line scan camera with light-emitting diode (LED) lighting, achieving road surface resolution under a millimeter. It was developed as part of the project Roadkills - Intelligent systems for surveying mortality of amphibians in Portuguese roads, sponsored by the Portuguese Science and Technology Foundation. A trailer was developed to accommodate the complete system, with standalone power generation, computer image capture and recording, controlled lighting for day or night operation without disturbance, an incremental encoder with 5000 pulses per revolution attached to one of the trailer wheels, Global Positioning System (GPS) localization with sub-meter accuracy, compatibility with any vehicle equipped with a trailer towing system, and a focus on a complete low-cost solution. The paper describes the system architecture of the developed prototype, its calibration procedure, the performed experimentation and some obtained results...
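As a back-of-the-envelope illustration of how an incremental encoder with 5000 pulses per revolution maps to longitudinal distance along the road, the sketch below assumes a hypothetical wheel diameter; the actual trailer wheel size and line-trigger scheme are not given here.

```python
import math

# Assumed wheel diameter (hypothetical value, for illustration only).
WHEEL_DIAMETER_M = 0.5
PULSES_PER_REVOLUTION = 5000  # from the encoder described in the paper

# Distance travelled per encoder pulse.
wheel_circumference_m = math.pi * WHEEL_DIAMETER_M
metres_per_pulse = wheel_circumference_m / PULSES_PER_REVOLUTION

def pulses_to_distance(pulse_count):
    """Convert an accumulated pulse count to distance along the road (m)."""
    return pulse_count * metres_per_pulse

# With a 0.5 m wheel, one pulse corresponds to roughly 0.31 mm of travel,
# i.e. sub-millimetre longitudinal sampling.
print(f"{metres_per_pulse * 1000:.3f} mm per pulse")
```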
Advances in Intelligent Systems and Computing, 2015
The involvement of children and adolescents in robotics is in demand, driven by the many robotics events and competitions all over the world. This non-deterministic world is more attractive, fun and hands-on, with real results, than computer-based virtual simulations and 3D worlds. It is important, for different reasons, to involve people of all ages in an area that some consider the future of mankind and an opportunity to address the globally low number of engineers. Robotics competitions at this level are essentially based on teaching motion and programming skills using Lego™-based robots and a set of challenges to overcome. This paper presents a different approach used by Minho University to attract STEM candidates into these fields, with visible success and excellent results. The event is called RoboParty® and teaches children, adolescents and adults, from any background, how to build a robot from scratch, using electronics, mechanics and programming, over three non-stop days.
Activities involving robotics, such as designing, assembling and programming robots, are in essence hands-on and inquiry-based activities that lead to effective learning of different aspects of science and technology, among others. Different approaches have been used to introduce robotics in the education of young children. In this communication we present an approach that, from an inquiry-based science education (IBSE) perspective, uses an informal environment to introduce robotics, as well as a range of other science and technology concepts and competencies, to young students.
Lecture Notes in Computer Science, 2014
In the RoboCup Middle Size League (MSL) the main referee uses assisting technology, controlled by a second referee, to support him, in particular for conveying referee decisions to the robot players through a wireless communication system. In this paper a vision-based system is introduced, able to interpret dynamic and static gestures of the referee, thus eliminating the need for a second referee. The referee's gestures are interpreted by the system and sent directly to the Referee Box, which sends the proper commands to the robots. The system is divided into four modules: real-time hand tracking and feature extraction, an SVM (Support Vector Machine) for static hand posture identification, an HMM (Hidden Markov Model) for dynamic unistroke hand gesture recognition, and an FSM (Finite State Machine) to control the transitions between system states. The experimental results showed that the system works very reliably, being able to recognize combinations of gestures and hand postures in real time. For hand posture recognition, the SVM model trained with the selected features achieved an accuracy of 98.2%. The system also has several advantages over the currently implemented one, such as avoiding the need for a second referee and working in noisy environments and in wireless-jammed situations. It is easy to implement and train and may be an inexpensive solution.
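A minimal sketch of the static-posture stage, assuming precomputed hand-feature vectors and scikit-learn's SVM implementation; the feature dimensionality, kernel and parameters below are illustrative assumptions, not the ones reported in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: one row per sample of hand-shape features (e.g. contour descriptors),
# y: integer posture labels. Random data stands in for a real dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))
y = rng.integers(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# RBF kernel is a common default; C and gamma would normally be tuned.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```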
2014 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2014
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is composed mainly of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. To test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
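A minimal sketch of the per-gesture HMM idea, assuming 2-D hand-trajectory observations and the hmmlearn library: one model is fitted per gesture class and the class with the highest log-likelihood wins. All shapes, counts and values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from hmmlearn import hmm

# Toy training data: for each gesture class, a list of trajectories,
# each trajectory being a sequence of (x, y) hand positions.
rng = np.random.default_rng(0)
training = {
    "circle": [rng.normal(size=(30, 2)) for _ in range(10)],
    "swipe":  [rng.normal(loc=1.0, size=(30, 2)) for _ in range(10)],
}

models = {}
for name, sequences in training.items():
    X = np.concatenate(sequences)            # stacked observations
    lengths = [len(s) for s in sequences]    # length of each sequence
    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    models[name] = model

def classify(trajectory):
    # Pick the gesture whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(trajectory))

print(classify(rng.normal(loc=1.0, size=(30, 2))))
```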
OCEANS 2015 - Genova, 2015
Oceans have tremendous importance and impact on our lives, and thus the need for monitoring and protecting them has grown rapidly in recent years. Oceans also have economic and industrial potential in areas such as pharmaceuticals, oil, minerals and biodiversity. This demand is increasing, and the need for high-data-rate, near-real-time communications between submerged agents has become of paramount importance. Among the needs for underwater communications, streaming video (e.g. for inspecting risers or hydrothermal vents) can be seen as the top challenge, which, once solved, will make all the other applications possible. Presently, the only reliable approach for underwater video streaming relies on wired connections or tethers (e.g. from ROVs to the surface), which present severe operational constraints and make acoustic links, together with AUVs and sensor networks, strongly appealing. Using new polymer-based acoustic transducers, which very recent works have shown to have much higher bandwidth and power efficiency than the usual ceramics, this article proposes the development of a reprogrammable acoustic modem for underwater communications with video streaming capabilities. The results show a maximum data rate of 1 Mbps with a simple modulation scheme such as OOK, at a distance of 20 m.
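To illustrate the kind of simple modulation referred to here, the sketch below generates an on-off keying (OOK) waveform for a short bit sequence. The carrier frequency, sample rate and bit rate are arbitrary illustrative values, not the parameters of the modem described in the article.

```python
import numpy as np

# Illustrative parameters (not from the article).
carrier_hz = 200_000      # acoustic carrier frequency
sample_rate = 2_000_000   # samples per second
bit_rate = 100_000        # bits per second
samples_per_bit = sample_rate // bit_rate

def ook_modulate(bits):
    """On-off keying: transmit the carrier for a '1', silence for a '0'."""
    t = np.arange(samples_per_bit) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return np.concatenate(
        [carrier if b else np.zeros(samples_per_bit) for b in bits]
    )

signal = ook_modulate([1, 0, 1, 1, 0])
print(signal.shape)  # (5 * samples_per_bit,)
```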
Middle Size Robot League Rules and Regulations for 2006
In this article we describe the steps needed to achieve three-dimensional (3-D) visualization of data volumes acquired from medical imaging systems such as CT (Computed Tomography) and MR (Magnetic Resonance). Three-dimensional reconstruction has several advantages, since it provides the surgeon with the dimensions, topographical relationships, orientation and volume of the different structures or lesions. This makes a preliminary visualization possible, which helps in decision-making for surgical planning. The visualization process goes through several phases until the final projection (onto the plane) of the imaging data, among which the following stand out: interpolation, segmentation and projection of the data volume onto the plane, using two methods, surface rendering and volume rendering.
Vision-based hand gesture recognition is an area of active current research in computer vision and machine learning. Being a natural way of human interaction, it is an area on which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for any extra devices. The primary goal of gesture recognition research is therefore to create systems which can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are not standard or universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
Hand gesture recognition, being a natural way of human-computer interaction, is an area of active research in computer vision and machine learning. It has many possible applications, giving users a simpler and more natural way to communicate with robot and system interfaces, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems which can identify specific human gestures and use them to convey information or for device control. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection and gesture recognition in real time. In this study we try to identify hand features that, in isolation, respond better in various human-computer interaction situations. The extracted features are used to train a set of classifiers with the help of RapidMiner in order to find the best learner. A dataset with our own gesture vocabulary, consisting of 10 gestures recorded from 20 users, was created for later processing. Experimental results show that the radial signature and the centroid distance are the features that obtain the best results when used separately, with accuracies of 91% and 90.1%, respectively, obtained with a Neural Network classifier. These two methods also have the advantage of being simple in terms of computational complexity, which makes them good candidates for real-time hand gesture recognition.
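For illustration, the sketch below computes a centroid-distance feature from a hand contour: the distance from the contour centroid to each boundary point, resampled to a fixed length. The contour source and feature length are assumptions for the example, not details taken from the study.

```python
import numpy as np

def centroid_distance_feature(contour, n_points=64):
    """Centroid-distance signature of a closed contour.

    contour: (N, 2) array of (x, y) boundary points.
    Returns a fixed-length vector of distances from the centroid,
    normalized by its maximum for some scale invariance.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    distances = np.linalg.norm(contour - centroid, axis=1)

    # Resample to a fixed number of points so all feature vectors
    # have the same length regardless of contour size.
    idx = np.linspace(0, len(distances) - 1, n_points)
    resampled = np.interp(idx, np.arange(len(distances)), distances)
    return resampled / resampled.max()

# Toy example: a circle should give a nearly flat signature.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(centroid_distance_feature(circle)[:5])
```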