Visible camera-based semantic segmentation and semantic forecasting are important perception tasks in autonomous driving. In semantic segmentation, the current frame's pixel-level labels are estimated from the current visible frame. In semantic forecasting, a future frame's pixel-level labels are predicted from the current and past visible frames and their pixel-level labels. While reporting state-of-the-art accuracy, both tasks are limited by the visible camera's susceptibility to varying illumination, adverse weather, and glare from sunlight and headlights. In this work, we propose to address these limitations through deep sensor fusion of the visible and thermal cameras. The proposed sensor fusion framework performs both semantic forecasting and optimal semantic segmentation within a multi-step iterative framework. In the first, or forecasting, step, the framework predicts the semantic map for the next frame. The predicted semantic map is ...
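The abstract describes a two-step iterative scheme: a forecasting step that predicts the next frame's semantic map, followed by a fusion step that refines it using the incoming visible and thermal frames. The following is a minimal control-flow sketch of that structure, not the authors' implementation; `forecast_step` and `fusion_step` are hypothetical placeholders standing in for learned networks.

```python
import numpy as np

def forecast_step(past_labels):
    """Forecasting step: predict the next frame's semantic map from past
    label maps. Placeholder only; copying the latest map stands in for a
    learned spatio-temporal network."""
    return past_labels[-1].copy()

def fusion_step(predicted_labels, visible_frame, thermal_frame):
    """Fusion/segmentation step: refine the forecast once the new visible
    and thermal frames are available. Placeholder for a visible-thermal
    fusion network."""
    return predicted_labels

def iterative_update(past_labels, visible_frame, thermal_frame):
    predicted = forecast_step(past_labels)                          # step 1: forecast
    refined = fusion_step(predicted, visible_frame, thermal_frame)  # step 2: fuse and refine
    past_labels.append(refined)                                     # feed back for the next iteration
    return refined
```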
To significantly reduce the occurrence of severe traffic accidents, reducing the number of vehicles in urban areas should be considered. Personal mobility is essential for realizing this reduction, which requires consideration of the last-/first-mile problem. The overall objective of our research is to solve this problem using standing-type personal mobility vehicles as transportation devices; however, to evaluate the feasibility of such vehicles as future mobility devices, it is necessary to evaluate their operation under real-world conditions. Therefore, in this study, experimental and survey data relating to the velocity, stability, safety, and comfort of a standing-type personal mobility device are obtained to evaluate its performance in three different scenarios. The results show that the personal mobility vehicle is socially well received and can be safely operated on sidewalks, irrespective of the gender or age of the driver; moreover, the results suggest that subjects who ro...
In this paper, we investigate the impact of face direction while traveling on a Standing-Type Personal Mobility Device (PMD). PMDs have become a popular choice for recreational activities in developed countries such as the USA and European countries. These devices are not completely risk-free, and various accidents have been reported; therefore, the risk factors leading to accidents have to be investigated. Unfortunately, research on the risk factors of riding PMDs is not as mature as research on driving cars. In this paper, we evaluate the impact of face angle on the travel trajectory while riding a PMD. We show by experiments that face direction is an important factor in risk assessment for traveling on a PMD.
Personal mobility devices have become increasingly popular in recent years. Gyroscooters, two-wheeled self-balancing vehicles, wheelchairs, bikes, and scooters help people solve the first- and last-mile problems in big cities. To help riders with navigation and to increase their safety, intelligent rider assistant systems can be employed that use the rider's personal smartphone to form the context and provide the rider with recommendations. We understand the context as any information that characterizes the current situation; thus, the context represents a model of the current situation. We assume the rider mounts a personal smartphone that allows the system to track the rider's face using the front-facing camera. Modern smartphones can track the current situation using sensors such as GPS/GLONASS, accelerometer, gyroscope, magnetometer, microphone, and video cameras. The proposed rider assistant system uses these sensors to capture context information about the rider and the vehicle and gene...
Transportation Research Record: Journal of the Transportation Research Board
This study investigated the necessity of automated vehicle control customization for individual drivers via a lane-changing experiment involving 35 subjects and an automated minivan. The experiment consisted of two automated driving conditions: one in which the subject was unable to override the vehicle controls, and the other with the option to override when the subject felt it was necessary. The automated vehicle drove at a speed of 40 km/h along three kinds of planned lane-changing paths generated by Bezier curves; the distance required for lane changing was varied to obtain the preferred path of each subject. Various data obtained during driving, including vehicle trajectories and the steering angles produced by the subjects, were logged. After automated driving, a questionnaire was administered to each subject. The experimental data showed that there was a statistically significant difference between comfort when the vehicle drove along the subject's preferred path, and when it drove along...
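The lane-change paths in this experiment were generated from Bezier curves with varying lane-change distances. Below is a minimal sketch of how such a path can be produced with a cubic Bezier curve; the control-point placement, the 3.5 m lane width, and the example distances are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def bezier_lane_change(length_m, lane_width_m=3.5, n_points=100):
    """Cubic Bezier path from (0, 0) to (length_m, lane_width_m) with
    tangents parallel to the lane at both ends (zero heading change)."""
    p0 = np.array([0.0, 0.0])
    p1 = np.array([length_m / 3.0, 0.0])
    p2 = np.array([2.0 * length_m / 3.0, lane_width_m])
    p3 = np.array([length_m, lane_width_m])
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Example: three candidate paths with different lane-change distances.
paths = [bezier_lane_change(d) for d in (20.0, 30.0, 40.0)]
```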
In this study, we introduce a novel variant and application of Collaborative Representation based Classification in the spectral domain for the recognition of hand gestures from raw surface electromyography (sEMG) signals. The intuitive use of spectral features is explained via circulant matrices. The proposed Spectral Collaborative Representation based Classification (SCRC) is able to recognize gestures with higher levels of accuracy for a fairly rich gesture set. The worst recognition result, which is still the best reported in the literature, is 97.3% across the four sets of experiments for each hand gesture. The recognition results are reported for a substantial number of experiments along with the labeling computation.
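A collaborative representation classifier replaces the ℓ1 penalty of sparse-representation classification with an ℓ2 (ridge) penalty, which admits a closed-form solution, and circulant structure motivates spectral features because circular shifts are diagonalized by the DFT. The sketch below is an assumed illustration of that combination, not the paper's exact SCRC formulation; the FFT-magnitude feature and the λ value are placeholders.

```python
import numpy as np

def spectral_feature(window):
    """Magnitude spectrum of a 1-D sEMG window; spectral magnitudes are
    tolerant to circular shifts of the window."""
    return np.abs(np.fft.rfft(window))

def crc_classify(y, A, labels, lam=1e-3):
    """Collaborative representation: ridge-regress the test feature y on all
    training atoms (columns of A), then pick the class whose atoms give the
    smallest reconstruction residual. `labels` is a NumPy array of class ids,
    one per column of A."""
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    residuals = {c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```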
In this study, we describe the development of a ride assistance application that can be implemented on widespread smartphones and tablets. The ride assistance application has a signal processing and pattern classification module that yields almost 100% recognition accuracy for real-time signal pattern classification. We introduce a novel framework to build a training dictionary with strong discriminating capacity, which eliminates the need for human intervention in spotting the patterns in the training samples. We verify the recognition accuracy of the proposed methodologies by presenting the results of another study in which hand postures and gestures are tracked and recognized for steering a robotic wheelchair.
Personal mobility robots, such as the Segway, may be a remedy for transportation-related problems in congested environments, especially the last- and first-mile problems of elderly people. However, vehicle segmentation issues for mobility robots impede the use of these devices on shared paths, and the mobility robots can only be used in designated areas and private facilities. Traffic regulatory institutions lack a robot-society interaction database. In this study, we propose methods and algorithms that can be employed on a widespread computing device, such as an Android tablet, to gather travel information and rider behavior using the motion and position sensors of the tablet PC. The methods we developed first filter the noisy sensor readings using a complementary filter and then align the body coordinate system of the device to the Segway's motion coordinate frame. A couple of state-of-the-art classification methods are integrated to classify the braking states of the Segway. The classification algorithms are not limited to the braking states; they can also be used for other motion-related maneuvers on the road surfaces. The detected braking states and the other classified motion-related features are displayed on the screen of the Android tablet to inform the rider about the riding and motion conditions. The developed Android application also gathers this travel information to build a national database for further statistical analysis of the robot-society interaction.
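The complementary filter mentioned above blends the gyroscope's integrated rate (reliable at high frequency) with the accelerometer's gravity-based tilt estimate (reliable at low frequency). Here is a minimal single-axis sketch under assumed conventions; the blending constant is an arbitrary choice, not the paper's tuned value.

```python
import math

def complementary_filter(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Single-axis tilt estimate: blend the integrated gyro rate (good at
    high frequency) with the accelerometer gravity tilt (good at low
    frequency). Angles in radians, rates in rad/s."""
    gyro_angle = prev_angle + gyro_rate * dt        # integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)      # tilt from the gravity vector
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```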
2015 17th Conference of Open Innovations Association (FRUCT), 2015
This paper presents an approach to a driver assistant system for two-wheeled self-balancing mobility vehicles, in particular the Segway. The approach targets readily available mobile devices that have become part of our daily lives, such as smartphones and tablets. If a mobile device is well positioned on a mobility vehicle, its front and rear cameras can be utilized as sensors to capture ride-related information about the rider's intentions and the rider's interaction with the environment. In addition, attached to the handlebar of the mobility vehicle, the mobile device can be used to alert the driver using the motion and location sensors as well as the cameras, and to gather ride characteristics. In this study, we describe a context-aware system that continuously observes both the rider and the dynamical characteristics of the ride and provides alerts to the rider, anticipating hazards, collisions, the routes of other public road users, and the stability of the current ride characteristics.
2012 7th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2012
Compressed Sensing (CS) and Sparse Representation (SR) have influenced the way signals have been processed over the past half decade. The elegant solution to the sparse signal recovery problem has found ground in several research fields, such as machine learning and pattern recognition. Sparse representation and the solution of equations using ℓ1 minimization were utilized for the face recognition problem under varying illumination and occlusion. Afterwards, the idea was applied in biometrics to classify iris data. Similar to those studies, we use the discriminating nature of sparsity for signals acquired in various signal domains and apply it to the gesture recognition problem. The proposed algorithm gives accurate recognition results, with recognition rates above 99% for user-independent and 100% for user-dependent gesture sets, for fairly rich gesture dictionaries.
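The sparse-representation classification pipeline referenced here solves an ℓ1-minimization problem and assigns the class with the smallest class-wise reconstruction residual. A hedged sketch follows, using a few ISTA iterations as a stand-in for a proper ℓ1 solver; the regularization weight and iteration count are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=200):
    """Approximate arg min_x 0.5*||A x - y||^2 + lam*||x||_1 with ISTA."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(y, A, labels):
    """Assign the class whose training atoms best reconstruct y from the
    sparse code. `labels` holds one class id per column of A."""
    x = ista_l1(A, y)
    residuals = {c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```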
2011 International Symposium on Innovations in Intelligent Systems and Applications, 2011
In this study, we describe the development of a six-degree-of-freedom (6-DOF) pose estimation model of a tracked object and a 3D user interface using stereo vision and infrared (IR) cameras in the Matlab/Simulink and C# environments. The raw coordinates of the IR light sources located on the tracked object are detected, digitized, and broadcast over Bluetooth by the IR cameras and associated circuitry within Nintendo Wiimotes. The signals are then received by a PC and processed using pose extraction and stereo vision algorithms. The extracted motion and position parameters are used to manipulate a virtual object in the Virtual Reality Toolbox of Matlab for 6-DOF motion tracking. We set up a stereo camera system with Wiimotes to increase the vision volume and the accuracy of 3D coordinate estimation, and present a 3D user input device implementation in C# with Matlab functions. The camera calibration toolbox is used for calibration of the stereo system and computation of the extrinsic and intrinsic camera parameters. We use the epipolar geometry toolbox to compute epipolar constraints and estimate the location of points that are not seen by both cameras simultaneously. Our preliminary results for the stereo vision analysis indicate that the pose estimation precision may reach millimeter or sub-millimeter accuracy.
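A core stereo-vision step in such a system is recovering a 3-D point from its two image projections using the calibrated projection matrices. The following is a minimal linear (DLT) triangulation sketch under the assumption that the 3x4 projection matrices come from the calibration step; it is a generic illustration, not tied to the Wiimote hardware specifics.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: return the 3-D point that minimizes the
    algebraic error of the two projection equations. P1, P2 are 3x4 camera
    projection matrices; uv1, uv2 are the matching pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                             # dehomogenize
```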
The Sparse Representation based Classification (SRC) method has been utilized for various pattern recognition problems, especially face recognition. Building on its success, the SRC method has been extended by introducing Block Sparsity (BS) for the signal to be recovered, with much better results reported in the related literature. In this study, we test three block sparsity approaches, the Block Sparse Bayesian Learning, Dynamic Group Sparsity, and Block Sparse Convex Programming frameworks, on the previously introduced SRC-based gesture recognition algorithm. The results show that the block-sparse approach yields faster and more accurate results than the SRC-based gesture recognition algorithm and is suitable for real-time applications such as commanding a robotic wheelchair.
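Block sparsity groups the dictionary atoms (here, by gesture class) and selects or suppresses whole groups rather than individual coefficients. As a rough illustration only, and not any of the three frameworks tested in the paper, the element-wise soft threshold in the SRC sketch above can be replaced by a group soft threshold; block boundaries and λ are assumptions.

```python
import numpy as np

def group_soft_threshold(x, blocks, tau):
    """Shrink each block of coefficients toward zero as a group (the proximal
    operator of the group penalty). `blocks` is a list of index arrays."""
    out = np.zeros_like(x)
    for idx in blocks:
        norm = np.linalg.norm(x[idx])
        if norm > tau:
            out[idx] = (1.0 - tau / norm) * x[idx]
    return out

def block_ista(A, y, blocks, lam=0.05, n_iter=200):
    """Same iteration as the SRC sketch above, but with a per-block threshold
    so that whole class blocks are selected or suppressed together."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = group_soft_threshold(g, blocks, lam / L)
    return x
```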