Robotic Assistive Arm
by
Homeyra Pourmohammadali
A thesis
presented to the University of Waterloo
in fulfillment of the
thesis requirement for the degree of
Master of Applied Science
in Mechanical Engineering
Author's Declaration
I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any
required final revisions, as accepted by my examiners.
Signature
Abstract
The number of elderly people around the world is growing rapidly. This has led to an increase in
the number of people who are seeking assistance and adequate service either at home or in long-term-
care institutions to successfully accomplish their daily activities. Responding to these needs has been
a burden to the health care system in terms of labour and associated costs and has motivated research
in developing alternative services using new technologies.
Various intelligent and non-intelligent machines and robots have been developed to meet the
needs of the elderly and of people with upper-limb disabilities or dysfunctions in gaining independence in
eating, which is one of the most frequent and time-consuming everyday tasks. However, in almost all
cases, the proposed systems are designed only for the personal use of one individual, and little previous
effort has been made to design a multiple-user feeding robot. The feeding requirements of the
elderly in environments such as senior homes, where many elderly residents dine together at least
three times per day, have not been extensively researched before.
The aim of this research was to develop a machine to feed multiple elderly people based on their
characteristics and feeding needs, as determined through observations at a nursing home.
Observations of the elderly during meal times have revealed that almost 40% of the population was
totally dependent on nurses or caregivers to be fed. Most of those remaining suffered from hand
tremors, joint pain or lack of hand muscle strength, which made utensil manipulation and
coordination very difficult and the eating process both messy and lengthy. In addition, more than 43%
of the elderly were very slow in eating because of chewing and swallowing problems and most of the
rest were slow in scooping and directing utensils toward their mouths. Consequently, one nurse could
only respond to a maximum of two diners simultaneously, so additional staff members were required
to manage the needs of all elderly diners. The limited time allocated for each
meal and the daily progression of the seniors’ disabilities also made mealtime very challenging.
Based on the caregivers’ opinion, many of the elderly in such environments can benefit from a
machine capable of feeding multiple users simultaneously. Since eating is a slow procedure, the idle
state of the robot during one user’s chewing and swallowing time can be allotted for feeding another
person who is sitting at the same table.
The observations and studies resulted in the design of a food tray and the selection of an
appropriate robot and applicable user interface. The proposed system uses a 6-DOF serial articulated
robot in the center of a four-seat table, along with a specifically designed food tray, to feed one to four
people. It employs a vision interface for food detection and recognition. The dynamic
equations of the robotic system were derived and the system was simulated to verify its dynamic
behaviour before any prototyping and real-time testing.
Acknowledgements
The completion of this thesis would have been impossible without the people who truly supported me and
believed in me before the beginning of, and throughout, my research. I would like to express my deep
and sincere gratitude to both of my supervisors, Dr. Amir Khajepour from the Department of
Mechanical and Mechatronics Engineering and Dr. Jonathan Kofman from the Department of System
Design Engineering. Their wide knowledge and their logical way of thinking have been of great value
for me. Their understanding, encouraging and personal guidance have provided a good basis for the
present thesis. The confidence and dynamism with which Dr. A. Khajepour guided the work requires
no elaboration. Dr. Kofman's support and assurance in times of crisis will be remembered
lifelong. Their valuable suggestions and final words of advice during the course of the work are also
greatly acknowledged.
My sincere thanks are due to the official readers, Dr. William Melek and Dr. Catherine Burns, for
their detailed review, constructive criticism and excellent advice during the preparation of this thesis.
I warmly thank them for their valuable advice and friendly help. Their discussions around my work
have been very helpful for this study.
I also want to thank my parents, who taught me the value of hard work by their own example. I
would like to share this moment of happiness with my parents, and siblings. I am very grateful to my
husband, Dr. Ehsan Toyserkani, for all the support he provided throughout my research work.
Without his loving, understanding and guidance I would never have completed my present research. I
am also really thankful to my son, Ali, who was very patient at the time when he needed my company
the most.
Last, but not least, I wish to express my warm and sincere thanks to Dr. Nasser Lashgarian Azad,
Kiumars Jalali, Matthew Millard, Ramesh Periasamy, Masoud Alimardani, Wilson Wong, Mehrdad
Iravani-Tabrizipour, Nasim Paryab, Mahtab Kamali, Somayeh Moazeni and Hanieh Aghighi who
supported me throughout my work. Finally, I would like to thank everyone whose direct and indirect
support helped me complete my thesis.
Dedication
Table of Contents
Author’s Declaration ..............................................................................................................................ii
Abstract .................................................................................................................................................iii
Acknowledgements ................................................................................................................................ v
Dedication .............................................................................................................................................vi
Table of Contents .................................................................................................................................vii
List of Figures ........................................................................................................................................ x
List of Tables.......................................................................................................................................xiii
Chapter 1 Introduction............................................................................................................................ 1
1.1 Objectives and scope .................................................................................................................... 2
Chapter 2 Literature Review .................................................................................................................. 4
2.1 Marketing ..................................................................................................................................... 4
2.2 Aging Population and Escalation of Required Services ............................................................... 5
2.3 Self-Feeding Disabilities .............................................................................................................. 7
2.4 Eating As a Daily Activity ........................................................................................................... 9
2.5 Available Feeding Devices........................................................................................................... 9
2.5.1 Arm Supports....................................................................................................................... 10
2.5.2 Human Extenders for Feeding............................................................................................. 13
2.5.3 Electro-Mechanical Powered Devices................................................................................. 14
2.5.4 Assistive Robotic Feeding Systems..................................................................................... 19
2.5.5 Prices of Feeding Devices ................................................................................................... 24
2.5.6 Discussion on Feeding Devices........................................................................................... 25
2.6 User Interfaces for Feeding Devices .......................................................................................... 25
2.6.1 User Interfaces for Rehabilitation or Assistive Devices...................................................... 25
2.6.2 Discussion of User Interfaces .............................................................................................. 30
Chapter 3 Observation.......................................................................................................................... 34
3.1 Observation Objectives .............................................................................................................. 34
3.2 User Differences and Related Data ............................................................................................ 34
3.3 Observation Results.................................................................................................................... 35
3.4 Discussion of Results ................................................................................................................. 43
3.4.1 Differences between Two Care Units.................................................................................. 43
3.4.2 Elderly Problems and Behaviour in Regular Care Unit....................................................... 44
3.4.3 Multiple-User System.......................................................................................................... 45
Chapter 4 Design of Feeding Robot ..................................................................................................... 47
4.1 User Characteristics.................................................................................................................... 47
4.2 User’s Safety .............................................................................................................................. 49
4.3 Assumptions for Using the System ............................................................................................ 49
4.4 Robotic System and Food Tray .................................................................................................. 50
4.5 Cups, Spoon, and Fork ............................................................................................................... 50
4.6 Expected Characteristics of Robot ............................................................................................. 55
4.7 Selected Robot............................................................................................................................ 57
4.8 Adding Cameras to the System .................................................................................................. 59
4.9 Multiple-User Feeding Procedures ........................................................................... 61
Chapter 5 Kinematics, Dynamics and Control of the Multiple-User Feeding Robot.................... 77
5.1 Kinematics and the Inverse Problem ................................................................................. 77
5.1.1 Analysis of Manipulator Singularity ................................................................................... 78
5.2 Building Dynamic Equations ..................................................................................................... 81
5.2.1 Behaviour of ADAMS Model to the Given Motions .......................................................... 82
5.3 Robot Control ............................................................................................................................. 86
5.3.1 ADAMS Control ................................................................................................................. 87
Chapter 6 Vision System and Image Processing.................................................................................. 92
6.1 Rationale for the Use of Vision System ..................................................................................... 92
6.2 Vision Related Tasks.................................................................................................................. 93
6.3 Image Acquisition and Preprocessing ........................................................................................ 94
6.3.1 Image Acquisition ............................................................................................................... 94
6.3.2 Image Histogram ................................................................................................................. 94
6.3.3 Image Enhancement ............................................................................................................ 94
6.4 Processing and Feature Extraction ............................................................................................. 95
6.4.1 Image Thresholding............................................................................................................. 95
6.4.2 Edge Detection .................................................................................................................... 95
6.4.3 Segmentation ....................................................................................................................... 95
6.4.4 Filling the Gaps ................................................................................................................... 96
6.4.5 Region Growing .................................................................................................................. 96
6.4.6 Region Analysis................................................................................................................... 97
6.4.7 Feature Extraction ............................................................................................................... 97
6.5 Segmenting the Pieces of Solid Food......................................................................................... 97
6.6 Touching/Overlapping Problem ............................................................................................... 101
6.7 Discussion of Results ............................................................................................................... 103
Chapter 7 Closure............................................................................................................................... 105
7.1 Observations............................................................................................................................. 105
7.2 Multiple-user feeding system ................................................................................................... 107
7.3 Design....................................................................................................................................... 107
7.4 Vision system ........................................................................................................................... 109
Appendix A Anthropometric Data of an Adult Person ...................................................................... 111
Appendix B Research Ethics Review Feedback................................................................................. 114
Appendix C CRS A465 Characteristics and Dimensions................................................................... 115
Appendix D Kinematics and Dynamics of the Manipulators................................................ 116
Appendix E Model of 6-DOF Robot in DynaFlexPro-Maple ............................................................ 121
Appendix F Model of 6-DOF Robot in DynaFlexPro-Maple ............................................................ 131
Appendix G ADAMS/MATLAB Interface........................................................................................ 146
List of Figures
Figure 2-1: Canada’s Aging Population [4]. .......................................................................................... 6
Figure 2-2: (a) Action Arm [14], (b) Friction Feeder [15]. .................................................................. 11
Figure 2-3: (a) Stable Self Feeding Support [15], (b) Comfy Feeder [15]. .......................................... 12
Figure 2-4: (a) Eatery [20], (b) Magpie assists in eating [21]. ............................................................. 13
Figure 2-5: HAND Feeder [21]. ........................................................................................................... 14
Figure 2-6: (a) My Spoon [23], (b) Beeson Feeder [19]. ........................................................ 16
Figure 2-7: Neater Eater [26]. .............................................................................................................. 16
Figure 2-8: Assistive Dining Device [28]. ......................................................................... 17
Figure 2-9: Winsford feeder [31]. ........................................................................................................ 18
Figure 2-10: Mila Feeder [30]. ............................................................................................................. 19
Figure 2-11: Handy 1 overall system and food tray [35]. .................................................................... 20
Figure 2-12: ISAC at work [38]. .......................................................................................................... 21
Figure 2-13: (a) The concept of Eater Assist robot, (b) CRT display [41]........................................... 22
Figure 2-14: Assistive Robot for Bedridden Elderly [43]. ................................................................... 22
Figure 2-15: Configuration of Assistive Robot Hand system [44]....................................................... 23
Figure 2-16: Categories of different user interfaces............................................................................. 31
Figure 4-1: Some possible shapes for the food tray (a) circular plate (b) square plate (c) arc plate. ... 50
Figure 4-2: Dimensions of the cup and its handle................................................................................ 51
Figure 4-3: Possible feeding angles (a) straight spoon with thick handle for front feeding, (b) inclined
spoon for easier scoop, (c) inclined spoon for semi-side feeding, (d) inclined spoon for side
feeding. ......................................................................................................................................... 52
Figure 4-4: (a) Top view of the considered area for fitting utensils, (b) Arrangement of the food
plates, cups, fork and spoon. The directions of all handles are towards the center. ..................... 53
Figure 4-5: Deep sloped plate for liquid/semi-liquid foods/desserts which can be scooped by a spoon.
...................................................................................................................................................... 53
Figure 4-6: Flat plate for the foods/desserts which can be picked up by a fork. .................................. 53
Figure 4-7: Top view of the position and arrangement of four food trays for four users, the users are at
least 25 cm away from the food tray edge.................................................................................... 54
Figure 4-8: 3D model of the robot located in the center of the table along with four food trays......... 55
Figure 4-9: Average anthropometric dimension of an adult user [25], size of a typical standard chair
and table, with respect to one food tray and also the proposed robot which has the dimensions of
a Thermo CRS-A465 articulated robot (schematic diagram is to scale, dimensions in mm)....... 58
Figure 4-10: Arrangement of cameras versus food trays and users (user 4 is not shown). Cam Ui is
tracking the ith user's mouth and Cam Fi is recognizing food and the presence of utensils in
the ith tray. .................................................................................................................... 60
Figure 4-11: Arrangement of eight cameras with respect to the users and the food trays.................... 60
Figure 4-12: Multiple-camera management. ........................................................................................ 66
Figure 4-13: User's face recognition and mouth tracking section. ....................................................... 67
Figure 4-14: Checking the availability of the users and objects, and object recognition section......... 68
Figure 4-15: Messages sent to the users in case of unavailability of each object. ............................... 69
Figure 4-16: Acceptable commands by the feeding robotic system..................................................... 70
Figure 4-17: Robot's tasks after receiving the command for picking up the fork. ............................... 71
Figure 4-18: Robot's tasks after receiving the command for picking up the spoon. ............................ 72
Figure 4-19: Robot's tasks after receiving the command for picking up any of the cups..................... 73
Figure 4-20: Messages sent to the users for choosing an appropriate utensil for picking up the food
according to the chosen section of food. ...................................................................................... 74
Figure 4-21: Robot's tasks after receiving the command for holding any of the utensils..................... 75
Figure 5-1: 6-DOF robot, inputs and outputs. ...................................................................................... 79
Figure 5-2: Specification of part numbers for dynamic analysis.......................................................... 82
Figure 5-3: Magnitude of the translational displacement (continuous line), translational velocity
(dashed line) and translational acceleration (dotted line) for Motion 1........................................ 83
Figure 5-4: Magnitude of angular velocity (continuous line) and angular acceleration (dashed line) for
Motion 1. ...................................................................................................................................... 83
Figure 5-5: Magnitude of the element torque (continuous line), element force (dashed line) and power
consumption (dotted line) for Motion 1. ...................................................................................... 84
Figure 5-6: The x (continuous line), y (dashed line) and z-components (dotted line) of the element
torque for Motion 1. ..................................................................................................................... 84
Figure 5-7: The x (continuous line), y (dashed line) and z (dotted line) components of the translational
displacement for Motion 1............................................................................................................ 85
Figure 5-8: The x (continuous line), y (dashed line) and z components (dotted line) of the translational
velocity for Motion 1.................................................................................................................... 85
Figure 5-9: The x, y and z components of the translational acceleration for Motion 1........................ 86
Figure 5-10: ADAMS Model and Control System versus their input and output [ADAMS]. ............. 88
Figure 5-11: Simulink model for control block.................................................................................... 89
Figure 5-12: Simulation results a) position of the end effector (mm) b) output velocity (mm/s) and
c) input torque (N-mm). ............................................................................................................... 90
Figure 6-1: a) original image, b) binary image, c) removing small pixels from the edge-detected
image, d) image (c) after closing with square 3, e) filling gaps of image (d), f) image (e) after
closing with square 5, g) filling gaps of image (f), h) segmentation and centroid extraction. ..... 98
Figure 6-2: Correctly found centroids of the image in Figure 6-1(a). ......................................... 99
Figure 6-3: a) Binary image b) correctly found centroids. ................................................................... 99
Figure 6-4: a) Original image b) Error in final segmentation............................................................... 99
Figure 6-5: a) Original image, b) Error in final segmentation............................................................ 100
Figure 6-6: a) adjustment of the greyscale image, b) binary image after enhancement, c) filling the
holes of the edge image (square 5), d) first erosion of the filled gaps of the edge, e) fourth
erosion, f) sixth erosion. ............................................................................................................. 100
Figure 6-7: Results for some selected possible arrangements (6-7a to 6-7e) of three pieces of touching
cut toast....................................................................................................................................... 102
List of Tables
Table 2-1: Aging demographics from 1998 to 2041 in Canada [2]........................................................ 6
Table 2-2: The mean minutes spent on daily activities by the elderly with an average age of 75.2–79 [13]... 9
Table 2-3: Prices of the available feeding devices in the market. ........................................................ 24
Table 2-4: Input device familiarity [51]. .............................................................................................. 31
Table 3-1: Observation results from the nursing home of the “Village of Winston Park” senior home. 36
Table 3-2: Categories of different samples of food, desserts or salads.................................. 41
Table 3-3: Percentage of usage of spoon, fork or both in a one-week menu........................ 43
Table 4-1: Feeding robot user characteristics....................................................................................... 47
Table 4-2: Dimensions of a typical spoon for adults............................................................................ 51
Table 4-3: System variables and reference names................................................................................ 61
Table 4-4: Acceptable commands from users. ..................................................................................... 63
Table 4-5: Functions (subsystems) and the reference names. .............................................................. 63
Chapter 1
Introduction
The goal of this research was to design an intelligent robot capable of simultaneously feeding
multiple elderly or disabled people sitting at the same table. This feeding robot can be used in senior
homes or similar places where people with upper-limb impairments often eat meals together.
The preliminary research for this project started with an exploration of the broad area of
rehabilitation, and of service and assistive robotics in general, for those with upper-limb disabilities or
dysfunctions. In addition to workstation robotics in places such as offices and hospitals, different
types of assistive robotics systems were reviewed, including mobile and stationary, attached to and
separate from the body, passively- and actively- controlled, and wheelchair- and table- mounted
systems. This helped to determine the state of the art and potential benefits and problems of
rehabilitation and feeding robots. The first intention was to come up with an assistive device for
upper-limb disabled people that would benefit them in gaining independence in accomplishing daily
activities. In the study, eating was found to be one of the most frequent and time consuming daily
tasks, which would pose many social and emotional problems for the disabled. Since the elderly, as a
population, have the most cases of upper-limb dysfunctions, the intention of the project was directed
more towards developing an assistive feeding machine specifically for them. In parallel, a preliminary
market analysis of the available feeding machines, covering their prices, success rates, features,
constraints, and drawbacks, was conducted, and knowledge about the demographics and conditions
of potential and existing users of such assistive feeding devices was also acquired.
Consideration of issues such as available resources, equipment and experience made the
choice of assistive robotic system clearer; a table-mounted, actively controlled, stationary robot,
not to be used as an extender to any human body part, was ultimately decided upon as the focus of the
design. It was also decided that the robot should be an intelligent one, with the ability to provide a
more convenient and natural user-robot interaction than what is currently available. Since the eating
task was found to be an activity of daily living (ADL) that is repeated more frequently and is more
time-consuming during the week when compared to other daily tasks, the goal of the thesis was
further refined as follows: to design an assistive robotic manipulator to make the elderly as
independent as possible in feeding themselves. Therefore, the thesis literature review only reflects
those devices or machines that assist disabled and/or elderly users with eating and drinking tasks.
It was found that the elderly and their feeding requirements in environments such as senior homes,
where many elderly residents dine together at least three times per day, have not been extensively
researched before. This, together with the unavailability of multiple-user feeding systems in the market
and the lack of related research, motivated this project to focus on the design of multiple-user feeding
systems for nursing homes. The final decision to change the single-user feeding robot into a
multiple-user device was made after a series of observations of the behavioural reactions of the elderly
during meal times in a nursing home, which resolved many uncertainties regarding the real needs of this
population in feeding themselves in such places. The users' characteristics and requirements, as
well as information about the people and environment they interacted with, such as
caregivers and service providers in dining areas, were grouped and considered together. The
outcome of assessing these observations both reinforced the idea of designing a multiple-user
device and confirmed the potential benefits of such an assistive machine in making the elderly more
independent.
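The feasibility of a multiple-user device rests on a simple interleaving argument: the robot is idle while one diner chews and swallows, and that idle time can be used to serve other diners at the same table. A rough capacity bound makes this concrete; the symbols and numbers below are illustrative assumptions for exposition, not values measured during the observations.

```latex
% Illustrative capacity bound for interleaved feeding (assumed symbols):
%   t_s : robot time to scoop one bite and deliver it to a diner
%   t_c : a diner's chewing-and-swallowing time between bites
% While one diner chews for t_c, the robot is free to serve others, so
% without delaying anyone it can handle at most
n_{\max} = 1 + \left\lfloor \frac{t_c}{t_s} \right\rfloor \quad \text{diners.}
% For example, assumed values t_c = 30 s and t_s = 10 s give n_max = 4,
% in line with the four-seat table adopted in this design.
```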
1.1 Objectives and scope
1. Determine the end-user and caregiver needs and the environmental factors that need to be
considered in the design of a feeding system for the elderly, by conducting observations of seniors
eating at a nursing home.
2. Perform a preliminary design of a robot system based on the results of the observations at the
nursing home for the elderly. The observations led to the initial design of a multiple-user feeding
robot that includes:
b) determining robot tasks required and their management for multiple users,
c) performing motion planning of the robot system to determine the robot joint angles
based on the end-effector position,
The layout of this thesis is as follows: Chapter 2 presents a literature review of previous and current
research attempts to design an assistive device to help the elderly or disabled with feeding
themselves; it also analyzes the existing market and reviews the available user interfaces utilized by
feeding machines or similar rehabilitation or service robots. Chapter 3 reveals the objectives and
results of a series of observations in a nursing home. The listed characteristics of the typical users and
specifications of the desired robot are based on the outcomes of these observations. Chapter 4
introduces the design of a feeding robot, including a robot manipulator and food trays and their
dimensions. Chapter 5 reviews the kinematic, dynamic and control issues of the proposed feeding
robot. It assigns the coordinate systems, defines Denavit-Hartenberg (DH) parameters and tables,
calculates the transformation matrices for each joint and finds the Jacobian matrix and singular
positions. The inverse kinematic analysis is provided along with the preliminary steps for controlling
the robot using ADAMS software. Chapter 6 explains the vision system and image processing for
recognition of some types of food inside the tray. This chapter shows the results of images processed
by the developed algorithm for segmentation of the pieces of solid food inside the food tray and
finding the best insertion point for the fork. Finally, Chapter 7 concludes the project and highlights
plausible future directions of research that would complement the present study.
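To give a flavour of the kinematic computations summarized above for Chapter 5, the following minimal Python sketch builds a standard Denavit-Hartenberg (DH) link transform and chains such transforms into forward kinematics. The DH rows shown are placeholder values for illustration only, not the CRS A465 parameters tabulated in the appendices.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH link transform: rotate theta about z, translate d along z,
    translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Forward kinematics: chain per-joint transforms. A 6-DOF arm would use six
# rows; two placeholder rows (assumed, not the CRS A465 values) show the idea.
dh_rows = [
    (np.deg2rad(30.0), 0.33, 0.0, -np.pi / 2),  # joint 1 (assumed values)
    (np.deg2rad(45.0), 0.0, 0.30, 0.0),         # joint 2 (assumed values)
]
T = np.eye(4)
for row in dh_rows:
    T = T @ dh_transform(*row)
print(T[:3, 3])  # end-effector position of this truncated chain (metres)
```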
Chapter 2
Literature Review
The most important goals of this chapter are to review previous and current research attempts to
design assistive feeding devices and their user interfaces, and to perform a market analysis by
introducing similar products available in the existing market for use by elderly and disabled people
with any kind of upper-limb dysfunction. However, before presenting such a review, the issues of a
rapidly increasing elderly population, the escalating problem of their required personal and public
services, and different kinds of diseases which may lead to disabilities of upper-extremities are
discussed. This discussion will reflect the importance of designing assistive machines, rehabilitation
or service robotic systems for this population to use in different environments.
One of the important issues in designing assistive devices lies in the demographics of their
users. The statistical data regarding the number and characteristics of the user population plays an
important role in motivating the continuation of such projects, as well as determining the design
limitations to be considered and necessary features to be added to the system. The next section
introduces the objectives of the market analysis for the feeding device and lists important issues that
will be discussed in the next sections.
2.1 Marketing
The objective of marketing is to understand both the market itself and the requirements of consumers
in order to be able to identify the design constraints of the proposed product and its price. In
rehabilitation and service robotics, many good designs have failed because of basic shortcomings in
cost, ergonomics, or ease of control. Therefore, it is critical for a designer to
determine the user requirements as well as the design limitations beforehand.
One of the most important parts of analyzing the market for an assistive feeding device is the
needs analysis. The needs analysis looks at the statistics and studies about the people who are in need
of such devices. Furthermore, major criteria such as age, type of disability, gender and income level
of the users are important in the design considerations; and the priorities may be different based on
whether the user lives in an institution, with a family member, or independently with a caregiver to
assist in the activities of daily living (ADL).
Some of the issues to be discussed in the upcoming sections of this chapter are: 1) the number and
characteristics of people in need of assistive devices (demographics of the potential users), 2) the
demographics of existing consumers of available products (existing user demographics), 3) causes of
upper limb disabilities of the users and consequent dysfunctions in ADL, specifically with respect to
the elderly, 4) physical and mental capacity of the consumers to operate the device, 5) available
assistive devices in the market for people with difficulties using any part of their upper-extremities, 6)
previous and current related research projects that have been attempted or reached completion, 7)
features, constraints and prices of available products and useful applicable information; and results
from previous and existing research relevant to this project, 8) available user interfaces specifically
for feeding devices and similar rehabilitation devices in general.
Since the majority of the potential users of the proposed feeding system are elderly, 65 years of age
or older, the following section conveys the fast-growing challenges of an aging population, both today
and in the future.
services for this population. The next section will discuss some aspects that affect the required
services of elderly people.
The focus of most national aging policies is on dignity, independence, participation, fairness and
security [6], since the quality of life of the elderly is very important. Consequently, older adults
require a huge share of special services and public support. The number of persons requiring formal
care (mainly nursing home care) and informal care (mainly care at home) will increase sharply even if
the proportion of persons at each age remains unchanged.
Another issue that will affect providing the necessary services for the elderly is the number of
available nurses and caregivers. A study about the workforce of aging registered nurses [7] reveals
that: a) within 10 years, 40 percent of working registered nurses (RNs) will be 50 years or older; and
b) as those RNs retire, the supply of working RNs is projected to be 20 percent below requirements
by the year 2020. This shortage of employed nurses and caregivers in the coming years will provide
significant opportunities for robotics and artificial intelligence (AI) researchers to develop assistive
technology that can improve the quality of life for the aging population [8].
Those with essential tremors [12] have difficulty eating normally or holding a cup or glass without
spilling it, and if the voice or tongue is affected, difficulty in talking may occur. Parkinson's disease [11], [12],
which affects the nerve cells controlling muscle movement, causes tremors of the fingers and arms, muscle rigidity in
the limbs and neck, slowed motion, impaired speech, loss of automatic movement, difficulty chewing
and swallowing and also problems with movement balance and coordination. Dementia and
Alzheimer’s disease [11], [12] can cause a decline in memory, comprehension, learning capability,
and ability to think, as well as language and judgment. People suffering from this kind of disease may
see food on their plate, but they cannot logically connect hunger to food to feeding.
1 Atrophy: a wasting of a part of the body because of disease or lack of use [Wikipedia].
Furthermore, people with SCI may have tingling or loss of sensation in their hands, fingers, feet, or
toes; partial or complete loss of control over any part of the body; and difficulty with balance. Those
with MS may experience coordination and memory problems, blurred vision, muscle spasticity,
indistinct speech, tremor, weakness and swallowing disorders. MD, on the other hand, is a muscle
disorder that causes weakness and wasting of the voluntary muscles that are responsible for
movement of body parts. Similarly ALS is a disease of the motor nerve cells in the brain and spinal
cord that causes those afflicted with it to have muscle weakness, twitching, cramping and stiffness of
muscles, slurred speech, and difficulty chewing or swallowing.
In general, an elderly person with limitations of vision, hearing or mobility can be made more
independent if the deficits are properly assessed and the environment appropriately designed. The
prevalence of sensory changes and injuries among the elderly dictates the importance of addressing
them in primary care settings. The elderly individual’s perception of the environment changes subtly
as the senses age. Changes in vision, hearing, taste and smell are almost universal. Only 5% of
persons over 80 have 20/20 vision, and nearly 60% of those aged 65 to 70 show evidence of cataracts
or glaucoma. Twenty-five percent of those over 65 have some type of hearing problem and among
persons over 75, the incidence increases to over 40%. Sixteen percent of the elderly report they can
hear only shouted speech. Similarly, the thresholds for taste and smell increase with age [12].
Typical characteristics of the elderly include lower voice frequency, pitch and tone; an increased
hearing threshold, especially for high-pitched sounds; and decreased speech discrimination and
auditory judgment. They are also more susceptible to eye diseases and vision problems [4]. They
usually have difficulty reading small print, have poor vision in environments with insufficient light,
and need longer adaptation time to light changes.
Sensory losses, especially for the older population, limit self-care and activities of daily living, and
significantly alter communication and interaction patterns [4]. Impairment of the senses contributes
considerably to the decline in functional state of the elderly individual and leads to their increasing
isolation. The sensory impairments of the elderly, such as partial to complete loss of the ability to
hear, talk, or see will have the effect of decreasing their functionality in conducting everyday tasks.
The above analysis makes clear that, as with any new technology, it is important to consider the
characteristics of the users who will benefit from it before designing a new assistive device. Indeed,
the proportion of seniors with upper extremity disabilities, the cause of and physical manifestations of
those disabilities, as well as the natural degradation of sensory perception that may alter the
functional abilities of the elderly are all important considerations in the design of an assistive eating
robot.
Table 2-2: The mean minutes spent on daily activities by the elderly with an average age of 75.2–79 [13].
The next section introduces different types of assistive feeding devices which are either
manufactured and available in the market, or are still in the research phase and have only been
designed or prototyped.
to not only increase self-esteem, confidence in accomplishing ADL tasks and independence, but also
to decrease the number of caregivers and institutional costs required to adequately care for this
population.
The desire to assist in feeding those with upper limb disabilities or dysfunctions with a machine or
robot, in an effort to help them accomplish their eating tasks independently, has been capturing the
minds of many researchers and designers for decades. Whether the devices are simple mechanical or
electromechanical machines or complicated, intelligent robots, gaining independence in ADL has
been the major motivation behind their development.
Using different human–machine interfaces, from simple switches activated by different body parts
(depending on the type of disability) to more advanced ones such as voice and speech recognition and
synthesis, laser pointing devices, object recognition and computer vision, researchers have tried to
accommodate the needs of users, patients and elderly persons. These users have expressed the desire
for an assistive device that not only helps them eat more easily and neatly, but is safe and
comfortable to use and allows them to minimize their dependence on nurses, caregivers or family
members. Some of the proposed and commercially available assistive feeding systems will be
mentioned in the following sections. These devices have been categorized as: arm supports, human
extenders, electro-mechanical devices, and intelligent automatic or semi-automatic machines.
Friction Feeder: Friction Feeder [15] is made for users suffering from spasticity (having
involuntary contraction of a muscle or group of muscles), mild tremors, ataxia (loss of the ability to
coordinate muscular movement) or mild-to-moderate incoordination.
Figure 2-2: (a) Action Arm [14], (b) Friction Feeder [15].
It helps redirect any inappropriate movement of the shoulder and elbow in the correct direction,
and assists in self-feeding and leisure activities. Bands are used to aid control of horizontal shoulder
abduction (drawing away from the midline of the body) and adduction (drawing inward toward the
median axis of the body), and flexion and extension of the elbow (Figure 2-2(b)).
Ball Bearing Feeder with Elevating Proximal Arm: The Ball Bearing Feeder [15] is a balanced
forearm orthosis designed as an arm support for feeding those with shoulder weakness. The device,
which can be clamped to most wheelchairs, consists of a metal arm trough with free swinging arm
support and a ball bearing joint.
Stable Self Feeding Support: Stable Self Feeding Support [15], represented in Figure 2-3(a),
guides the arm as it moves from plate to mouth. It provides a support for the forearm and allows it to
move into the smaller top section with a simple sliding motion. This gives stability and support, while
bringing food to the mouth. The roof attachment helps to keep the arm on the slide and provides
additional control and support.
Comfy Feeder: Comfy Feeder [15, 16] helps individuals with Multiple Sclerosis, Parkinson’s
disease, Cerebral Palsy, other neurological conditions, and those with generalized upper extremity
weakness, feed themselves by allowing them to guide an attached spoon through a food-pick-up
sequence. A gas-spring level damper absorbs tremors and jerky movements; and the self-levelled
spoon eliminates messy spills and ensures horizontal positioning from the bowl/dish to the mouth. The
spoon and pivot assembly, shown in Figure 2-3(b), can be attached to operate either in, or at a right
angle to the plane of the arm. It has a rotating platform on a non-slip baseboard. Since the user only
controls the eating process, no external power source is used.
Figure 2-3: (a) Stable Self Feeding Support [15], (b) Comfy Feeder [15].
Stable Slide: Stable Slide [17] is an arm support designed to provide support during self-feeding
for individuals with tremors, limited strength, or motor control disabilities. The portable
device can be clamped to tables, is fully adjustable both in height and angle, and is available for both
right and left handed individuals. Since it doesn’t have the ability to move the user's arm, it is not
appropriate for those with paralysis or severe weakness.
The next section introduces assistive feeding devices called teletheses, which attach to a human
body part, such as the head, leg or foot. They are passive mechanisms that act as an extension of the
person and rely on the remnant functional musculature of the coupled body part to transform its
motion into a usable motion of an end effector such as a spoon or fork. These mechanisms take
advantage of extended physiological proprioception (EPP)2 to use direct feedback control from the
users to operate the simple device with flexibility and reliability [18].
2 EPP: extended physiological proprioception describes the ability to perceive at the tip of a tool, such as a human extender or a prosthetic limb [Wikipedia].
2.5.2 Human Extenders for Feeding
Eatery: Eatery, manufactured by Do It Yourself and available at Maddak Inc [19], is a non-
articulated device that allows bilateral upper-limb amputees to eat independently without prostheses.
The plastic tray has three compartments and a height adjustable plastic-coated stand. The front of the
tray, as shown in Figure 2-4(a), has two spoon-like projections, and the user takes food directly off
the tray at these projections with their mouth, with the help of the headpiece. The device requires the user
to have some trunk movement and good head control, which is a limitation since people with neck or
spinal cord injuries may not be able to benefit from it. However, these simple devices would be ideal
for non-prosthesis users that are in otherwise good physical condition. The lightweight headpiece is
adjustable and padded for a comfortable fit. The modified spoon and plastic tray are removable. The
headpiece can be used as a pointer if the spoon attachment rod is replaced with a head pointer rod.
Magpie: Magpie [21], represented in Figure 2-4(b), is a purely mechanical, leg operated,
wheelchair-mounted, low cost, assistive device which is designed and manufactured at the Nuffield
Orthopaedic Center in Oxford, England. It can help users not only with feeding, but with other tasks
such as typing, turning pages, and shaving. It has the advantage of providing the user with continuous
feedback by virtue of the direct coupling of the end effector of the feeding device and the human
joints (human legs in the case of Magpie). Its limitation is that it can only be used for those who are
able to move their legs but not their arms. Therefore, people with spinal cord injuries would be unable
to benefit from it, since they are often unable to move their legs as well as their hands.
Figure 2-4: (a) Eatery [20], (b) Magpie assists in eating [21].
HAND Feeder: Head Actuated Nutritional Device (HAND) [21] is a passive, head-controlled
feeding device for quadriplegics. The mechanism, shown in Figure 2-5, is like a telethesis, coupled to
the user’s body part and acting as an extension of the person. The virtual model of the feeding
mechanism, developed at the University of Pennsylvania, is shown in Figure 2-8. This 3-DOF passive
mechanical feeder driven by cables uses head and neck movements to control the movement of a
spoon. The head yaw movement causes the linkage to rotate about a vertical axis and translate in a
horizontal plane to keep the spoon in the line of sight of the user.
The head pitch movement causes the spoon to perform a planar motion that involves scooping up
the food and bringing it up to the mouth. The head roll movement causes the spoon to pitch about a
transverse axis [21]. It transforms the user's head motion into a usable motion of the end effector,
such as a spoon. One of the limitations is that it can only be used by those quadriplegics who have control of
their neck. It also consists of a 6-DOF user-input subsystem and a 3-DOF end-effector subsystem,
which makes it very bulky for individual use and requires considerable space.
The following section introduces the electro-mechanically powered devices that use an electrical
power supply to activate the machine.
It used a Compact Carriage Mechanism (CCM), utilizing the interaction of three shafts, three tension
springs, a rotational damper, and two cams to produce the optimum motion of the utensil. The device
consisted of a mechanism enclosed within a PVC case, a spoon that is detachable for cleaning, a
specially designed bowl, a pad switch for user input and a 12V DC power supply that plugs into a
wall outlet. The device was not commercialized and the spoon had limited degrees of freedom [22].
My Spoon: My Spoon, manufactured by Secom Co Ltd [23], is a powered feeder designed for use
by individuals with spinal cord injury, upper extremity disabilities, or amputation, which allows users
to eat most types of everyday food with minimal help from a caregiver. A base unit, shown in Figure
2-6(a), sits on the table next to a dish with four compartments. The device can operate in manual,
semi-automatic, or automatic modes, with a joystick, a button switch, or a combination of joystick
and button or switch controllers.
There is no vision system for food recognition. Therefore, it is the user's responsibility to choose
the desired food and direct the arm by interacting with the machine through a laser pointing system.
The user operates the robot by head movement alone, pointing at the up/down/left/right/back/forth
buttons on the panel to move the robot arm to the required location and orientation. After the food is
removed from the spoon, the robot arm returns to the home position automatically. A non-contact
sensor and an emergency switch were not adopted as safety measures on this device, because of the
low reliability of the sensor in protecting the user and the inability of a disabled person to operate the
emergency switch quickly. However, it has been stated in [24] that the light weight and low speed of
the robot arm ensure the safety of the user.
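To make this button-driven interaction concrete, the following minimal sketch maps directional commands to end-effector jog offsets in Python. The command names and the 10 mm step size are assumptions for illustration only, not Secom's actual control scheme.

```python
# Jog-style command mapping, as a sketch of the interface described above.
# The command names and the 10 mm step size are illustrative assumptions.
JOG_STEP_MM = 10.0

OFFSETS = {
    "up":    (0.0, 0.0, +JOG_STEP_MM),
    "down":  (0.0, 0.0, -JOG_STEP_MM),
    "left":  (-JOG_STEP_MM, 0.0, 0.0),
    "right": (+JOG_STEP_MM, 0.0, 0.0),
    "forth": (0.0, +JOG_STEP_MM, 0.0),
    "back":  (0.0, -JOG_STEP_MM, 0.0),
}

def jog(position, command):
    """Return a new end-effector target after one directional command."""
    dx, dy, dz = OFFSETS[command]
    x, y, z = position
    return (x + dx, y + dy, z + dz)

pos = (0.0, 0.0, 0.0)
for cmd in ["up", "up", "forth", "right"]:  # e.g., head-pointer selections
    pos = jog(pos, cmd)
print(pos)  # (10.0, 10.0, 20.0)
```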
Beeson Feeder: Beeson Feeder from Maddak Inc [19], shown in Figure 2-6 (b), is for persons
with severe physical or cognitive limitations due to cerebral palsy, SCI, or other impairment
involving movement, coordination, or range of motion. One control operates a spoon to take food to
the mouth level and the other one rotates the plate to keep the food properly distributed for the spoon
to pick up. The user should be cognitively aware of the cause and effect of the two-switch operation,
have two consistent points of motor control for switch activation, and the ability to move the body or
head forward to take food off the spoon.
Figure 2-6: (a) My Spoon [23], (b) Beeson Feeder [19].
Neater Eater: Neater Eater from Therafin Corporation [25], shown in Figure 2-7, is a powered
feeder with programmable arm. The device can be set up for five different diners, but only one diner
can utilize it at a time, and the automatic cycle of the spoon can be controlled in four different ways.
The user can control the spoon or plate cycle with one or two switches that can be pressed with the
hand or knee. It keeps the spoon level as the arm is moved. In a manual version, adjustable springs
help the user to smoothly guide the spoon down into the plate, and back up to the mouth. Adjustable
stops prevent the spoon from moving past the plate or too close to the user, and stop the spoon at the
right height for the user's mouth. In an adapted version, the adjustable handle allows the spoon to be
used with relatively small movement of the user's hands. A plate-turner wheel allows the user to turn
the plate without lifting their hand from their lap. Tall spacers underneath the base help to reduce the
distance the spoon has to travel from the plate to the user's mouth.
Assistive Dining Device: The Assistive Dining Device from Mealtime Partners Inc [27] is a powered
feeder that has rotating bowls, a mechanical spoon, and a positioning arm. The bowls rotate until the
desired food is located under the spoon. To avoid mixing, each food is contained within a single
bowl (Figure 2-8). It can hold up to three bowls of food at one time, each of which holds one cup.
Three general modes of operation are: 1) fully automatic, 2) using one adaptive switch, and 3) using
two adaptive switches. The feeder can be set to operate with numerous combinations of rotational
speed, length of time the device pauses to allow the user to take food from the spoon, minimum dwell
times for the switches, and time settings for spoon retraction after user contact. The operation is done
with the help of a control panel.
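Settings of this kind map naturally onto a small configuration structure. The sketch below is illustrative only: the field names and default values are assumptions, not Mealtime Partners' actual parameters.

```python
from dataclasses import dataclass

@dataclass
class FeederSettings:
    """Illustrative timing configuration for a rotating-bowl feeder of the
    kind described above; all names and defaults are assumed values."""
    mode: str = "two_switch"      # "automatic", "one_switch", or "two_switch"
    bowl_rpm: float = 2.0         # bowl rotation speed
    pause_s: float = 8.0          # pause for the user to take food off the spoon
    switch_dwell_s: float = 0.5   # minimum switch hold time to register a press
    retract_delay_s: float = 1.0  # spoon retraction delay after user contact

# Example: a caregiver tunes the pause for a slower eater.
settings = FeederSettings(mode="one_switch", pause_s=12.0)
print(settings)
```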
Winsford Feeder: The Winsford feeder [31], shown in Figure 2-9, is a single-purpose feeding aid
which enables individuals to feed themselves independently from a standard dinner plate or bowl. It is
controlled by either a chin switch or other types of switches. The height of the feeder may be
adjusted, but the user should have stable head and trunk control. Food preparation and feeder setup is
performed by an attendant.
A rotating plate lets the user pick up food from any location on the plate with the help of a pusher for
placing the food on the spoon. If the amount of food is too little, the plate and pusher may be
activated again to add more food to the spoon; and if it is too much, it may be returned to the plate
and emptied. A cup holder is included to hold drinks that are normally accessed with a straw; and a
drip pan and shelf prevents food from spilling on the user.
Figure 2-9: Winsford feeder [31].
Automatic Feeding Device: The automatic feeding device from Sammons Preston Rolyan [18] is
a battery operated feeder. The speed and sequence of operation is controlled by a chin switch. It has
some features such as an adjustable height stand, spring supported spoon and remote switch for the
hand or foot, but it requires sufficient head control to push the switch and to position the mouth at the
spoon location.
Electric Self-Feeder: The electric self-feeder, made by Sammons Preston Rolyan [15], is a battery-
powered feeder which assists disabled people in eating meals at their own speed. A slight head
motion on the chin switch activates the motorized pusher to fill the spoon and then automatically
moves it to the mouth. The rotation of the plate is controlled for food selection. A bowl may be
substituted for the plate by removing the plate and pusher and adding the turntable, shelf, and drip
pan. The height can be adjusted. The feeder includes a removable hand or foot control for individuals
who are unable to operate the chin switch.
Mila One-Step Electrical Feeder: The Mila Electric Feeder, manufactured by Mila Medical
Company [29] and shown in Figure 2-10, is activated by hand, arm, shoulder or head in one simple
motion. By pushing the padded bar, it lowers a spoon to scoop food while a plate mechanically rotates
to a new position. The base, push bar, and aluminium bar support a detachable spoon, plate and cup
holder. This simple device requires minimal physical control and can be activated by the head or other parts of the body to scoop the food and automatically rotate the plate. It is adaptable to both adult and child sizes and to various types of disabilities. Users have complete control and can eat at their own speed. One limitation of the device is its dependency on a power supply.
Advances in robotics-related technology, together with the limited control users have over electro-mechanical feeders, led designers to develop more intelligent assistive feeding systems [32]. Although there are many commercially available non-intelligent feeding devices, the intelligent systems are mostly still at the research stage.
The following section introduces some of the robotic feeding systems which are mostly articulated
serial manipulators, fully automatic and actively controlled. Some of them use an intelligent user
interface, such as a vision system, speech recognition or speech synthesis, to provide more autonomy
for the users.
Handy 1: Handy 1 [34, 35] was one of the early (1987) intelligent eating-assistant systems (not attached to a wheelchair) and has also been successful in the marketplace. Since then, people with cerebral palsy, motor neuron disease, multiple sclerosis and stroke, as well as the elderly, have benefited from this assistive device (Figure 2-11).
Its ease of use (requiring only a slight touch from the user to operate the system), low cost and aesthetically pleasing appearance have made it successful. It helps the user not only in eating and drinking, but also in washing, brushing teeth and applying make-up. The eating
and drinking system consists of a scanning system of lights that allows the user to select food from
any part of the dish. The user waits for the light to scan behind the desired column of food and then
presses the single switch, which sets the Handy 1 in motion. Two years later, a unique input/output board was designed to slot into the PC controller, incorporating voice recognition, speech synthesis, sensor inputs, joystick control and stepper-motor drivers, to ensure that the design could be easily upgraded in future developments [35].
ISAC (Intelligent Soft Arm Control): ISAC [36-38], from the Center for Intelligent Systems at Vanderbilt University (1991), used a vision system and speech recognition to interact with the elderly
through natural commands [36]. The system, shown in Figure 2-12, contained a 5-DOF manipulator
which was pneumatically driven under a microprocessor-based controller. It benefited from the Rubbertuator, a pneumatic actuator that operates in a manner resembling human muscle: it was lightweight, had a high power-to-weight ratio, and had inherent compliance-control characteristics [37].
Figure 2-12: ISAC at work [38].
The system was equipped with three CCD cameras: one located above the table for monitoring the food, and two in front of and beside the user to monitor the user's face. An image processing board could capture images from up to four CCD cameras. The control software was distributed among several workstations interconnected through an Ethernet LAN. For safety reasons, a collision avoidance subsystem was added, utilizing real-time face tracking, motion prediction and reactive/predictive motion planning. Face tracking planned the approach path to the face and helped in collision prediction/detection, while motion prediction enhanced the performance of the face tracking system and aided collision avoidance. Considering that this robot arm could feed only one person, it was very bulky and required considerable space.
Eater Assist: Eater Assist [39-41], from the Kanagawa Institute of Technology, Japan, utilized a Cartesian robot to handle, move, rotate, and withdraw a spoon. Using a head-mounted space pointer and a personal computer display, the user could control and operate the system by head movement, by blowing into a tube, or by selecting direction/location commands listed on the PC display located in front of them. The system provides two options for moving the robot arm via the CRT (cathode-ray tube) display. One is a set of defined icons on the display, each assigned to a specific movement of the arm (for instance, the letter U for upward movement). The other is the use of an image from the CCD camera: in the example shown in Figure 2-13(b), the robot moves toward a specified point in the picture (such as the mouth).
Figure 2-13: (a) The concept of Eater Assist robot, (b) CRT display [41].
Assistive Robot for Bedridden Elderly: The Kanagawa Institute also developed an assistive device for bedridden elderly people, to help them handle drinking cups and pick up belongings from unreachable locations. The user communicates with the robot using a laser pointing device [42, 43]. As shown in Figure 2-14, the robot is a Cartesian robot with an arm hanging above the user's head that can move toward the object location selected with the laser pointing device.
Assistive Robot Hand: A robot hand, designed at Yamaguchi University, Japan [44], is a 5-DOF
robot with a vision system to recognize and detect the positions of dishes, cups and utensils (Figure
2-15). It includes speech synthesis and recognition software for bilateral communication in case of
image processing failure. Some of the limitations of the proposed system are based on assumptions
about the users and environment that do not work properly in public situations or for users with
limited speaking and hearing abilities. That is, for this system it is assumed that the user can speak
well enough to select some simple commands. Also, the reconfirmation process is cumbersome: each time an object is recognized, the system reconfirms the result with the user by asking whether this is the object (for instance, the first dish) and then waits for a "yes" or "no" answer. It does this for every feeding utensil on the table, and if the position of an object is wrong, it also asks how it can be corrected. This method of communication between robot and user is impractical in places where many people are dining together or where the user's ability to produce a clear, recognizable voice is limited.
Although the reconfirmation process for each object and vocal command may increase the
accuracy of results, it also significantly increases the time taken to complete a task. This time may
exceed the patience of users when they are hungry. In addition, no strategy has been specified to
handle the task of using a fork as a utensil for picking up the food.
Food Tray Carry Robot: People with difficulty in moving their arms can actuate the Food Tray
Carry Robot [45] with very little force applied by a finger. The robot arm is a lightweight
manipulator, set on the floor beside the patient’s bed.
Strain gauges installed in a man-machine interface attached to the robot's tip detect the force applied to the operation plate. A parallel-link system in the radial direction keeps the food tray level with the ground; therefore, no actuator or control system is required to maintain the tray's horizontal orientation.
The next section lists the prices of some of the previously mentioned feeding devices that have reached the marketplace. Prices are not available for all of the aforementioned devices, largely because some have not yet been commercialized and others are still at the research stage.
2.5.6 Discussion on Feeding Devices
The review of the available feeding devices reveals that most are specifically designed for home use by a single individual; no multiple-user feeding device was available on the market. The review also shows that environments outside the home have received little attention, either commercially or in research. The importance of environments such as senior homes, and the difficulties the elderly face in them, motivated the idea of designing a special feeder for people in such settings.
The next section introduces some of the input and output devices and methods for sending
commands to the machine and releasing information to the users, respectively. Then the
appropriateness of each, with respect to its use in feeding devices, mostly for the elderly, and in
public dining areas such as senior homes, is discussed.
This section reviews the features and drawbacks of the different possible and available user interfaces for interaction between the user and a robot or machine.
Button or Switch: A button is an easy-to-use input device that enters just a single command. It needs both a pushing force and a pushing device (a finger, or a simple stick attached to the head/chin). For use with the elderly, buttons should be big, with large printed labels, and should require as little activation force as possible, especially for users with weak muscles.
A switch is also a simple and reliable input device that toggles between on and off to provide a single command, and can be operated by almost any body part, such as a hand, head, chin, or shoulder.
Blow-Activated Switch: Blowing into a tube may be used as an option for clicking a mouse. As its
name suggests, it uses the power of blown air from the mouth instead of fingers; and the pressure of
the air may be transferred via a tube [39-43]. It may be suitable for users with severe upper-arm
disabilities who do not have breathing problems.
Bite-Activated Switch: Biting on a pressure sensor may also be used as an option to replace
manually clicking a mouse. It may be suitable for users with severe disabilities of the upper-
extremities, whose jaw muscles are functional and who can close their mouth and generate varying
degrees of bite pressure. This interface has been used in Chameleon [53], which is a body-powered
rehabilitation robot.
Foot-Activated Pedal: Typically used in seated positions, a foot activated pedal is a simple
interface which uses the force of the foot to move a robot arm. Foot movement information may be
transferred to the robot arm by way of cables. This is an appropriate device for those who have enough ability in and control of their legs and feet, and who want to control the robotic arm themselves.
Joystick: A joystick is an input device for controlling forward, backward, upward and downward
movements. It provides an easier grasp than a standard mouse for those who have grasping problems.
Some assistive devices such as wheelchair-mounted robots, Manus [52] and My Spoon [23], are
equipped with this device as an optional interface. However, people with cerebral palsy, stroke patients who neglect stimuli from one side, and quadriplegics may be unable to make the fine movement corrections necessary to use a standard joystick [51].
Touch Sensitive Panel: A touch sensitive panel is another button-free input device. It has a single,
solid-state sensor pad that can be activated by human touch. There is no membrane to tear, crack or
degrade over time; no moving parts to wear and potentially fail; and no need of significant force. It is
completely sealed within a rigid, laminated substrate that is impervious to many challenging
environments.
Laser Pointing Device: A laser pointing device is another input tool which may be used for those
who cannot use their arms properly. It can be attached to any part of the user's body (such as the head) to point at a control panel or monitor located at a distance. This interface has already been applied in a feeding device [43-45].
Biosignals: An electrocardiogram (ECG or EKG) records the electrical voltage in the heart in the
form of a continuous strip graph for screening and diagnosis of cardiovascular diseases.
Electroencephalography (EEG) is the neurophysiological measurement of the electrical activity of the brain; EEG signals are very sensitive to noise and non-stationary (they vary over time as the subject interacts with the external environment). Electromyography (EMG) is the recording of the extracellular electric field potentials produced by muscle. These biosignals can be used as input when cameras or microphones are not desirable [54] and a more natural means of communication is preferred; however, they involve very complex time-sequential data.
Vision System: A vision system is one of the most popular interfaces used for intelligent devices.
It typically has three parts: a camera, frame grabber and image processing unit. A camera captures the
image and sends out a stream of video data, and then a frame grabber receives this stream and stores
it in memory as an array of digital pixels. A processing unit identifies features of interest in a digital
image. It usually provides information regarding a subject or object. In the case of a feeding system, it
can be used for detection of the user’s mouth and recognition of food, utensils, plates, bowls, or cups
depending on the application. Vision systems have already been applied for feeding devices such as
ISAC [36-38], Robotic Food Feeder [39-41], and Assistive Robot Hand [44].
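As an illustration only (this thesis does not itself include code), the camera, frame-grabber and processing stages might be sketched in Python with OpenCV as follows; the HSV colour range and minimum blob area below are arbitrary assumptions, not values used in any of the cited systems:

```python
import cv2

def find_food_regions(frame, lower_hsv, upper_hsv, min_area=500):
    """Processing stage: threshold a colour range in HSV space and return
    bounding boxes (x, y, w, h) of blobs large enough to be a food item."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture(0)   # camera stage: open the default camera
ok, frame = cap.read()      # frame-grabber stage: one image as a pixel array
if ok:
    # Assumed HSV range for a yellowish food item; purely illustrative.
    print(find_food_regions(frame, (20, 80, 80), (35, 255, 255)))
cap.release()
```

A colour threshold is, of course, only a crude stand-in for the food-recognition stage; it is shown here simply to make the three-part pipeline concrete.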
Voice/Speech Recognition: Voice or speech can be used to convey input commands in a natural
and easy way for communication with a machine. Voice or speech recognition converts the natural
linguistic commands into computer instructions by passing through three steps: feature extraction,
measurement of similarity, and decision making. However, when the user's voice is not very clear or the environment is noisy, recognition and information extraction can be error-prone and difficult to use. In the case of a feeding device, voice recognition was not recommended for My Spoon [23], since the mouth is usually full while eating; however, it is used in ISAC [36-38] and the Assistive Robot Hand [44] for receiving commands from users or confirming them.
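The three steps can be illustrated with a minimal, hypothetical template matcher: frame-energy features stand in for real spectral features, dynamic time warping measures similarity, and nearest-template selection makes the decision. None of this is the recognizer used in the cited systems; it is a sketch of the pipeline only.

```python
import numpy as np

def features(signal, frame=256):
    """Feature extraction: log energy of fixed-size frames (a crude
    stand-in for spectral features such as MFCCs)."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.log(np.sum(frames ** 2, axis=1) + 1e-9)

def dtw(a, b):
    """Measurement of similarity: dynamic-time-warping distance
    between two 1-D feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(utterance, templates):
    """Decision making: pick the command whose template is closest."""
    feats = features(utterance)
    return min(templates, key=lambda cmd: dtw(feats, templates[cmd]))

# Hypothetical usage with synthetic signals standing in for recordings.
rng = np.random.default_rng(0)
templates = {"spoon": features(rng.normal(size=8000)),
             "cup": features(rng.normal(size=6000))}
print(recognize(rng.normal(size=7000), templates))
```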
Eye Blink: An eye blink sensor can be placed near the user’s eye to trigger a mouse click using
blinking, and to enable communication using blink patterns. The device automatically detects a user’s
blink and accurately measures its duration. Voluntary long blinks trigger mouse clicks while
involuntary short blinks are ignored, and sequences of long and short blinks may be interpreted as
semiotic (any material thing that signifies) messages. There is no need for manual initialization,
special lighting, or prior face detection. People who do not have the ability to use their hands, head, shoulder, chin or another body part to activate a switch or button, or who cannot hold their neck and head up in order to operate a machine, may benefit from the eye blink sensor.
Facial/Emotional Expression: Facial or emotional expression-like gestures [55] are very natural
communication methods that may be used to interact with machines. Each facial expression, such as sad, happy, or surprised, would be interpreted differently and would send a specific command to the machine. Some challenges in interpreting the expressions are complexity, ambiguity, and subjectivity. This interface may be suitable for people with speech and hearing impairments.
Head/Eye Movement: Eye or head movement may be used by a person interacting with a machine
as a control signal. Eye or head movements are detected by image processing; however, detecting the movement may be difficult under poor lighting conditions [56].
Eye Gaze: Eye gaze [57], which can act as a pointer and command sender, is a biological signal
related to eye movements that indicates a person's interest in their surroundings. Human intention is determined by estimating the eye gaze direction; however, eye drifting and blinking may cause
problems. The information of face direction is necessary for gaze estimation. A user can move a
computer cursor using only eye-gaze or instruct the robot to pick up objects by looking at them
steadily.
Eye Mouse: An eye mouse, often called an “ocular prosthesis” [58], helps people with severe
upper-limb disabilities to control a computer by estimating the eye gaze direction of the user, and to
locate the mouse pointer of a computer at the fixation point of the user’s gaze. A small camera or
binocular eye-tracker, with the help of infrared sensors in front of the user, tracks and records the eye
movements. The data would be processed by related software to convert these movements into mouse
movements, mouse clicks or double-clicks. Systems that are equipped with a display or monitor and
have a graphical user interface where the user is supposed to enter commands or choices on the screen
may benefit from this user interface.
Light: Light can operate as a simple output signal in the role of a user interface. It might be used
for warning, reminding, or getting attention, when a device emits light at a specific time. Handy 1
[34, 35] used light to scan different foods inside a tray. When it scans the user’s desired food, the user
indicates their choice by pushing the assigned button for that food section.
Graphical User Interface: A graphical user interface (GUI) uses graphical images to represent the information and actions that are available to users. A well-designed GUI makes it easier for users to interact with a machine. An effective GUI facilitates the direct manipulation of data, the learning process, and the interpretation of commands. It allows a user to select from among a dozen tasks and the options within those tasks, and it can sometimes serve as a reminder (if it is not complex) for those
who have problems remembering commands. Some components of a GUI include a pointer, pointing
device (e.g. mouse or trackball), icons (which represent commands), desktop (area for grouping
icons), windows (for running different programs and displaying different files) and menus (to give
choices). The only feeding device that has used a GUI for the user interface so far is Robotic Food
Feeder [39-41].
Cathode-Ray Tube Display: A cathode-ray tube display acts as an output device that shows either
the images taken by a camera, graphical pictures, or commands. It is used to indicate status, identify a function, instruct, give warnings, and display qualitative or quantitative information. If the environment is very noisy, or if the information to be displayed is complex, a visual display can enable more convenient communication with the machine.
In the case of a feeding device, a display has been used to show a picture of each food position so the user can select the desired food, or to show a partial or full picture of the user's face, allowing them to direct the robot manipulator toward the mouth by choosing the mouth location on the display [39-41]. Although the feasibility of this interface is demonstrated in [39-41], nothing is mentioned regarding the time it takes for the user to get the next bite.
Auditory Display: An auditory display is an output device used when an immediate response from the listener is required, such as for an alarm, a reminder, or confirmation of a choice. Auditory displays may consist of simple tones, complex tones and spoken messages. Tones
may be continuous, periodic or non-periodic. Complex tones consist of sounds having more than one
frequency component. Auditory signals should be distinguishable from noise and from other auditory signals; therefore, it is recommended to use signal frequencies that differ from those of the background noise to prevent masking. Spoken messages should be short and simple; if a message is complex, it should be presented in such a way as to get the user's attention first and then give the exact information.
Auditory display, in the form of spoken messages, is applied in the Assistive Robot Hand [44] to confirm with the user the existence of the objects on the table and to verify the user's choices; it provides an optional interface in case the image processing system fails. Auditory display can also be used to remind a user of the necessary steps of eating, which is useful for those with memory problems associated with Alzheimer's disease and dementia.
A summary of the aforementioned user interfaces is shown in Figure 2-16.
A touch-sensitive panel, for example, is completely sealed and impervious to food or drink spills, which makes it a good candidate for the feeding system. In addition, joysticks [50], which may be acceptable for users who have retained some motor dexterity in their hands, may not be suitable for people with upper-extremity disabilities, since they require some mechanical force to be used as a control device.
Table 2-4 specifies how familiar users are with each interface device. In terms of the choice of an input device, the majority of disabled people are familiar only with the joystick and remote control; that is, they will not hesitate to use such an input device [51].
According to Table 2-4, less than 5% of users were familiar with interfaces such as the head movement sensor, roller-ball control, chin-operated control, eye movement control, ultrasonic sensor, voice-activated control, sip & puff switches, and the EEG-based switch.
Among the more intelligent methods of user interaction with robots, vision systems [36-41, 44] and voice/speech recognition [38, 44] have been utilized in systems specifically intended for feeding one disabled person. However, to date there is no record of applying other user interfaces, such as eye blink, human emotion/intention, hand or body gesture, head/eye movement, biosignals (EMG, EEG, ECG), facial/emotional expression or eye gaze, for the purpose of feeding the disabled or assisting the elderly in an eating task.
Some earlier intelligent feeding systems benefited from light, signals, sound, animation and graphical images: to warn users about unreachable locations or the approach of dangerous situations or areas, to scan a food tray with light [34, 35], to confirm receipt of a command via speech synthesis [44], and to accept commands through a menu display (monitors, CRT displays and GUIs) [39-41].
In general, some of the abovementioned interfaces may not be suitable for multiple-user feeding
robots that are intended to be used in dining areas with more than 20-30 people. One of the primary
intentions of the present study is to develop an assistive device to be utilized in dining areas of senior
homes, which are typically furnished with several four-seat tables in a single room. This makes the
environment noisy when residents are eating. Even if the volume is kept to a minimum, external
sounds may still interfere with the user’s voice commands and, in turn, make it difficult for them to
hear sound signals from the system.
Furthermore, in the proposed system two or more users may issue commands at the same time, and since they sit close to one another, differentiating their voices/commands would be a problem. Speakers can be used for sound output only under restricted conditions: the sound can be transferred to each user through an earphone, preventing additional noise and interference with sounds from adjacent tables. In addition, if a visually based interface were used instead, variable lighting conditions might make seeing and identifying objects difficult for the users. Also, the use of a laser head pointer may not be feasible for seniors with head tremors.
As discussed in the next chapter, many of the elderly may not be able to raise their hands properly
or hold their fingers in specific configurations to communicate with the robot using gestures or other
hand-related signalling. Indeed, not only do many seniors have problems grasping and flexing their fingers, but assigning a gesture to a specific command and expecting those gestures to be remembered will likely be beyond the abilities of some elderly users. Since training such a population would be a challenge for any interface, a system that needs little or no training was recommended.
After reviewing different design ideas, analysis of the available products in the market and
characteristics of end users, the elderly population was chosen as the target end user population of a
new feeding device. Since many elderly live in senior homes, and none of the previous designs have
been considered for use in such environments, the project focused on designing a feeding device
which can meet many of the elderly user and caregiver requirements in the dining area of a nursing
home. The next chapter will discuss the observations made of seniors and their caregivers during
meal times at a senior home to better understand the needs of potential users of the proposed device.
Chapter 3
Observation
The main objectives of conducting the observations in the nursing home were to more closely
investigate the eating tasks or procedures of elderly or disabled people in order to: a) find the
potential users of the feeding machine; b) estimate frequency of their needs for such a system; c)
understand the user’s characteristics, behaviour and physical or mental capabilities; d) investigate the
problems that hinder the potential users’ ability to eat or that make eating very messy and/or lengthy;
e) determine the design constraints; f) explore the features that should be added to or removed from
the system according to the user’s impairments; g) inspect different types of foods served, special
utensils used and the methods applied to handle each kind of food while eating; and h) determine the
feasibility of different human-machine user interfaces.
The physical and cognitive differences that may exist among users are important in the design of a
feeding system. These are therefore discussed in the next section.
perceptual capability, such as short term and long term memory, spatial and sequential processing
skills, and learning; 3) differences in affective attributes, such as level of anxiety, tolerance for
frustration, and the need for status or recognition [86]. In general, the robot should be designed so that
it can be safely and effectively operated by users with varying capabilities.
The basic steps for the correct use of anthropometric data are to: 1) define the anticipated user
population; 2) select the percentage of users that is to be accommodated; 3) identify all body
dimensions that are relevant for the design of the product; and 4) obtain an appropriate
anthropometric data table and find the values that are needed. The observations made at the senior
home helped to complete the first two steps by providing useful information about the user
population. The related tables (anthropometric data) for the last two steps are provided in
Appendix A. Relevant anthropometric dimensions, specifically for the feeding system, are: sitting
height, sitting mouth height, sitting eye height, arm reach, head reach, and rotation angle of head.
The appropriateness of anthropometric data depends on the similarity between the sample used in
the survey and the population of anticipated product users. Designing for persons confined to
wheelchairs and the elderly presents special challenges. The eye level and functional reach envelope
for a person in a wheelchair are significantly different from those of an ambulatory non-disabled
person. Since body dimensions vary with age, it is important to know the ages of the product users. In
addition, body dimensions may vary from generation to generation.
The next section reflects the questions raised before and during the observation sessions, followed
by the answers to those questions and a discussion of the findings. The observation study received ethics review and clearance from the Office of Research Ethics and was approved by the Human Research Ethics Committee at the University of Waterloo (UW ORE). Appendix B contains the authorization for this observation from the Office of Research Ethics.
Residents with cognitive problems received dedicated attention in the special care unit. Some residents in this unit were physically healthy and had no difficulty with tasks requiring muscular ability and coordination while eating. People in the regular care unit predominantly demonstrated physical difficulties, although a few exhibited symptoms of the early stages of cognitive problems such as dementia. Table 3-1 summarizes the observation findings.
Table 3-1: Observation results from the nursing home of the "Village of Winston Park" senior home.

Observed Facts | Special Care (SC) Unit | Regular Care (RC) Unit
Upper-limb physical disability that hindered the eating process (unable to feed themselves) | 2 residents (5.7% of SC) were not able to use their hands to feed themselves. | 24 residents (40% of the RC population).
Upper-limb physical disability that made eating very difficult, lengthy or untidy (spilling food or drink) | Lack of strength in the hand to grab the utensil or cup was observed in many cases. Hand tremor and lack of strength were the biggest causes of an untidy eating process. Swallowing/chewing problems (identified by the caregiver and type of food), as well as lack of hand strength, made the eating process lengthy. | 3-4 people used special utensils (spoon/fork with an inclined head) because they could not grasp a regular utensil properly. They often dropped the utensil because of lack of strength in their hands, and some did not have enough strength to cut food by themselves.
Tremor in the hand while eating | Most had hand tremor of varying severity; a few did not, but were slow in eating. | Most had hand tremor of varying severity.
Hand tremor that hindered the eating process | None. | All were able to feed themselves, but eating was untidy and almost half of the food on the spoon was gone before reaching the mouth.
Forgot required steps in the eating process (choosing food, choosing the appropriate utensil, picking up food, bringing food to the mouth, taking food off the utensil, chewing, swallowing) | Forgotten steps were not counted in detail; from one day to the next, various steps were forgotten. In one case the person did not know what she should do. There were 18 people who could not choose the type of food. | None.
Could not cut their food | The 12 who were totally dependent on nurses, plus those without enough hand strength to manipulate a knife easily and safely. | The 24 who were totally dependent on nurses, plus those without enough hand strength to manipulate a knife easily and safely.
Could not scoop with the spoon | At least the 12 who were totally dependent on nurses, but it differed from day to day and from food to food. | At least the 24 who were totally dependent on nurses, but it differed from day to day and from food to food.
Could eat solid food | 25 (71.43% of SC). | 34 (56.67% of RC).
Ability of gel food to be sipped through a straw | Not yet tried in either unit; two nurses thought it would be difficult to sip through a straw because it is very viscous. | (Same as SC.)
Used lipped and divided plates | 6 plates with dividers, which helped users scoop the food more easily (mostly independent residents). | 7 plates with dividers, which helped users scoop the food more easily (some for dependent residents).
Used small spoons | Those fed by caregivers used small spoons. | Those fed by caregivers used small spoons.
Would likely be able to choose the required steps given a picture or sound as a reminder | One nurse thought that too many choices would confuse these elderly, although it depends on what is shown to them and on their behaviour on a given day; he believed this should be tested to determine its feasibility. | Unpredictable, since their cognitive behaviour changes every day; difficult for nurses to predict without a system to test.
Did not open their mouth when caregivers tried to feed them | All; some opened their mouths, but the nurse had to push the spoon into the mouth and some food remained on the lips. | 2 residents were very difficult to feed; most days they closed their mouths very hard even when the caregiver tried to push the spoon gently to their lips.
Pace of eating from one spoonful to the next | 10-15 s for those with swallowing or chewing problems; 5-15 s for independent residents. For independent residents, most of the time was spent scooping and lifting the spoon and struggling to move it smoothly to the mouth, rather than chewing or swallowing. | Some were fast in chewing or swallowing when fed by somebody, but for some it took longer; it took almost 10 s for one resident who was fed by a nurse and was not very fast in chewing and swallowing. (The per-spoonful timing for each resident can be measured in further observations and the average calculated.)
Another part of the observation explored the different typical foods served in the nursing home at each mealtime during a one-week period, in order to categorize them based on their form (e.g. solid, semi-solid, liquid), the way diners handle them while eating (using the hand, fork, spoon or knife), and the possible method a robot would choose to pick up that particular kind of food. This information is given in Table 3-2. This part of the observation not only specified the pick-up method for the robot, but also revealed the frequency of use of the spoon, fork, hand or both, which helped in deciding whether a fork should be used in the system at all. Table 3-3 provides the frequency of use of each utensil.
Table 3-2: Categories of different samples of foods, desserts and salads (meals, soups, sandwiches, desserts, salads).

Food | Form | Utensil used by diners | Utensil for the robot | Pick-up by spoon | Pick-up by fork
Split pea & ham soup / yogurt | Thick, blended | Spoon | Spoon | Possible, easy | Not possible
Carrot & thyme soup / fruit yogurt / cream of wheat / oatmeal | Thick, not blended, has solid material inside | Spoon | Spoon | Possible, easy; solid parts should fit in the spoon | Only for the solid parts
Grilled cheese | Solid, semi-soft | Fork or spoon | Fork | Possible, if cut in small pieces that fit in a spoon | Possible, easy if the pieces are not too small
Hamburger or fish sticks / tartar | Solid, hard | Knife and fork | Fork | Possible, if cut in very small pieces | Possible, easy if the pieces are not too small
Macaroni & cheese | Solid with small parts | Fork or spoon | Fork/spoon | Possible, better when the macaroni pieces are small | Possible, better when the macaroni pieces are big
Mashed potato | Solid, soft, sticky | Spoon/fork | Spoon/fork | Possible, easy | Possible, easy
Beef / hot chicken / hot dog sandwich | Solid | Hand or fork | Gripper/fork | Not possible | Possible, if cut in pieces that can be picked up
Toast / bread | Solid, fluffy or dense (depending on type) | Hand/knife | Gripper/fork | Not possible | Possible, easy when cut in pieces and dense, but difficult when very fluffy
Mixed vegetable salads (cucumber, tomato, broccoli, …) | Solid, semi-soft, minced, has juice | Spoon | Spoon | Possible, usual, when the pieces are very small | Possible, if the pieces are big enough to handle with a fork
Assorted cakes | Solid, fluffy | Hand/fork | Fork/spoon | Possible if cut in small pieces, or to pick up small parts remaining on the plate, but not usual | Possible, if the pieces are not too small or too fluffy to hold together when picked up
Table 3-3: Percentage of usage of spoon, fork or both in a one week menu.
Some residents with cognitive problems could not connect their hunger to food or to feeding. They forgot the required steps for feeding themselves, even chewing or swallowing, and some frequently needed to be reminded of the next task after finishing each step. According to the observations and the nurses' experience, they behaved differently from day to day, with no regular or predictable pattern, and they easily became confused when they had many options to choose from.
The behaviour of elderly residents with cognitive problems in response to a new device, and their level of adaptability, might be quite unpredictable. A dedicated feeding-device design for this population may therefore not be necessary. However, the way the machine and user interact may be extremely important in ensuring that a user's cognitive disabilities are addressed, to ultimately permit comfortable and stress-free feeding. This suggests that much of the design focus for this group of potential users should be on the application of appropriate user interfaces. An appropriate interface would help users obtain a good understanding of the environment and the tasks required in the eating process.
Any device or method applied to or integrated with a feeding system that can keep track of forgotten or incorrect steps, and can guide the user through the next required step by reminding them and giving the required instruction, would be extremely helpful. For this population, a feeding device equipped with appropriate user interface(s) might assist those who suffer from upper-limb physical disabilities or dysfunctions in addition to memory problems.
About 40% of the observed residents were totally dependent on caregivers and would not be able to use the proposed feeding machine. They would be unable to reach the end of a spoon or fork and would need to be closely monitored by their caregivers to avoid unpredictable accidents.
Among the remaining 60%, more than 40% had problems such as hand tremor, lack of strength to hold a utensil, or severe joint pain in the arm, wrist or fingers. They had difficulty manipulating a spoon or fork and directing it toward the mouth; in many cases almost half of the food fell from the spoon because the person could not hold it at the right angle after scooping. About 11.7% of the elderly used a lipped plate with dividers to help them scoop their food more efficiently. For each user, 3-4 different kinds of food and dessert and 2-4 cups were provided. Most of the solid foods (between 40% and 60%) were already cut into pieces for those who did not have enough strength to do so, and many of the foods (about 31.7%) were pureed for those with chewing or digestion problems. Approximately 11.7% of the residents consumed gel foods because of chewing and swallowing difficulties.
The eating process was considered fast if the interval between putting the spoon/fork into the mouth was 4-6 s, and slow if it was more than 10 s. The results showed that more than 43% of the people with chewing or swallowing problems were slow or very slow in eating, while for the remaining individuals, who did not share those physical disabilities, the interval varied from 5 to 15 s in the slowest cases. According to the observations and the caregivers' opinions, many elderly people in that environment could already benefit from such a feeding device, although there is some uncertainty about the level of adaptability to be expected should they attempt to use such a system.
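For illustration, the fast/slow criterion can be written directly from the stated thresholds; the "moderate" label for the in-between range is an addition for completeness, not a category used in the observations:

```python
def classify_pace(interval_s):
    """Label the interval between spoonfuls using the thresholds adopted
    in the observations: fast if 4-6 s, slow if more than 10 s."""
    if 4.0 <= interval_s <= 6.0:
        return "fast"
    if interval_s > 10.0:
        return "slow"
    return "moderate"

print([classify_pace(t) for t in (5.0, 8.0, 12.0)])  # fast, moderate, slow
```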
In both the special and regular care units of the nursing home, many elderly people dined together
at standard four-seat tables. The limited time allocated for each meal and the daily progression of
physical and mental disabilities of the elderly made mealtime very challenging not only for the
residents, but also for their caregivers. Indeed, one nurse could respond to a maximum of two diners at the same time, and the needs of all diners could only be met with the assistance of the limited number of staff members available.
1) Assignment of one feeding device to a maximum of four people in such institutions would
dramatically reduce the number and consequent costs of machines and nurses or caregivers.
2) The time gap required for one person to chew and swallow could be allotted to feeding another person sitting at the same table, particularly since the gap may be longer for elderly individuals with slower eating paces (see the scheduling sketch after this list).
3) To date, almost all proposed feeding systems for assisting elderly or disabled people with upper-limb dysfunction have been designed for single-user use; little effort has been made to design a multiple-user feeding machine. The novelty of a multiple-user feeding system is additional motivation to test its feasibility in environments where it would be useful.
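As a sketch of point 2, a single robot can interleave diners by always serving whoever will be ready next. The chewing and serving times below are hypothetical placeholders in the spirit of the observed 5-15 s intervals, not measured values:

```python
import heapq

# Hypothetical per-user chewing/swallowing times (s) and serving time (s).
CHEW_TIME = {1: 12.0, 2: 8.0, 3: 15.0, 4: 10.0}
SERVE_TIME = 5.0  # time for the robot to scoop and deliver one spoonful

def schedule(bites_left, horizon=120.0):
    """Greedy single-robot schedule: while one diner chews, serve whoever
    is ready next. Returns a list of (time, user) serve events."""
    ready = [(0.0, u) for u in bites_left]   # (time user is ready, user)
    heapq.heapify(ready)
    t, events = 0.0, []
    while ready and t < horizon:
        r, u = heapq.heappop(ready)
        t = max(t, r) + SERVE_TIME           # robot busy serving user u
        events.append((t, u))
        bites_left[u] -= 1
        if bites_left[u] > 0:
            heapq.heappush(ready, (t + CHEW_TIME[u], u))
    return events

print(schedule({1: 3, 2: 3, 3: 3, 4: 3}))
```

With these numbers the robot is never idle while any diner is waiting, which is exactly the effect the multiple-user design aims to exploit.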
The next chapter provides details of the design of a multiple-user feeding robot, food tray and the
setting of the whole system, along with both the user and the robot characteristics.
Chapter 4
Design of Feeding Robot
The focus of the design is a system capable of feeding multiple elderly or upper-limb-disabled adults using a serial articulated robot located on a table with a maximum of four seats. The typical characteristics of the potential users and the robot, and the design assumptions, needed to be defined before proceeding with the design. Throughout the project, a virtual feeding-robot system has been used to evaluate the feasibility of the proposed device as a multiple-user feeding system. The design of a virtual prototype consisting of a robotic manipulator, food trays, and table is explained in this chapter, and the feeding process is planned for multiple users. The virtual prototype not only provides the information needed for fabrication, but also serves as a communication tool for architectural development and evaluation.
Profile | User Characteristics for the Feeding Device
Mental status | Cognitively aware of the environment (those with severe dementia or Alzheimer's symptoms are not included as target users).
Physical status | Those who have weak muscles or joints in their hands or arms, suffer from muscle stiffness and cannot grab or handle a spoon or fork easily, or have significant tremor in their hands while eating, are the target users. The user has control of their neck and head muscles.
Talking status | Able to articulate clearly, such that all words and characters are recognizable by others.
Occupation | Usually retired or not employed; reside in senior homes, nursing homes or hospitals where they receive special care.
Specialized skills | None required; the device should be easy to use, although training would be provided for long-term care.
4.2 User’s Safety
User safety is a very important factor in a feeding system, since the robot and its users will interact closely in the same unstructured environment. In an unstructured space there are possibilities for user injury, for example if the robot accidentally pushes, pinches, or hits a part of the user's body. The following criteria should be met to ensure user safety:
1. The robot's end effector should avoid hurting the user by stopping at the closest defined distance from the user's mouth (see the sketch after this list); this is especially important when the robot is using a fork, which has pointed tines. If the user's mouth is beyond the robot's workspace (i.e., the user is farther than the defined allowable distance from the robot), the robot should notify the user to sit closer to the table's edge. To reduce the need for such notification, information on the proper sitting distance from the table can be given to the user during training on the new feeding machine.
2. The user must have sufficient control of their neck and head to keep it upright, or at an angle the nurse considers safe; this decreases the potential for choking while swallowing. The end effector should not reach all the way to the user's mouth, but should require the user to reach slightly for the spoon. The force applied by the robot should stay within a range where the likelihood of injury is minimal, and the spoon or fork should not retract while it is inside or touching the user's mouth.
3. The robot should not operate when the user has a continuous head tremor. Not only would this condition make the user's mouth very difficult to track, but it may make the force sensor at the end of the end effector unreliable when touching the user's mouth; incorrect data may lead to excess applied force and injury.
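Criterion 1 amounts to clamping the commanded goal short of the mouth. Here is a minimal sketch, assuming a straight-line approach and a hypothetical 15 cm safety distance (the thesis leaves the exact value to be defined):

```python
import numpy as np

D_SAFE = 0.15  # assumed closest allowed spoon-to-mouth distance (m)

def approach_point(tip, mouth, d_safe=D_SAFE):
    """Clamp the spoon's goal to a point d_safe short of the mouth along
    the straight-line approach; do not advance if already at the limit."""
    tip, mouth = np.asarray(tip, float), np.asarray(mouth, float)
    v = mouth - tip
    dist = np.linalg.norm(v)
    if dist <= d_safe:
        return tip                      # at or inside the limit: hold position
    return mouth - d_safe * v / dist    # stop d_safe short of the mouth

print(approach_point([0.0, 0.0, 0.8], [0.0, 0.5, 1.0]))
```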
Solid foods, cut into pieces to be picked up by the fork, would be placed into the shallowest section of the food tray. Soups or liquid foods would be poured into the deepest section of the food tray, and solid/semi-solid foods which require a spoon to be scooped would be placed in the remaining sections of medium depth. Also, the contents of the drinking cups (juice, milk, water or coffee/tea) would already be known to the user, either by labelling, colour or fixed position. The user, who would have control of his/her neck and head, would be seated upright or at an angle that is safe for eating.
Of the possible shapes, the arc-shaped plate shown in Figure 4-1(c) was chosen for these reasons: 1) the robot can be located at the centre of the arc, which makes it easier for the robot to feed multiple persons; 2) scooping the food is much easier compared to a square or round plate with three or four compartments, as shown in Figure 4-1(a) and (b); and 3) the food trays can be placed beside each other with one robot at the centre for feeding four users (as shown in Figure 4-7).
The height of each drinking cup depends on the cross-sectional area of the container, but the volume should be at least 250 ml. For this design, a circular cross-section has been chosen. Drink containers should have handles to make grasping easier for the robot gripper, and the shape of the handle should be carefully considered since it will affect the type of gripper, the grasp type and the grasp pose of the end effector.
Figure 4-2: Drinking cup with a cylindrical handle (dimensions in cm; the handle is inclined 60° to the horizontal).
To simplify grasping the handles of the cups, forks and spoons, all handles are cylindrical with the same diameter and set at the same angle of 60° to the horizontal axis, as shown in Figure 4-2. Since the robot is placed at the centre of the table, there is no difference in the robot's ability to reach each user. The robot's task is further simplified by assuming that it places the cups, forks and spoons in the same position and orientation in each user's tray. The spoons and forks have holders that keep them in a predefined position and orientation. The dimensions of typical spoons for adults are given in Table 4-2.
The size of the food plate, the number of sections and the positions of the cups should be specified in the food tray layout. The capacity of each food section is based on that of a typical serving, and the number of food sections depends on the number of different foods served to each individual. The inner shape of the food compartments should be specified based on the type and form of the food: liquid or semi-liquid foods, such as soups, need a deeper plate with an inner structure ergonomically shaped to facilitate scooping, while solid foods, which are typically cut into pieces and assumed to be picked up by a fork, can be placed in shallow plates without specially modified inner structures.
Figure 4-3: Possible feeding angles (a) straight spoon with thick handle for front feeding, (b) inclined
spoon for easier scoop, (c) inclined spoon for semi-side feeding, (d) inclined spoon for side feeding.
The goal is to fit four cups and four food sections into the available space: a 90° arc with a width of 26 cm and an outer radius of 55.5 cm. Based on calculations of the minimum amount of food and liquid required by users, the positions of the food sections, cups, spoon and fork were determined so that all utensils and food sections fit into the limited arc-shaped area shown in Figure 4-4(a). The final layout of the food tray is shown in Figure 4-4(b). The area of each food section in this layout is approximately 275 cm², which is slightly more than a typical serving requires and guarantees enough space for food.
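As a quick arithmetic check of this layout (the inner radius is 55.5 - 26 = 29.5 cm), the total area of the quarter-annulus is

\[
A_{\text{arc}} = \frac{90^{\circ}}{360^{\circ}}\,\pi\left(R^{2} - r^{2}\right)
= \frac{\pi}{4}\left(55.5^{2} - 29.5^{2}\right) \approx 1736~\text{cm}^{2},
\]

so the four food sections (4 × 275 = 1100 cm²) take up roughly two-thirds of the tray, leaving the remainder for the four cups, the spoon and the fork.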
Two sections of the food tray are flat for the foods that are supposed to be picked up by the fork,
and two sections of the food tray are deep and sloped for foods that are to be scooped up by the
spoon. The amount of empty space is minimized and the available room is used for fitting four food
sections, four similar cups, one spoon, and one fork. The food sections are located in the center of the
arc and the cups and fork/spoon are positioned at the sides. The layout is almost symmetric, which allows the robot to approach each object in the tray in a nearly identical way.
Figure 4-4: (a) Top view of the considered area for fitting utensils, (b) Arrangement of the food
plates, cups, fork and spoon. The directions of all handles are towards the center.
Figure 4-5: Deep sloped plate for liquid/semi-liquid foods and desserts that can be scooped by a spoon (d1 ≅ 4 cm, d2 ≅ 2 cm, n ≅ 1.5 cm, m ≅ 1 cm).
Figure 4-6: Flat plate for the foods/desserts that can be picked up by a fork (d1 ≅ d2 ≅ 2 cm).
After completing each task, the robot arm can return to its rest position, where the end effector and all links are coplanar. A slight turn of the waist then aligns the links with the plane of the object that is about to be placed on or picked from the tray. As mentioned, the depth and inner shape of the food sections are specified according to the maximum required volume and the type of food. The deep sloped plate shown in Figure 4-5 has the following advantages: 1) the sloped side walls match the angle of the spoon as it reaches toward the food, providing a smoother path as the spoon dips into and out of the plate; 2) the rounded corners make a better trajectory for the spoon; and 3) the sloped bottom helps fluid or semi-fluid foods slide down and pool at the deepest point, ensuring that any food remaining in the plate can be scooped by the spoon. For foods that are picked up by the fork, however, the flat shallow plate shown in Figure 4-6 works better.
Figure 4-7: Top view of the position and arrangement of four food trays for four users, the users are at
least 25 cm away from the food tray edge.
The position and arrangement of the four food trays for four users are shown in Figure 4-7. The top view of this arrangement illustrates the distance of the user's mouth from the edge of the table: users in this design are almost 15 cm from the edge of the table and 25 cm from the edge of the food tray. A three-dimensional virtual representation of the four food trays, containing deep and flat plates, four cups, a spoon and a fork for each user, as well as the robot in the centre, was modelled in ADAMS, as shown in Figure 4-8.
Figure 4-8: 3D model of the robot located in the center of the table along with four food trays.
1. It is small enough to fit on a four-seat table with a standard height of 72-74 cm (in an area of almost 60 cm diameter, with no objects inside and no extra obstacles).
2. It is able to feed 3-4 people at the same time.
3. It is a serial manipulator that can rotate almost 350-360° at the base, providing a large workspace and responding to all users.
4. The spoon or fork lifts no more than the weight of the food, and, therefore, a payload of 2-3
kg is sufficient.
5. It can reach predefined locations on the dining table to pick up a spoon, a fork or any of the cups for each user.
6. It feeds users different kinds of solid or liquid foods, provided the solid ones have already been cut.
7. It picks up the user's desired food each time, as commanded through an input device.
8. It scoops up the user's chosen food with the spoon and takes it to the user's mouth.
9. The feeding pace may be changed by the user; the eating process is repeated until the dish is empty, and the robot's speed changes accordingly.
10. The feeding pace is expected to match the user's eating pace.
11. It optionally offers different user interfaces suited to the differing capabilities of elderly or disabled users.
12. Operation does not require the user to have specialized knowledge of feeding machines.
14. It accomplishes its tasks safely, with minimal supervision on the part of care providers.
15. It takes the spoon or fork to a position close to the user's mouth, but not into the mouth (the safest distance should be defined).
16. Any button or switch used to command or control the machine should be big enough to be pushed, moved or grabbed by the user.
17. All written notes, warnings, names and pictures should be printed in large fonts so that the users can see them (most have poor vision).
18. The joint rotation angles and link lengths should provide a maximum reach of 800-836 mm.
19. The height of the robot's waist is preferably below the user's eye level when the user sits at the table (psychologically better, since the robot is less obtrusive).
20. In case of accident or emergency, the robot should be able to stop immediately.
The next section provides information about the selected robot, which has characteristics similar to those desired. The robot's reachability and workspace will be evaluated against the location of the user, especially the location of the mouth and eyes.
To evaluate the reachability of the selected robot's end effector, a schematic side view of the robot links and their rotation angles, together with a standard table and one food tray for a typical user, was used, as shown in Figure 4-9.
Figure 4-9: Average anthropometric dimension of an adult user [25], size of a typical standard chair
and table, with respect to one food tray and also the proposed robot which has the dimensions of a
Thermo CRS-A465 articulated robot (schematic diagram is to scale, dimensions in mm).
The anthropometric data (Appendix A), taken at the maximum of the given range for an average-sized adult man, were used to represent the typical user of the robot. Most of the heights shown in Appendix A are slightly less for the elderly, 65 years of age and older, since their backs are more curved and they shrink in stature as they age.
To be conservative in the workspace calculations, the greatest body heights should be considered; this ensures that the robot's end effector will have no problem serving users whose mouths are at a lower height. As shown in Figure 4-9, the selected robot is able to cover the desired points in space and reach the closest safe distance to the user's mouth. It is assumed, for safety reasons, that the user's mouth is almost 15 cm from the edge of the table and that the tip of the spoon/fork does not extend past the table's edge.
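A side-view reachability test of this kind can be approximated with a planar two-link model. In the sketch below, the link lengths and shoulder location are assumed round numbers chosen only so that the total reach falls within the quoted 800-836 mm range; they are not the actual Thermo CRS-A465 parameters:

```python
import numpy as np

# Assumed planar two-link approximation of the arm (mm).
L1, L2 = 410.0, 410.0               # combined reach 820 mm
BASE = np.array([0.0, 740.0])       # assumed shoulder height above the floor

def reachable(point, margin=0.0):
    """True if a planar 2-link arm rooted at BASE can place its tip at
    `point` (side-view coordinates, mm), with an optional safety margin."""
    d = np.linalg.norm(np.asarray(point, float) - BASE)
    return abs(L1 - L2) <= d <= (L1 + L2 - margin)

# e.g. a mouth 500 mm out from the base and 1100 mm above the floor:
print(reachable([500.0, 1100.0], margin=20.0))  # -> True
```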
The next step was to add cameras and specify their locations in the system for acquiring images
from the users’ faces and the food tray.
Figure 4-10: Arrangement of cameras versus food trays and users (user 4 is not shown). Cam Ui tracks the i-th user's mouth and Cam Fi recognizes the food and the presence of utensils in the i-th tray.
Figure 4-11: Arrangement of eight cameras with respect to the users and the food trays.
The locations of the users' mouths can be defined symbolically to provide the link to further research on the system. If real-time mouth tracking is set up in the future, the resulting lip locations can be substituted for the assumed locations.
The following section categorizes the required tasks which the robot should accomplish; a breakdown of each task into detailed subtasks was attempted.
Table 4-3: Continued.
System Variables/Parameters | Reference Name
Order of commands (with respect to time) | r
Location of the center of the mouth of user i (i = 1:4) | CMi
Norm of the base of the spoon, user i (i = 1:4) | NSi
Norm of the base of the fork, user i (i = 1:4) | NFi
Norm of the bottom of cup m, user i (i = 1:4)(m = 1:4) | NCmi
Location of the end of the fork handle, user i (i = 1:4) | EFi
Location of the end of the spoon handle, user i (i = 1:4) | ESi
Location of the end of the cup m handle, user i (i = 1:4)(m = 1:4) | ECmi
Orientation of the fork handle (with respect to the stationary frame), user i (i = 1:4) | OFi
Orientation of the spoon handle (with respect to the stationary frame), user i (i = 1:4) | OSi
Orientation of the cup m handle (with respect to the stationary frame), user i (i = 1:4)(m = 1:4) | OCmi
Food tray inner edge geometry for user i | Inedi
Food tray outer edge geometry for user i | Outedi
The path of the fork for all users (array of points) | PF
The path of the spoon for all users (array of points) | PS
The path of cup m for all users (array of points)(m = 1:4) | PCm
Closest distance to any user | CD
Vector representing the point at the closest safe distance to the user's mouth, which the edge or tip of the utensil should reach | tip
Another user's waiting time after sending a command | WT
Another user's maximum waiting time after sending a command | WTmax
Utensil holding time (for being unloaded) | HT
Utensil maximum holding time (for being unloaded) | HTmax
All the points in the workspace of the robot (considering the constraints) | Workspace
General command | GComd
General command with order r, received from user i | GComdri
Table 4-4: Acceptable commands from users.
Table 4-5: Continued.
Function (Subsystem) | Function Name
Lift cup mi, move it along a predefined path, and keep the cup horizontal | LiftCupmi
Calculate the tip position of the fork, spoon, or the edge of the drinking cup | Calc tip
Hold the utensil in the calculated tip position and read the holding time from the timer | Hold
Return the fork from the holding position to the same food section and remove the food from the fork (assumes the user is refusing to eat; the system then waits for the next command) | DumpF
Return the spoon from the holding position to the same food section and remove the food from the spoon (assumes the user is refusing to eat; the system then waits for the next command) | DumpS
Return the cup from the holding position to its original place (assumes the user has sent the return-Cm command; the system then waits for the next command) | RtnCm
Return the fork close to the inner edge of the same food section and get ready to pick up the food | RtnF
Return the spoon close to the inner edge of the same food section and get ready to scoop up the food | RtnS
Displaying Message Section
Display message F: "There is no fork, please insert it or select only from section 1 or 2 in the food tray" | MsgF
Display message S: "There is no spoon, please insert it or select only from section 3 or 4 in the food tray" | MsgS
Display message Cm: "There is no cup m, choose other cups during the process" | MsgCm
Display message Seck: "Section k has not been found, please choose from other sections" | MsgSeck
Table 4-5: Continued.
Function (Subsystem) | Function Name
Display message ChooseS: "Please choose your food from section 1 or 2" | MsgChooseS
Display message ChooseF: "Please choose your food from section 3 or 4" | MsgChooseF
Display message ChooseSecS: "Please choose the spoon for your food" | MsgChooseSecS
Display message ChooseSecF: "Please choose the fork for your food" | MsgChooseSecF
Because the robot interacts with multiple users, cameras and objects, some additional tasks must be accomplished, such as managing the commands received from different users and the images acquired from different cameras. The procedures that the robot should follow to accomplish the required tasks are shown in flowcharts; these flowcharts, in Figures 4-12 to 4-21, make it easier for the system to be programmed.
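As a rough illustration only (this is not the thesis' implementation), the following MATLAB sketch shows one way the received commands GComdri could be served in their order of arrival r, discarding any command that has waited longer than WTmax (Table 4-3); the command names and all values here are assumptions.

% Illustrative sketch (assumed logic): serve commands in arrival order r,
% discarding any command that has waited longer than WTmax.
WTmax = 60;                                   % assumed maximum wait [s]
queue = {};                                   % FIFO of pending commands
queue{end+1} = struct('user', 2, 'cmd', 'PickF', 'tsent', 0);  % examples
queue{end+1} = struct('user', 4, 'cmd', 'PickS', 'tsent', 5);
tnow = 10;
while ~isempty(queue)
    c = queue{1};  queue(1) = [];             % dequeue the oldest command
    if tnow - c.tsent > WTmax
        fprintf('user %d: command %s expired\n', c.user, c.cmd);
    else
        fprintf('serving user %d: %s\n', c.user, c.cmd);
    end
end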
Figure 4-12: Multiple-camera management.
Figure 4-13: User's face recognition and mouth tracking section.
Figure 4-14: Checking the availability of the users and objects, and object recognition section.
Figure 4-15: Messages sent to the users in case of unavailability of each object.
Figure 4-16: Acceptable commands by the feeding robotic system.
Figure 4-17: Robot's tasks after receiving the command for picking up the fork.
Figure 4-18: Robot's tasks after receiving the command for picking up the spoon.
Figure 4-19: Robot's tasks after receiving the command for picking up any of the cups.
Figure 4-20: Messages sent to the users for choosing an appropriate utensil for picking up the food according to the chosen section of food.
Figure 4-21: Robot's tasks after receiving the command for holding any of the utensils.
The next chapter explains the kinematics and dynamics of the system. In the kinematics section, the transformation matrices are found and the inverse problem is discussed. The dynamics section provides the related information and data when the robot is in action, such as the velocities and accelerations of the links, joints and desired specific points. It also discusses the singular positions of the system, which should be avoided. The control system section provides details regarding position control of the end effector on the desired path. The control procedures are carried out with the help of ADAMS/Controls and MATLAB 7.2 to control the path of the end effector in a virtual environment.
Chapter 5
However, two drawbacks of numerical techniques are their inability to find all the solutions [59] and their slowness for practical applications. Pieper [62] proved that if a manipulator has three consecutive joint axes that intersect at a single point, a closed-form inverse position solution exists. To lessen the amount of calculation and to ensure closed-form solutions, it is possible to arrange the last three joints in such a way that they meet the criteria specified by Pieper. The 6-DOF CRS robot was selected for this project partly because all of the axes of its three wrist joints intersect at one point; this simplifies the equations and reduces the problem to one that has a closed-form second-order solution.
For the forward kinematic problem, the Denavit-Hartenberg table is used to model the 6R
manipulator and to develop the transformation matrices. The results are summarized in Appendix C.
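To make this step concrete, the following MATLAB sketch composes the link transformations from DH parameters using the homogeneous transform of Equation D-1 in Appendix D; the twist angles shown are illustrative assumptions, not the values of the robot's actual DH table.

% Sketch: forward kinematics of the 6R arm by composing the DH
% transforms of Equation D-1 (Appendix D). The twist angles below are
% illustrative assumptions, not the robot's documented DH values.
function T = fk_6r(theta)
    lb = 0.35;  lc = 0.33;                  % link lengths (Appendix D)
    a     = [0, lb, lc, 0, 0, 0];
    d     = [0, 0, 0, 0, 0, 0];
    alpha = [pi/2, 0, 0, pi/2, -pi/2, 0];   % assumed twist angles
    T = eye(4);
    for i = 1:6
        T = T * dh(theta(i), d(i), a(i), alpha(i));
    end
end
function A = dh(th, d, a, al)
    % One-joint homogeneous transform, Equation D-1
    A = [cos(th), -sin(th)*cos(al),  sin(th)*sin(al), a*cos(th);
         sin(th),  cos(th)*cos(al), -cos(th)*sin(al), a*sin(th);
         0,        sin(al),          cos(al),         d;
         0,        0,                0,               1];
end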
trajectory generation based on time-scaling; 7) workspace transformation; and 8) bordered matrix
method [66].
Figure 5-1: Joint angles θ1 to θ6 and joint torques T1 to T6 of the 6-DOF manipulator.
If the joint angles are defined as shown in Figure 5-1, the simplified form of the Jacobian matrix for
the selected 6-DOF serial articulated manipulator is:
where:

s23 = sin(θ2 + θ3)   (5-4)
c23 = cos(θ2 + θ3)   (5-5)

At the singular positions the above determinant equals zero, det(J) = 0. Since lb and lc are nonzero, any of the following three cases leads to a singular position:

s3 = 0   (5-9)
s5 = 0   (5-10)
c23 lc + c2 lb = 0, or equivalently c23/c2 = -lb/lc   (5-11)

This implies that at any of the following joint angles the robot arm is in a singular position:

θ2 + θ3 = cos⁻¹(-(lb/lc) c2), where θ2 ≠ ±90°   (5-14)
Since the range of motion for joint 3 is ±110° (Appendix C), of the singular conditions shown in Equation 5-12, only θ3 = 0 leads to a singularity. Similarly, the range of motion for joint 5 is ±105°, and of the possible singular positions for this joint represented in Equation 5-13, only θ5 = 0 results in a singularity for this robot. For the second joint, however, with its ±90° range of motion, the internal singularity presented in Equation 5-14 happens exactly at or in the vicinity of θ2 = ±90°, which should be avoided.
To avoid the singularities, the ranges of motion defined by the user's manual will be modified slightly by considering the singular angles and conditions.
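A minimal sketch of how these conditions could be monitored in software, assuming the link lengths lb = 0.35 m and lc = 0.33 m given in Appendix D and an arbitrary tolerance:

% Sketch (assumed tolerance): flag configurations near the singular
% conditions of Equations 5-9 to 5-11 before commanding a motion.
function near = near_singularity(theta, tol)
    lb = 0.35;  lc = 0.33;                   % link lengths (Appendix D)
    if nargin < 2, tol = 1e-2; end
    s3 = sin(theta(3));                                     % Equation 5-9
    s5 = sin(theta(5));                                     % Equation 5-10
    w  = cos(theta(2) + theta(3))*lc + cos(theta(2))*lb;    % Equation 5-11
    near = abs(s3) < tol || abs(s5) < tol || abs(w) < tol;
end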
Rigid bodies are represented by blocks and joints by arrows. The arrows connect reference frames
that are fixed on each body, which are shown as circles on the bodies to which they are affixed. The
position and orientation of any other frame on the body is defined relative to this primary reference
frame. After saving the system model in a DynaFlexPro input file, both kinematics and dynamic
equations of the system were generated. In the model, all the generalized coordinates are independent,
and dynamic equations governing the system response constitute a set of ordinary differential
equations (ODEs). The ODEs for the dynamic response can be solved simultaneously with nonlinear
algebraic equations.
After formulating the system equations, the next step was to simulate the kinematic, inverse dynamic and forward dynamic equations. Maple uses built-in numerical routines (e.g., fsolve, dsolve) to solve these equations for the time response of the system [67]. The complete description of the DynaFlexPro input model generated by the Model Builder (MB) is in Appendix D. This .dfp file was exported to Maple for generation of the equations.
Motion 1 is attached to Joint 3 (between Part 14 and Part 7).
Figure 5-3: Magnitude of the translational displacement (continuous line), translational velocity
(dashed line) and translational acceleration (dotted line) for Motion 1.
Figure 5-4: Magnitude of angular velocity (continuous line) and angular acceleration (dashed line) for
Motion 1.
Figure 5-5: Magnitude of the element torque (continuous line), element force (dashed line) and power
consumption (dotted line) for Motion 1.
Figure 5-6: The x (continuous line), y (dashed line) and z-components (dotted line) of the element
torque for Motion 1.
Figure 5-7: The x (continuous line), y (dashed line) and z (dotted line) components of the translational
displacement for Motion 1.
Figure 5-8: The x (continuous line), y (dashed line) and z components (dotted line) of the translational
velocity for Motion 1.
Figure 5-9: The x, y and z components of the translational acceleration for Motion 1.
For an activity that requires force measurement and control in addition to position control, compliance is necessary. An example is a task where movement continues until contact is made with a surface, and constrained motion follows. Compliance is important in planning fine-motion strategies. It is required when the robot is constrained by the task geometry, or when the robot is in contact with its environment, and it can be achieved by active or passive means. Active compliance control relies on a force sensor and an algorithm that moves the robot according to the force sensor readings. Passive compliance is needed to overcome the limited position resolution and to enhance the disturbance rejection capabilities [70].
Compliance is undoubtedly a first step in ensuring safety when workspace sharing is allowed, but it is particularly useful in facilitating effective human-robot interactions that permit physical contact and cooperation. The eating action requires the robot positioning to adapt to the user's movements: to the relative position between the user's body and the robot arm, as well as to the shape and current position of the user's body parts, depending on the specific task. In designing the control of human-assistive robots, three important considerations are safety, human-robot interaction, and functionality. The goal is then to find the best trade-off among safety, effective human-robot interaction, and accurate execution of the tasks [71].
Service robots are designed to live among humans, to be capable of manoeuvring in human-
oriented environments and to have substantial autonomy in performing the required tasks in such
complex environments. They must coexist with humans who are not trained to cooperate with robots
and who are not necessarily interested in them. Safety must be guaranteed with these robots, since
they are in the presence of humans in the same work space [72]. The method of collision free
planning for industrial robots, which is based on previous knowledge of the environment, is not
applicable in unstructured situations [73]. The non-contact obstacle avoidance approaches [74-76],
based on optical, ultrasonic and proximity sensors can improve human safety, but may also suffer
from problems with dead angles, disturbances as well as poor image processing capabilities and
ambiguity of detectable volume in proximity sensing techniques. High reliability may not be achieved
with these sensors. Other methods for safety improvement have been developed, such as impedance
control (covering the robot body with viscoelastic material) [77], use of a mechanical impedance
adjuster equipped robot with linear springs and brake systems [78], robots with flexible joints [79],
compliant shoulders [80], and viscoelastic passive trunks [81]. Addressing these safety issues is
beyond the scope of this thesis.
Control torques were applied to the robot joints to move the end effector along a defined path to track the user's mouth or to approach a recognized food part. The torque level was computed by a control system, based on the error between the actual end-effector position and its desired position. Figure 5-10 describes the process of combining the control and mechanical systems.
Figure 5-10: The ADAMS model and the control system, with their inputs and outputs [ADAMS].
After loading the ADAMS/Controls plug-in in ADAMS/View, the model was imported, ADAMS/Controls was loaded, and a trial simulation was run. The motions on the model were then deactivated, and torques were applied to the joints based on the values provided by the control-system package.
In the second step, the ADAMS plant inputs and outputs were identified. When an input control torque was supplied to the robot model, the output position and velocity were sent to the controller. To achieve the closed-loop circuit, it was then necessary to define the input and output variables in ADAMS/View, read the plant and input/output variables into MATLAB, create an MSC.ADAMS plant, and run a simulation. The simulation results in ADAMS/View were animated and plotted; the variables were modified and the process was repeated as many times as necessary. After all these procedures, ADAMS/Controls saved the input and output information in an .m (MATLAB) file. It also generated the command files (.cmd) and dataset files (.adm) that were used during the simulation process. The ADAMS/Controls setup was complete after the plant files had been exported. The link between the control and mechanical systems was then completed through the specific controls application (MATLAB).
In the third step, control was added to the ADAMS block diagram using MATLAB. In MATLAB, a new Simulink model was made containing the MSC Software S-function block that represents the mechanical system of the feeding robot. The S-function represents the nonlinear ADAMS model, and the state-space block represents a linearized ADAMS model. Names automatically match the information read in from the .m file. The adams_sub block contains the S-function and also creates several useful MATLAB variables. The defined inputs and outputs of the model appear in the sub-block. The sub-blocks created from the information in the .m file, the I/O of the model, and the ADAMS/MATLAB interface are presented in Appendix G.
Using Simulink in MATLAB and the existing adams_sub block, a new model was created, as shown in Figure 5-11, and the simulation results appear in Figure 5-12.
Figure 5-11: Simulink model of the control loop: a step input drives two discrete PID controllers with integrators, which supply the control torque to the adams_sub block; the block returns the end-effector velocity and position.
The solver type was set to variable-step.
Figure 5-12: Simulation results: a) position of the end effector (mm), b) output velocity (mm/s), and c) input torque (N·mm).
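As a rough stand-in for the Simulink model of Figure 5-11, the following MATLAB sketch runs a discrete PID position loop around a toy double-integrator plant that takes the place of the adams_sub block; the gains and the plant itself are assumptions for illustration only.

% Illustrative sketch only: discrete PID position loop with an assumed
% double-integrator plant standing in for the adams_sub block.
dt = 1e-3;  N = 2000;
ref = 1;                          % step reference (end-effector position)
Kp = 400;  Ki = 20;  Kd = 40;     % assumed PID gains
pos = 0;  vel = 0;  ei = 0;  eprev = ref;
history = zeros(N, 2);
for k = 1:N
    e  = ref - pos;
    ei = ei + e*dt;
    ed = (e - eprev)/dt;  eprev = e;
    tau = Kp*e + Ki*ei + Kd*ed;   % control torque (plant input)
    vel = vel + tau*dt;           % toy dynamics: torque -> velocity
    pos = pos + vel*dt;           % velocity -> position
    history(k, :) = [pos, vel];
end
plot((1:N)*dt, history);  legend('position', 'velocity');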
The next chapter justifies the use of a vision system as the interface of the feeding system, and then discusses the different approaches available for processing the acquired images and the effects of their application. All the processing is performed on food images because, at present, this project is not dealing with face recognition of the potential users. The users' mouth locations are assumed to be known for now; in the future, the results of face recognition will replace the currently assumed data.
Chapter 6
Since the food inside the deep plates is assumed to be soft, with no specific shape or differentiability, it may not be easily segmented or recognized in the image; therefore, no image information regarding that section is processed. The spoon moves in a predefined smooth path to sweep through the deep section of the plate and scoop up the food. Visual information then specifies the location of the closest safe point to the user's mouth, where the robot's end effector should stop. At the present time, the developed image processing system in MATLAB is able to recognize and specify the fork insertion points for pieces of cut toast and sandwiches with acceptable accuracy.
The second part consists of an active system which uses sensory or visual feedback to understand
the environment. The work environment in this case is non-static and unconstrained. Some of the
objects which should be recognized are different pieces of solid foods that are not necessarily placed
in the same position and orientation in the food tray sections. Incorporating feedback into the system
allows non-determinism to creep into the deterministic control of the robot. The challenge is to
incorporate these sensors into a system and to make use of the data provided by them.
The main purpose of this part of the work is to improve robotic performance in object recognition tasks, which are a precursor to other tasks such as grasping and manipulation. Therefore, the ability to recognize the relevant objects in the feeding environment, such as the spoon, fork, cups and pieces of solid food, is absolutely necessary.
Since the locations of the cups, spoon and fork are predefined and almost fixed in the system, the vision system for this part only checks their presence, assigning a one if the object is present inside the food tray and a zero if it is missing. The system of four user cameras and four food-tray cameras working together has been presented in detail in the flowcharts of the system in Chapter 4.
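The presence check itself can be very simple; a sketch under assumed file names, tray regions and threshold (not the code of this work) is:

% Sketch (assumed inputs): presence flags for the fixed-position
% utensils, 1 if present and 0 if missing.
tray  = rgb2gray(imread('tray3.png'));        % hypothetical tray image
empty = rgb2gray(imread('tray_empty.png'));   % reference: empty tray
rois  = {[10 10 80 30], [10 50 80 30]};       % assumed spoon/fork regions
present = zeros(1, numel(rois));
for k = 1:numel(rois)
    a = imcrop(tray,  rois{k});
    b = imcrop(empty, rois{k});
    present(k) = mean(abs(double(a(:)) - double(b(:)))) > 20;  % assumed threshold
end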
6.3 Image Acquisition and Preprocessing
Histogram equalization transforms an image so that the output grey-level distribution is uniform. An important issue in image enhancement is quantifying the criterion for the enhancement.
6.4.3 Segmentation
Segmentation involves partitioning an image into a set of homogeneous and meaningful regions, such that the pixels in each partitioned region possess an identical set of properties or attributes [82]. An image is thus defined by a set of connected, non-overlapping regions, so that each pixel in the image acquires a unique region label indicating the region it belongs to. The set of objects of interest in an image, once segmented, undergoes subsequent processing, such as object classification and scene description.
Segmentation algorithms are based on one of two basic properties of grey-level values: discontinuity and similarity among the pixels. In the first approach, the image is partitioned based on sudden changes in grey level; the areas of interest within this category are the lines and edges in an image. Thus, if the edges in an image can be detected and linked, a region can be described by the edge contour that contains it. In the second approach, connected sets of pixels having more or less the same homogeneous intensity form the regions. The pixels inside the regions describe each region, and the process of segmentation involves partitioning the entire scene into a finite number of regions.
6.4.6 Region Analysis
Each region is further analyzed to extract the centroid, the area, the perimeter, or other useful and necessary information. The primary purpose of region analysis for images of solid food parts is to find the centroid of each piece inside the flat section of the food tray; this is the point where the robot inserts the fork. This becomes particularly important when only a few pieces of food remain inside the food section, and the chance of picking up the food without accurately detecting the centroid areas drastically decreases. The adjacency relations, an important part of the analysis, will be used in matching against the model database. They can be found by examining the contour pixels that separate regions and by looking at the colors of their 8-connected neighbours. Contour pixels are weighted 1 in the horizontal and vertical directions and √2 in the diagonal directions; the perimeter is found as the weighted sum of the contour pixels according to their relative positions with respect to their neighbours.
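In MATLAB, which is used for the image processing in this work, these region properties can be extracted with regionprops; the file name and area threshold below are illustrative assumptions.

% Sketch (file name and threshold assumed): extracting the centroid,
% area and perimeter of each region with regionprops.
BW    = imread('toast_mask.png') > 0;     % hypothetical binary image
BW    = bwareaopen(BW, 500);              % ignore blobs under 500 pixels
stats = regionprops(BW, 'Centroid', 'Area', 'Perimeter');
C     = cat(1, stats.Centroid);           % candidate fork-insertion points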
Figure 6-1: a) original image, b) binary image, c) removal of small pixels from the edge-detected image, d) image c after closing with a square-3 structuring element, e) filling the holes of image d, f) edge image after closing with a square-5 structuring element, g) filling the holes of image f, h) segmentation and centroid extraction (metrics closer to 1 indicate that an object is approximately square; small black circles mark the centroids).
This approach used the edge-detected image (by a LoG filter) for further processing, such as closing and filling the gaps. Even though the pieces of toast were not touching or even very close to each other, this series of procedures failed to detect two of the pieces correctly: it detected two adjacent pieces as one region and put the centroid somewhere between the two bounded regions. Further modifications, such as applying a Canny filter instead of a LoG filter and removing more small pixels from the image, could solve the problem of finding correct centroids for this particular image, as shown in Figure 6-2. However, bread crumbs in the images formed small bounded areas, causing an overestimation in the number of closed boundaries; 10 parts were detected instead of 6. Applying a threshold could remove these small bounded areas; for instance, areas smaller than 500 or 700 pixels could be eliminated.
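A sketch of this modified sequence, with illustrative parameter values and an assumed file name:

% Sketch of the modified sequence (parameters and file name assumed):
% Canny edges, morphological closing, hole filling, area threshold.
I  = rgb2gray(imread('cut_toast.png'));
E  = edge(I, 'canny');                    % Canny instead of LoG
E  = imclose(E, strel('square', 5));      % close broken edge contours
BW = imfill(E, 'holes');                  % fill each bounded piece
BW = bwareaopen(BW, 600);                 % drop crumb areas (~500-700 px)
stats = regionprops(BW, 'Centroid');      % one centroid per toast piece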
Figures 6-2 to 6-5: Original and binary images of the cut toast on different backgrounds, with the resulting segmentations and centroids (metrics closer to 1 indicate that an object is approximately square; small black circles mark the centroids).
Figure 6-6: a) adjustment of the greyscale image, b) binary image after enhancement, c) filling the
holes of the edge image (square 5), d) first erosion of the filled gaps of the edge, e) fourth erosion,
f) sixth erosion.
Although some images, such as the one shown in Figure 6-3, work properly with this algorithm, it fails considerably in properly segmenting the pieces of toast and locating the centroids in others (see Figure 6-4 and Figure 6-5). The image enhancement functions imadjust and adapthisteq were applied to the image shown in Figure 6-6(a) to add contrast to the image and to equalize the histogram, respectively. The graythresh function computed a global threshold using Otsu's method to convert the intensity image to a binary image. A flat, disk-shaped morphological structuring element with a radius of 5 was created.
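Combining the functions named above gives a pipeline of the following form (a sketch with assumed parameters and file name, not the exact code of this work):

% Sketch combining the functions named above (parameters assumed).
I  = rgb2gray(imread('cut_toast.png'));   % hypothetical input image
J  = imadjust(I);                         % add contrast
J  = adapthisteq(J);                      % equalize the histogram (CLAHE)
bw = im2bw(J, graythresh(J));             % global Otsu threshold
se = strel('disk', 5);                    % flat disk, radius 5
bw = imerode(imfill(bw, 'holes'), se);    % fill, then erode (cf. Fig. 6-6)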
Figure 6-7: Results for selected possible arrangements (a to e) of three touching pieces of cut toast: original images, binary images after enhancement, and the resulting segmentations and centroids (yellow metrics closer to 1 indicate that an object is approximately square; red metrics give the minor-axis to major-axis length ratio; small black circles mark the centroids).
6.7 Discussion of Results
The algorithm identifies each piece of toasted bread inside the food section. Since it ignores segments smaller than a specific area (number of pixels), it identifies only the blobs of interest. The information related to the location and orientation of each blob, such as its area and its closeness to a specific shape (such as a square or triangle), is extracted. This information is then used to determine which segments correspond to the pieces of toasted bread that should be picked up by the fork. The centre of each blob and its two-dimensional coordinates will be available for use by the planning and action agents; these points demarcate where the fork is to be inserted into each piece of toasted bread. The sensing agent determines the initial locations of the objects inside the food tray, such as the cups, fork, spoon and pieces of solid food, during the eating process.
Since the pieces of toast, or any other solid food inside the food section, can come in a variety of colors, it would be difficult to teach the system to pick up specific pre-learned colors. However, it is assumed that the solid foods can be cut into smaller pieces with simple shapes, such as squares (or triangles); therefore, the blobs that closely resemble a specified shape can be selected as the regions of interest. To this end, a metric measuring how square (or triangular) a segment is has been defined; metrics closer to 1 indicate shapes more similar to squares. Applying such a filter has the advantage of discarding blobs with very irregular shapes that are not similar to the square or other specific shape being sought.
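The exact formula of this metric is not given here; one plausible choice, sketched below, is the ratio of a blob's area to the area of its bounding box, which approaches 1 for axis-aligned squares. The file name and acceptance level are assumptions.

% Sketch of one plausible squareness metric (assumed, not the exact
% formula used in this work): blob area over bounding-box area.
BW     = imread('toast_mask.png') > 0;          % hypothetical binary image
stats  = regionprops(BW, 'Area', 'BoundingBox', 'Centroid');
n      = numel(stats);
metric = zeros(1, n);  keep = false(1, n);
for k = 1:n
    bb        = stats(k).BoundingBox;           % [x y width height]
    metric(k) = stats(k).Area / (bb(3)*bb(4));  % closer to 1 = more square
    keep(k)   = metric(k) > 0.7;                % assumed acceptance level
end
centroids = cat(1, stats(keep).Centroid);       % fork-insertion targets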
Similar results have been observed for most of the other cases. The illumination of the image, as shown in Figure 6-7(b), appears to play an important role, and enhancement of the image may have an effect on the final binary image. Through several trials, it was attempted to determine which features of the program (illumination, enhancement technique, filling or closing method) have the greatest effects on the results. This type of investigation helped in understanding the parameters and thresholds that should be used in the program, as well as the steps that should be considered more carefully. Removing the adjustment step from the algorithm, as shown in Figure 6-7(c), made the areas of the non-object regions much smaller. The falsely detected regions shown in Figure 6-7(e) can be removed by applying a higher threshold, applying more erosion, or defining a parameter such as closeness to the square, thus excluding them from the group of centroid locations, which are the regions intended to be specified. That is, the small circles representing the centroids of the pieces of toast are not placed over the false regions.
The light direction, quality, and intensity have a significant influence on the final image processing results. The shadows of the 3D pieces of cut toast in the image are a kind of distortion caused by the lighting system, which conceals information relevant to the recognition of each piece, such as its edges. False information (noise), dimensional distortion, and concealment of information are some of the negative effects on image processing. The shadows that the pieces of cut toast project on the background plate lead to a shift in the boundary between the object and the background in the image, thus changing the observed geometric magnitudes. It is evident from the above results that this distortion has caused difficulties in the recognition and segmentation of each piece of cut toast and contributes to errors in computing the centroid location of each piece.
The information from the image processing section, such as the central points of the solid food parts, would be transferred to the ADAMS model, which has also been integrated with MATLAB. However, this part of the global project is beyond the scope of this thesis and will be carried out in the next phase of the project.
Chapter 7
Closure
A preliminary study on an intelligent multiple-user feeding robot was presented. Various feeding
devices, including those available in the market and those still in various stages of development, were
introduced and discussed. Different user interfaces with the potential to be used in the proposed
feeding system, as well as their advantages and disadvantages, have also been explained. The idea for
a multiple-user feeding device was generated during observation sessions in a nursing home, where
continued examination of the elderly and their caregivers during meal time has provided ample
support, both in terms of motivation and supply of critical information, for the development of such a
device.
7.1 Observations
The design concept and criteria for the feeding device were based on the general and special requirements of the elderly and the specific limitations in their eating capabilities. The behaviour of the elderly while eating, and the challenges faced by both the seniors and the caregivers in the dining area during meal time, were closely investigated in both the regular care unit and the special care unit of the nursing home.
The observations helped to determine the characteristics and needs of the population who can benefit
from such a feeding device, and they also clarified the scope of the design. This information provided
a guideline for decisions regarding the type of robot and its configuration, and also a user interface for
simultaneous feeding of multiple users sitting at a four-seat table.
The residents in the special care unit with Alzheimer's disease could not logically connect hunger to food or to feeding. They needed to be reminded about the next task after finishing each step, since they forgot the necessary steps for feeding themselves, even chewing or swallowing. Different and unpredictable daily behaviour (related to different foods or a new device), and becoming easily confused when given several options to choose from, were important factors that had to be considered in making feeding comfortable and stress-free for this population. The ways the machine and user interact with one another are extremely important; an appropriate user interface helps to address users' cognitive disabilities. A good comprehension by the users of the environment and of the tasks required for eating will only be achieved with an appropriate user interface.
The residents in the regular care unit mostly suffered from upper-limb dysfunctions, which made it difficult for them to eat by themselves. In addition, lack of control of the head and neck, severe head tremor, inability to open their mouths to be fed, and severe swallowing or chewing problems were typical physical difficulties that made 40% of the population dependent on caregivers. Among the remaining 60%, more than 40% had problems such as hand tremor, lack of strength in holding a utensil, and severe joint pain in the arm, wrist or fingers; they had difficulty manipulating a spoon or fork and directing it toward the mouth. Lipped plates with dividers helped about 11.7% of the elderly scoop better. For each user, 3-4 different kinds of food and dessert and 2-4 cups were considered. Many of the foods (about 31.7%) were pureed for those with chewing or digestion problems, and many were cut into pieces for those lacking strength in their hands. Some residents were using gel foods because of chewing and swallowing difficulties.
According to the observations and the caregivers' opinions, a feeding device would, at the present time, be beneficial in environments where many elderly people dine together at standard four-seat tables. Meal time was very challenging for both residents and caregivers, since the time allocated for eating and the number of caregivers were limited. Indeed, one nurse could respond to a maximum of two diners at the same time, and could only manage the needs of all diners with the assistance of the limited number of staff members available. The target users in the proposed design are female or male adults, including elderly people (no children at the present time), in senior homes, nursing homes or hospitals (where they receive special care), who have weak muscles or joints in their hands or arms. They may suffer from muscle stiffness that prevents them from grabbing or handling a spoon or fork easily, or have significant hand tremor while eating. The users should have control of their neck and head muscles; be cognitively aware of the environment; be able to see and read labels; be able to hear sounds, words, tones and characters; and be able to talk in such a way that their words are recognizable by others.
From the safety point of view, the users should have sufficient control of their neck and head to keep them in an upright position, or at an angle that would be safe in the nurse's opinion; this reduces the potential for choking while swallowing. The end effector does not reach into the user's mouth, but requires the user to reach slightly for the spoon. The force applied by the robot must be within a range that does not hurt the users. The robot's end effector should avoid hurting the user by stopping at the closest predefined distance to the user's mouth; this is even more important when the robot is using a fork, which has pointed tines. Also, the spoon or fork should not retract while it is inside and touching the user's mouth. If the location of the user's mouth is beyond the workspace of the robot (when the user is farther than the predefined allowable distance), the robot should notify the user to sit closer to the table's edge. Continuous head tremor not only makes tracking the user's mouth very difficult, but may also make a force sensor at the end effector unreliable when touching the user's mouth; incorrect data may lead to an applied force that causes injury.
This idea has moved beyond conceptualization to a virtual design of the whole system, including the food tray, appropriate selection of the robot, and the careful arrangement of the robot and food trays on a four-seat dining table. It was assumed that issues related to the food (e.g., cutting solid foods into pieces and putting them in the right place in the food tray), the user (seating the users in an upright, safe position for eating) and the environment (having sufficient light) would be taken care of or checked by the care or service providers in the dining area.
7.3 Design
In the design, it was attempted to fit four cups and four food sections into an arc-shaped tray, because the robot could then be located at the center of the arc, making it easier for the robot to feed multiple persons. Scooping the food would be much easier compared with using a square or round plate with three or four compartments. The food trays could also be placed beside each other, with one robot at the center, for feeding four users. Based on calculations of the minimum amount of food and liquid required by the users, the positions of the food sections, cups, spoons and forks were determined so as to fit all utensils and food sections.
The robot was chosen to be small enough to fit on a four-seat table with a standard height of 72-74 cm (with a diameter of almost 60 cm). A serial manipulator was selected so that it could rotate almost 350-360 degrees at the base, providing a large workspace and allowing it to respond to all users. A payload of 2-3 kg was considered sufficient for picking up the weight of the food and a utensil or drinking cup. The joint rotation angles and link lengths were required to provide a maximum reach of 800-836 mm and to reach predefined locations on the dining table to pick up a spoon, fork or any of the cups for each user. The height of the robot's waist was chosen to be lower than the user's eye level when the user sits at the table (so that it is not too obtrusive). A non-redundant robot with six DOF was selected, to freely position and orient the objects in the Cartesian workspace.
The minimum or desired system requirements, such as the type of robot joints, the lengths of links, maximum weight, maximum payload, maximum and minimum reach, and workspace of the robot, were specified based on the determined user characteristics and the feeding environment. Some of the data that influenced this decision were: the desired model configuration; the strength and dimensions of a standard four-seat table to hold the robot on top; the weight of the utensils plus food and of the cups filled with drink; the distance between the outer edge of the food tray and the edge of the table; and the anthropometric data of a typical adult in a seated position, such as the heights of the mouth and eyes and the distance of the head/mouth from the table. The selected robot was a CRS A465, with a weight of 31 kg and a maximum payload of 2 kg at the end effector. The waist of the robot could rotate from -175 to +175 degrees. The maximum reach of the robot was 711 mm without the end effector and 864 mm with a standard end effector (not considering the length of the spoon/fork). The three joint axes of the 3-DOF wrist intersected at one point, which had the advantage of providing closed-form solutions for the kinematic and dynamic analyses.
The whole feeding system, including the robot, food trays and table, was simulated in ADAMS to help in the three-dimensional visualization of the robot and its environment. The rationale for the application of a vision system as the interface, along with its arrangement and settings with respect to the food trays and the users, was presented. The method of interaction between the cameras, users and robot manipulator was explained in detail in the section on robot and vision related tasks, and was shown schematically in the flowcharts of the system. The design calls for the presence of the users behind the table, and their mouth locations, to be checked and tracked by four cameras, one beside each user. In addition, four other cameras, one for each food tray, were planned for recognizing the locations of the central parts of the solid food pieces and for checking the existence of utensils or food parts inside the tray. The interaction of multiple users, cameras and objects requires considerable management of the commands received from different users and the images captured by different cameras.
The feeding robot task was divided into two parts. The first part was a pick-and-place type operation in a constrained environment, where only partial knowledge of the relevant objects to be manipulated was assumed: the spoon, the fork and the cups were the objects to be recognized. This meant that the robot would know the vicinity of its approach, but the exact locations and orientations of the objects would not be known. The second part consisted of an active system which used sensory or visual feedback to understand the environment; some of the objects to be recognized were different pieces of solid food that were not necessarily placed in the same position and orientation in the food tray sections.
In order to achieve acceptable accuracy in food recognition, specifically of the centroid locations of small pieces of toast, an image processing algorithm was developed, which also aided in checking the locations of the cups and utensils.
Future research can address the following issues: force control and user safety; the addition of compliant devices to reduce the risk of injury to users; expanding or optimizing the image processing algorithm for other types of foods; seamless integration of the robotic and vision systems; the addition of alternative user interfaces in response to the vast range of user needs; production of a prototype of the system; and testing and evaluating the prototype with real users.
Appendix A
Anthropometric Data of an Adult Person
Note: These are data related to the dimensions of living human body parts, mostly in static positions.
Table A.1: Anthropometric data of men and women [87]; all dimensions are in mm.
Note: The table relates to British persons; the size range shows the middle 90% range of people's sizes in the UK.
Dimension | Men 5% | Men 95% | Women 5% | Women 95%
21-Abdominal Depth | 220 | 325 | 205 | 305
22-Shoulder-Elbow Length | 330 | 395 | 300 | 360
23-Elbow-Fingertip Length | 440 | 510 | 400 | 460
24-Upper Limb Length | 720 | 840 | 655 | 760
25-Shoulder Grip Length | 610 | 715 | 555 | 650
26-Head Length | 180 | 205 | 165 | 190
27-Head Breadth | 145 | 165 | 135 | 150
28-Hand Length | 175 | 205 | 160 | 190
29-Hand Breadth | 80 | 95 | 70 | 85
30-Foot Length | 240 | 285 | 215 | 255
31-Foot Breadth | 85 | 110 | 80 | 100
32-Span | 1655 | 1925 | 1490 | 1725
Appendix B
Appendix C
Table C-1: Joint specifications for A465 robotic arm [A465 User’s Guide].
Axis | Joint 1 | Joint 2 | Joint 3 | Joint 4 | Joint 5 | Joint 6
Range of Motion | ±175° | ±90° | ±110° | ±180° | ±105° | ±180°
Figure C.1: Workspace and dimensions of the CRS A465 robot [A465 User's Guide].
Note: The workspace is the volume swept by all robot parts, the end effector and the workpiece.
Appendix D
Kinematics and Dynamics of the Manipulator
D.1. Kinematics
The Denavit-Hartenberg (DH) technique [88] proposes a matrix method that systematically assigns coordinate systems to each link of an articulated chain. The axis of revolute joint i is aligned with z_{i-1}. The x_{i-1} axis is directed along the common normal from z_{i-1} to z_i and, for intersecting axes, is parallel to the cross product of z_{i-1} and z_i.
θ_i is the joint angle: the angle between the x_{i-1} and x_i axes about the z_{i-1} axis.
α_i is the twist angle: the angle from the z_{i-1} axis to the z_i axis about the x_i axis.
a_i is the link length: the distance between the z_{i-1} and z_i axes along the x_i axis.
d_i is the link offset: the distance from the (i-1)th frame to the x_i axis along the z_{i-1} axis.
For a revolute joint, θ_i is the joint variable and d_i is constant. The 4×4 homogeneous transformation matrix for each revolute joint, which represents each link's coordinate frame with respect to the previous link's coordinate system, is:
A_i^{i-1}(θ_i) =
[ cosθ_i  -sinθ_i cosα_i   sinθ_i sinα_i   a_i cosθ_i ]
[ sinθ_i   cosθ_i cosα_i  -cosθ_i sinα_i   a_i sinθ_i ]
[ 0        sinα_i          cosα_i          d_i        ]
[ 0        0               0               1          ]
=
[ c_i  -s_i λ_i   s_i μ_i   a_i c_i ]
[ s_i   c_i λ_i  -c_i μ_i   a_i s_i ]
[ 0     μ_i       λ_i       d_i     ]
[ 0     0         0         1       ]      (D-1)
where λ_i = cosα_i, μ_i = sinα_i, c_i = cosθ_i, and s_i = sinθ_i. The values of the α_i, a_i and d_i are found from the DH table defined for the selected robotic system. The problem of inverse kinematics corresponds to computing the joint angles θ_1 to θ_6 for a desired end-effector pose. For the selected robot, the DH parameters are:

L = [ 0  l_b  l_c  0  0  0 ]
α = [ α_1  α_2  α_3  α_4  α_5  α_6 ]
d = [ 0  0  0  0  0  0 ]      (D-3)
θ = [ θ_1  θ_2  θ_3  θ_4  θ_5  θ_6 ]
l_b = 0.35 m, l_c = 0.33 m
and the transformation matrices are:

T_01 =
[ c_1  0   s_1  0 ]
[ s_1  0  -c_1  0 ]
[ 0    1   0    0 ]
[ 0    0   0    1 ]      (D-5)

T_02 =
[ c_1c_2  -c_1s_2   s_1   l_b c_1c_2 ]
[ s_1c_2  -s_1s_2  -c_1   l_b s_1c_2 ]
[ s_2      c_2      0     l_b s_2    ]
[ 0        0        0     1          ]      (D-6)

T_03 =
[ c_1c_23  -c_1s_23   s_1   c_1(l_c c_23 + l_b c_2) ]
[ s_1c_23  -s_1s_23  -c_1   s_1(l_c c_23 + l_b c_2) ]
[ s_23      c_23      0     l_c s_23 + l_b s_2      ]
[ 0         0         0     1                       ]      (D-7)

T_04 =
[ c_1c_234   s_1   c_1s_234   c_1(l_c c_23 + l_b c_2) ]
[ s_1c_234  -c_1   s_1s_234   s_1(l_c c_23 + l_b c_2) ]
[ s_234      0    -c_234      l_c s_23 + l_b s_2      ]
[ 0          0     0          1                       ]      (D-8)

T_06 =
[ c_1(c_6c_5c_234 - s_6s_234) + c_6s_1s_5   -s_6(c_1c_5c_234 + s_1s_5) - c_1c_6s_234   -c_1s_5c_234 + s_1c_5   c_1(l_c c_23 + l_b c_2) ]
[ c_6(s_1c_5c_234 - c_1s_5) - s_1s_6s_234   -s_6(s_1c_5c_234 - c_1s_5) - s_1c_6s_234   -s_1s_5c_234 - c_1c_5   s_1(l_c c_23 + l_b c_2) ]
[ c_6c_5s_234 + s_6c_234                     -s_6c_5s_234 + c_6c_234                    -s_5s_234               l_c s_23 + l_b s_2      ]
[ 0                                          0                                          0                       1                       ]      (D-10)
D.2. Dynamics
The dynamic model of the robot consists of ordinary differential equations whose variables correspond to the vector of positions and velocities, which may be expressed in joint coordinates θ and θ̇ or in operational coordinates x and ẋ [30]. The Lagrangian L(θ, θ̇) of a robot manipulator of n DOF and the Lagrange equations of motion for the manipulator are:

L(θ, θ̇) = K(θ, θ̇) - U(θ),    d/dt [∂L(θ, θ̇)/∂θ̇_i] - ∂L(θ, θ̇)/∂θ_i = τ_i      (D-11)

where K is the kinetic energy of the system, U is the total potential energy of the system, and τ_i corresponds to the external forces and torques (delivered by the actuators) at each joint as well as to other (non-conservative) forces. The class of non-conservative forces includes those due to friction, the resistance to motion of a solid in a fluid and, in general, all those that depend on time and velocity and not only on position. Considering the kinetic energy function K(θ, θ̇) as:

K(θ, θ̇) = (1/2) θ̇ᵀ M(θ) θ̇      (D-12)

where M(θ) is a symmetric, positive-definite 6×6 matrix referred to as the inertia matrix, the dynamic equation in compact form is:

M(θ) θ̈ + C(θ, θ̇) θ̇ + g(θ) = τ      (D-13)

where

C(θ, θ̇) θ̇ = Ṁ(θ) θ̇ - (1/2) ∂/∂θ [θ̇ᵀ M(θ) θ̇],    g(θ) = ∂U(θ)/∂θ      (D-14)

Equation (D-13) is the dynamic equation for robots of n DOF. Notice that it is a nonlinear vectorial differential equation of the state [θᵀ θ̇ᵀ]ᵀ. C(θ, θ̇)θ̇ is the n-dimensional vector of centrifugal and Coriolis forces, and τ is the n-dimensional vector of external forces, which in general corresponds to the torques and forces applied by the actuators at the joints.
Each element of M(θ)θ̈, C(θ, θ̇)θ̇ and g(θ) is, in general, a relatively complex expression of the positions and velocities of all the joints, that is, of θ and θ̇; these elements depend on the geometry of the robot. The inertia matrix is positive definite and its inverse exists. This is what allows us to express the dynamic model of any robot of n DOF in terms of the state vector [θᵀ θ̇ᵀ]ᵀ, that is:

d/dt [θ; θ̇] = [θ̇; M(θ)⁻¹(τ(t) - C(θ, θ̇)θ̇ - g(θ))]      (D-15)
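As a small numerical illustration of Equation D-15 (a sketch, not the DynaFlexPro-generated 6-DOF model, whose full terms are far longer), the following MATLAB fragment integrates a toy planar 2-DOF arm with unit point masses at the link ends, reusing the link lengths l_b and l_c:

% Sketch: integrating the state-space form of Equation D-15 with ode45,
% shown for an assumed toy planar 2-DOF arm with unit point masses at
% the link ends; the full 6-DOF terms come from DynaFlexPro/Maple.
lb = 0.35;  lc = 0.33;  g0 = 9.81;
tau = @(t) [0; 0];                               % free swing, no actuation
[t, x] = ode45(@(t, x) arm_ode(t, x, tau(t), lb, lc, g0), ...
               [0 5], [pi/4; 0; 0; 0]);          % x = [theta; theta_dot]
plot(t, x(:, 1:2));
function dx = arm_ode(~, x, tau, lb, lc, g0)
    th = x(1:2);  thd = x(3:4);
    c2 = cos(th(2));  s2 = sin(th(2));  h = lb*lc*s2;
    M = [2*lb^2 + 2*lb*lc*c2 + lc^2,  lc^2 + lb*lc*c2;
         lc^2 + lb*lc*c2,             lc^2];
    C = [-h*thd(2), -h*(thd(1) + thd(2));
          h*thd(1),  0];
    g = [2*g0*lb*cos(th(1)) + g0*lc*cos(th(1) + th(2));
         g0*lc*cos(th(1) + th(2))];
    dx = [thd; M \ (tau - C*thd - g)];           % Equation D-15
end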
Appendix E
Model of 6-DOF Robot in DynaFlexPro-Maple
Figure E-1: Dynamic model of 6-DOF robot in DynaFlexPro Model Builder in Maple.
# Node 10: COM_4 on Wrist-1
# Node 11: D4 on Wrist-1
# Node 12: E4 on Wrist-1
# Node 13: COM_1 on Waist
# Node 14: A1 on Waist
# Node 15: B1 on Waist
# Node 16: COM_5 on Wrist-2
# Node 17: E5 on Wrist-2
# Node 18: F5 on Wrist-2
# Node 19: COM_6 on Wrist-3
# Node 20: F6 on Wrist-3
# Node 21: P on Wrist-3
# -============== Components ==============-
# Rigid Body "Shoulder": ,
rMData["Shoulder"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_2"], [DOM_MR, "mGND", "COM_2"]],
"Params", ["Mass" = m2,
"Inertia" = [[Jxx_2,0,0],
[0,Jyy_2,0],
[0,0,Jzz_2]],
"TranVars" = [x_2, y_2, z_2],
"RotVars" = [zeta_2, eta_2, xi_2], "RotType" = "EA123",
"AngVelVars" = [wx_2, wy_2, wz_2], "AngVelType" = "End"]:
# Mech Frame 5 (B2)
rMData["B2"] := "SubIdent", "mRigidBodyFrame",
"Description", "B2",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_2", "B2"], [DOM_MR, "mGND", "COM_2", "B2"]],
"Params", ["TranConsts" = <-Lc2,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 6 (C2)
rMData["C2"] := "SubIdent", "mRigidBodyFrame",
"Description", "C2",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_2", "C2"], [DOM_MR, "mGND", "COM_2", "C2"]],
"Params", ["TranConsts" = <rc2,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Rigid Body "Arm": ,
rMData["Arm"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_3"], [DOM_MR, "mGND", "COM_3"]],
"Params", ["Mass" = m3,
"Inertia" = [[Jxx_3,0,0],
[0,Jyy_3,0],
[0,0,Jzz_3]],
"TranVars" = [x_3, y_3, z_3],
"RotVars" = [zeta_3, eta_3, xi_3], "RotType" = "EA123",
"AngVelVars" = [wx_3, wy_3, wz_3], "AngVelType" = "End"]:
# Mech Frame 8 (D3)
rMData["D3"] := "SubIdent", "mRigidBodyFrame",
"Description", "D3",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_3", "D3"], [DOM_MR, "mGND", "COM_3", "D3"]],
"Params", ["TranConsts" = <rc3,0,0>,
"RotConsts" = [0, Pi, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 9 (C3)
rMData["C3"] := "SubIdent", "mRigidBodyFrame",
"Description", "C3",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_3", "C3"], [DOM_MR, "mGND", "COM_3", "C3"]],
"Params", ["TranConsts" = <-Lc3,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Rigid Body "Wrist-1": ,
rMData["Wrist-1"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_4"], [DOM_MR, "mGND", "COM_4"]],
"Params", ["Mass" = m4,
"Inertia" = [[Jxx_4,0,0],
[0,Jyy_4,0],
[0,0,Jzz_4]],
"TranVars" = [x_4, y_4, z_4],
"RotVars" = [zeta_4, eta_4, xi_4], "RotType" = "EA123",
"AngVelVars" = [], "AngVelType" = "Current"]:
# Mech Frame 11 (D4)
rMData["D4"] := "SubIdent", "mRigidBodyFrame",
"Description", "D4",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_4", "D4"], [DOM_MR, "mGND", "COM_4", "D4"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 12 (E4)
rMData["E4"] := "SubIdent", "mRigidBodyFrame",
"Description", "E4",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_4", "E4"], [DOM_MR, "mGND", "COM_4", "E4"]],
"Params", ["TranConsts" = <0, 0, 0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1, 0, 0>, <0, 1, 0>, <0, 0, 1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Rigid Body "Waist": ,
rMData["Waist"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_1"], [DOM_MR, "mGND", "COM_1"]],
"Params", ["Mass" = m1,
"Inertia" = [[Jxx_1,0,0],
[0,Jyy_1,0],
[0,0,Jzz_1]],
"TranVars" = [x_1, y_1, z_1],
"RotVars" = [zeta_1, eta_1, xi_1], "RotType" = "EA123",
"AngVelVars" = [wx_1, wy_1, wz_1], "AngVelType" = "End"]:
# Mech Frame 14 (A1)
rMData["A1"] := "SubIdent", "mRigidBodyFrame",
"Description", "A1",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_1", "A1"], [DOM_MR, "mGND", "COM_1", "A1"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 15 (B1)
rMData["B1"] := "SubIdent", "mRigidBodyFrame",
"Description", "B1",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_1", "B1"], [DOM_MR, "mGND", "COM_1", "B1"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "C2", "C3"], [DOM_MR, "mGND", "C2", "C3"]],
"Params", ["RotVars" = [theta_3], "RotReactVars" = [M1_3, M2_3], "TranReactVars" = [Fx_3, Fy_3, Fz_3],
"RotAxis" = <0,0,1>, "ReactAxis1" = <1,0,0>,
"K"=0, "Ang0"=0, "D"=0, "Moment"=T3,
"RotDrivers" = [f(t)], "RotDrvReactVars" = [Torque]]:
# Revolute joint "joint 4-rotate":
rMData["joint 4-rotate"] :=
"SubIdent", "mRevJt",
"Description", "Revolute joint",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "D3", "D4"], [DOM_MR, "mGND", "D3", "D4"]],
"Params", ["RotVars" = [theta_4], "RotReactVars" = [M1_4, M2_4], "TranReactVars" = [Fx_4, Fy_4, Fz_4],
"RotAxis" = <0,0,1>, "ReactAxis1" = <1,0,0>,
"K"=0, "Ang0"=0, "D"=0, "Moment"=T4,
"RotDrivers" = [f(t)], "RotDrvReactVars" = [Torque]]:
# Rigid Body "Wrist-2": ,
rMData["Wrist-2"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_5"], [DOM_MR, "mGND", "COM_5"]],
"Params", ["Mass" = m5,
"Inertia" = [[Jxx_5,0,0],
[0,Jyy_5,0],
[0,0,Jzz_5]],
"TranVars" = [x_5, y_5, z_5],
"RotVars" = [zeta_5, eta_5, xi_5], "RotType" = "EA123",
"AngVelVars" = [wx_5, wy_5, wz_5], "AngVelType" = "End"]:
# Mech Frame 17 (E5)
rMData["E5"] := "SubIdent", "mRigidBodyFrame",
"Description", "E5",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_5", "E5"], [DOM_MR, "mGND", "COM_5", "E5"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 18 (F5)
rMData["F5"] := "SubIdent", "mRigidBodyFrame",
"Description", "F5",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_5", "F5"], [DOM_MR, "mGND", "COM_5", "F5"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Revolute joint "joint 5-pitch":
rMData["joint 5-pitch"] :=
"SubIdent", "mRevJt",
"Description", "Revolute joint",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "E4", "E5"], [DOM_MR, "mGND", "E4", "E5"]],
"Params", ["RotVars" = [theta_5], "RotReactVars" = [M1_5, M2_5], "TranReactVars" = [Fx_5, Fy_5, Fz_5],
"RotAxis" = <0,0,1>, "ReactAxis1" = <1,0,0>,
"K"=0, "Ang0"=0, "D"=0, "Moment"=T5,
"RotDrivers" = [f(t)], "RotDrvReactVars" = [Torque]]:
# Rigid Body "Wrist-3": ,
rMData["Wrist-3"] :=
"SubIdent", "mRigidBody",
"Description", "Rigid Body",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 1]],
"NodeMap", [[DOM_MT, "mGND", "COM_6"], [DOM_MR, "mGND", "COM_6"]],
"Params", ["Mass" = m6,
"Inertia" = [[Jxx_6,0,0],
[0,Jyy_6,0],
[0,0,Jzz_6]],
"TranVars" = [x_6, y_6, z_6],
"RotVars" = [zeta_6, eta_6, xi_6], "RotType" = "EA123",
"AngVelVars" = [wx_6, wy_6, wz_6], "AngVelType" = "End"]:
# Mech Frame 20 (F6)
rMData["F6"] := "SubIdent", "mRigidBodyFrame",
"Description", "F6",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_6", "F6"], [DOM_MR, "mGND", "COM_6", "F6"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Mech Frame 21 (P)
rMData["P"] := "SubIdent", "mRigidBodyFrame",
"Description", "P",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "COM_6", "P"], [DOM_MR, "mGND", "COM_6", "P"]],
"Params", ["TranConsts" = <0,0,0>,
"RotConsts" = [0, 0, 0],
"RotAxes" = [<1,0,0>, <0,1,0>, <0,0,1>],
"RotReactVars" = [],
"TranReactVars" = []]:
# Revolute joint "joint 6-roll":
rMData["joint 6-roll"] :=
"SubIdent", "mRevJt",
"Description", "Revolute joint",
"TreeEdges", [0, [DOM_MT, 1], [DOM_MR, 2]],
"NodeMap", [[DOM_MT, "F5", "F6"], [DOM_MR, "mGND", "F5", "F6"]],
"Params", ["RotVars" = [theta_6], "RotReactVars" = [M1_6, M2_6], "TranReactVars" = [Fx_6, Fy_6, Fz_6],
"RotAxis" = <0,0,1>, "ReactAxis1" = <1,0,0>,
"K"=0, "Ang0"=0, "D"=0, "Moment"=T6,
"RotDrivers" = [f(t)], "RotDrvReactVars" = [Torque]]:
end use:
# -============== End of model description ==============-
Appendix F
Figure F-1: The x, y, z components and magnitude of the element force for joint 3.
Figure F-2: The x, y, z components and magnitude of the element torque for joint 3.
Figure F-3: The x, y, z components and magnitude of the translational displacement for joint 3.
Figure F-4: The x, y, z components and magnitude of the translational velocity for joint 3.
Figure F-5: The x, y, z components and magnitude of the translational acceleration for joint 3.
Figure F-6: The x, y, z components and magnitude of the angular velocity for joint 3.
Figure F-7: The x, y, z components and magnitude of the angular acceleration for joint 3.
Part 7 (link):
Figure F-9: The x, y, z components and magnitude of the acceleration of CM of part 7.
Figure F-10: The x, y, z components and magnitude of the angular velocity of part 7.
Figure F-11: The x, y, z components and magnitude of the angular acceleration of part 7.
Figure F-12: The kinetic energy, translational kinetic energy, angular kinetic energy, and potential energy of part 7.
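In these energy plots, the translational and angular terms are assumed to follow the standard rigid-body decomposition for each link, with $m$ the link mass, $v_{CM}$ the velocity of the centre of mass, $\omega$ the angular velocity, and $I_{CM}$ the inertia matrix about the centre of mass:
$$T = \tfrac{1}{2}\, m\, v_{CM}^{\mathsf{T}} v_{CM} + \tfrac{1}{2}\, \omega^{\mathsf{T}} I_{CM}\, \omega$$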
Part 11 (link):
Figure F-13: The x, y, z components and magnitude of the velocity of CM of part 11.
Figure F-14: The x, y, z components and magnitude of the acceleration of CM of part 11.
Figure F-15: The x, y, z components and magnitude of the angular velocity of part 11.
Figure F-16: The x, y, z components and magnitude of the angular acceleration of part 11.
Figure F-17: The kinetic energy of part 11.
Figure F-18: The translational and angular kinetic energy of part 11.
Figure F-19: The x, y, and z components of the angular momentum about CM of part 11.
Figure F-20: The magnitudes of the position, velocity and acceleration of CM of part 11.
Figure F-21: The magnitudes of the angular velocity and acceleration of part 11.
Part 12 (wrist):
Figure F-23: The x, y, and z components of the acceleration of CM of part 12.
Figure F-24: The x, y, and z components of the angular velocity of part 12.
Figure F-25: The x, y, and z components of the angular acceleration of part 12.
Figure F-26: The kinetic energy, translational kinetic energy, and angular kinetic energy of part 12.
Figure F-27: The x, y, z components and the magnitude of the translational momentum of part 12.
Figure F-28: The x, y, z components and the magnitude of the angular momentum about CM of part 12.
Figure F-29: The magnitudes of the position, velocity, and acceleration of CM of part 12.
Figure F-30: The magnitudes of the angular velocity and acceleration of part 12.
Appendix G
ADAMS/MATLAB Interface
Based on the information in the .m file read by MATLAB, the adams_sub block was created, as shown in Figure G-1; the inputs and outputs of the model appearing in the sub-blocks are shown in Figure G-2.
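For context, a minimal sketch of the MATLAB session that produces this block follows; the file name control_01 is an assumption inferred from the ADAMS_prefix variable in the .m file listed below, and adams_sys is the ADAMS/Controls command that assembles the Simulink model containing adams_sub.

% Sketch only: control_01 is assumed from ADAMS_prefix in the listing below.
control_01   % run the exported plant .m file to define the ADAMS_* variables
adams_sys    % build adams_sys.mdl, which contains the adams_sub block and its
             % S-Function and State-Space plant representations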
Figure G-1: The adams_sub block created by ADAMS/Controls, which provides S-Function and State-Space (x' = Ax + Bu, y = Cx + Du) representations of the plant, with endeffector_velocity and endeffector_position as its outputs.
Figure G-2: Defined inputs and outputs of the model appearing in the sub-blocks: the ADAMS Plant block receives the control_torque input and returns the endeffector_velocity and endeffector_position outputs, while To Workspace blocks and a Clock log the input, outputs, and simulation time as ADAMS_uout, ADAMS_yout, and ADAMS_tout.
These block and signal names are set according to the information read from the following .m file:
addpath(temp_str)
temp_str=strcat(topdir, '/controls/', arch);
addpath(temp_str)
temp_str=strcat(topdir, '/controls/', 'matlab');
addpath(temp_str)
ADAMS_sysdir = strcat(topdir, '');
else
addpath( 'C:\MSC~1.SOF\MSC~1.ADA\2005r2\win32' ) ;
addpath( 'C:\MSC~1.SOF\MSC~1.ADA\2005r2\controls/win32' ) ;
addpath( 'C:\MSC~1.SOF\MSC~1.ADA\2005r2\controls/matlab' ) ;
ADAMS_sysdir = 'C:\MSC~1.SOF\MSC~1.ADA\2005r2\' ;
end
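% The variables below configure the co-simulation session: ADAMS_host is the
% machine running the ADAMS solver, ADAMS_prefix names the exported plant
% model files (control_01.adm and its companions), and ADAMS_solver_type
% selects the Fortran solver.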
ADAMS_exec = '' ;
ADAMS_host = 'Zone.uwaterloo.ca' ;
ADAMS_cwd ='E:\New Folder (2)' ;
ADAMS_prefix = 'control_01' ;
ADAMS_static = 'no' ;
ADAMS_solver_type = 'Fortran' ;
if exist([ADAMS_prefix,'.adm']) == 0
disp( ' ' ) ;
disp( '%%% Warning : missing ADAMS plant model file.' ) ;
disp( '%%% Please copy the exported plant model files in working directory.' ) ;
disp( '%%% However, it is OK if the simulation is TCP/IP-based.' ) ;
disp( ' ' ) ;
end
ADAMS_init = '' ;
ADAMS_inputs = 'control_torque' ;
ADAMS_outputs = 'endeffector_velocity!endeffector_position' ;
ADAMS_pinput = '.model.new_control.ctrl_pinput';
ADAMS_poutput = '.model.new_control.ctrl_poutput';
ADAMS_uy_ids = [1
5
3];
ADAMS_mode = 'non-linear' ;
tmp_in = decode( ADAMS_inputs ) ;
tmp_out = decode( ADAMS_outputs ) ;
disp( ' ' ) ;
disp( '%%% INFO : ADAMS plant actuators names :' ) ;
disp( [int2str([1:size(tmp_in,1)]'),blanks(size(tmp_in,1))',tmp_in] ) ;
disp( '%%% INFO : ADAMS plant sensors names :' ) ;
disp( [int2str([1:size(tmp_out,1)]'),blanks(size(tmp_out,1))',tmp_out] ) ;
disp( ' ' ) ;
clear tmp_in tmp_out ; % ADAMS / MATLAB Interface - Release 2005.2.0
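Note that ADAMS_outputs packs the two sensor names into a single string separated by '!'; the decode utility supplied with the interface splits such strings into one name per row before they are displayed. A minimal illustrative equivalent (the real decode returns a character matrix rather than a cell array) is:

function names = decode_names( packed )
% Illustrative stand-in for the interface's decode() utility: split a
% '!'-separated signal string into one name per cell, e.g.
%   decode_names('endeffector_velocity!endeffector_position')
% returns {'endeffector_velocity', 'endeffector_position'}.
names = {};
while ~isempty( packed )
    [tok, packed] = strtok( packed, '!' );   % extract the next name
    names{end+1} = tok;                      % append it to the list
end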