Date of Award

12-2018

Degree Name

Doctor of Philosophy

Department

Electrical and Computer Engineering

First Advisor

Dr. Ikhlas M. Abdel-Qader

Second Advisor

Dr. Janos Grantner

Third Advisor

Dr. Maureen Mickus

Abstract

Dementia is a syndrome describing an array of significant declines in cognitive abilities due to progressive and irreversible loss of neurons and brain functioning. This neurodegeneration seriously affects daily life activities such as driving, shopping, working, and speaking. Alzheimer’s disease is the most common type of dementia, with individuals experiencing loss of memory and declines in thinking and reasoning skills. Due to this cognitive decline, individuals with Alzheimer’s often suffer from malnutrition: they may not eat even when food is presented and must be fed with assistance. This assistance places a significant time burden on caregivers and, consequently, adds to the public health costs of the disease. Past approaches to food intake monitoring have involved sensors attached to the subject’s body. Such systems are not suitable for persons with Alzheimer’s, who may simply remove the sensors or refuse to wear them in the first place because they perceive them as obtrusive. A vision-based monitoring system is therefore practical and has the significant advantage of monitoring food intake without disturbing the individual. The system can also promote independent eating by prompting the individual to eat on his or her own and can alert the caregiver when the individual has stopped eating. Fostering independent eating for as long as possible reduces caregiver involvement and enhances the dignity of the person with Alzheimer’s.

In this study, a vision-based framework for food-intake monitoring has been designed for persons with Alzheimer’s disease. The proposed framework uses skin color as the main cue for region-of-interest-based segmentation and includes a built-in tracking system that uses a controlled bounding-box technique capable of handling the hand-over-face occlusion problem. Additionally, motion cues are fused with the skin information for better detection of moving objects. The work also focuses on detecting eating and non-eating gestures using feature extraction and classification methods. The Upper Body Region (UBR) is detected using the Viola-Jones method, while a histogram of oriented gradients (HOG) is used for feature extraction and a support vector machine (SVM) for classification. To reduce false positives, Haar-like feature detection is integrated with the combined template image (CTI) technique. This integration enables the detection of hand movement within the UBR with higher accuracy. The framework, using any of the proposed techniques, can be custom-tailored to the person and the eating environment to ensure optimal results.
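As a minimal illustration of the HOG feature-extraction step described above (not the dissertation’s actual implementation, which in practice would rely on a library routine such as OpenCV’s `cv2.HOGDescriptor` together with a trained SVM), the sketch below computes simplified unsigned gradient-orientation histograms per cell; all function and parameter names here are illustrative, and a real detector would add block normalization on top:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient
    orientations, weighted by gradient magnitude. Illustrative only;
    production HOG adds overlapping-block normalization."""
    img = img.astype(np.float64)
    # Central-difference gradients (borders left at zero).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    n_rows, n_cols = img.shape[0] // cell, img.shape[1] // cell
    feats = np.zeros((n_rows, n_cols, bins))
    bin_width = 180.0 / bins
    for i in range(n_rows):
        for j in range(n_cols):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            for k in range(bins):
                # Magnitude-weighted vote into orientation bin k.
                feats[i, j, k] = m[idx == k].sum()
    return feats.ravel()

# A vertical edge produces horizontal gradients (orientation near 0 deg),
# so the first orientation bin dominates in every cell.
edge = np.zeros((16, 16))
edge[:, 8:] = 255.0
f = hog_features(edge)            # 2x2 cells * 9 bins = 36 features
```

The resulting fixed-length vector is what a linear SVM would consume to separate eating from non-eating gestures; in the framework above, this extraction would be applied inside the detected UBR window.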

The system has been tested using videos recorded in the Digital Signal and Image Processing Laboratory at Western Michigan University, as well as with the MOBISERV-AIIA dataset. The experimental results demonstrate the effectiveness of the proposed framework in monitoring food intake and its capacity to provide timely feedback. This prototype will be tested in an assisted living community in the Kalamazoo area. The proposed system has the capacity to ease caregiver workload at mealtime and to lay the groundwork for further work with cognitively impaired populations. As prevalence rates for Alzheimer’s disease continue to rise dramatically, further research on the efficacy of communicating with persons with Alzheimer’s via human-computer interaction (HCI) systems, particularly in the realm of behavioral modifications, is warranted.

Access Setting

Dissertation-Campus Only

Restricted to Campus until

12-2020
