CareMedia: Automated Video and Sensor Analysis for Geriatric Care. Carnegie Mellon University, March 2003 Annual Progress Report.

In the absence of objective, reliable assessment and outcomes measurement methodologies in a nursing home, the effectiveness of behavioral and pharmacological interventions cannot be determined. Pervasive technology holds the promise of objective, real-time, continuous assessment and outcomes measurement methodologies that were previously unfeasible. Such technologies can contribute greatly to a deeper understanding of the activity and behavior patterns of individual residents, and of the physical, environmental, and psychosocial correlates of these patterns.

Bharucha, A., Allin, S. and Stevens, S., “CareMedia: Towards Automated Behavior Analysis in the Nursing Home Setting,” in The International Psychogeriatric Association Eleventh International Conference, Aug. 17-22, 2003. Several hours of surveillance-type video were captured in a nursing home. The task of data reduction and extraction of high-level activity information was approached through both automated and manual techniques. For the manual encoding, four undergraduate students were trained by a geriatric psychiatrist to code the data frame by frame. A computer interface allowed coders to annotate behaviors of interest, as well as physical pose and ambulatory status. Behaviors of interest were identified with the Cohen-Mansfield Agitation Inventory and grouped into four sub-categories: physically aggressive, physically non-aggressive, verbally aggressive, and verbally non-aggressive. These manual encodings are currently informing the development of automated techniques at Carnegie Mellon University to extract information relevant to the detection of anomalous and disruptive physical activities, including automated tracking and extraction of navigational patterns.

Gao, J., Hauptmann, A.G., Bharucha, A. and Wactlar, H.D., “Dining Activity Analysis Using Hidden Markov Models,” accepted to The 17th International Conference on Pattern Recognition (ICPR’04), Cambridge, United Kingdom, Aug. 23-26, 2004. Abstract: We describe an algorithm for dining activity analysis in a nursing home. Based on several features, including motion vectors and the distance between moving regions in the subspace of an individual person, a hidden Markov model is proposed to characterize the different stages of dining activity, which follow a certain temporal order. Using the HMM, we are able to identify the start and end of individual dining events with high accuracy and a low false-positive rate. This approach could be successful in assisting caregivers in assessments of residents' activity levels over time.

Gao, J., Hauptmann, A.G. and Wactlar, H.D., “Combining Motion Segmentation with Tracking for Activity Analysis,” submitted to The Sixth International Conference on Automatic Face and Gesture Recognition (FG’04), Seoul, Korea, May 17-19, 2004. Abstract: We explore a novel motion feature as the appropriate basis for classifying or describing a number of fine motor human activities. Our approach not only estimates motion directions and magnitudes in different image regions, but also provides accurate segmentation of moving regions.
Through a combination of motion segmentation and region tracking techniques, while filtering for temporal consistency, we achieve a balance between accuracy and reliability of motion feature extraction. To identify specific activities, we characterize the dominant directions of relative motions. Experimental results show that this approach to motion feature analysis could be successful in assisting caregivers at a nursing home in assessments of patients' activity levels over time.

Hauptmann, A.G., Gao, J., Yan, R., Qi, Y., Yang, J. and Wactlar, H.D., “Aiding Geriatric Patients and Caregivers through Automated Analysis of Nursing Home Observations,” to be published in IEEE Pervasive Computing, April-June special issue: Pervasive Computing for Successful Aging. Abstract: Through pervasive activity monitoring in a skilled nursing facility, a continuous audio and video record is captured.
Our CareMedia Project research analyzes this video information by automatically tracking people, assisting in efficiently labeling individuals, and characterizing selected activities and actions. Special emphasis is given to detecting eating activity in the dining hall and to personal hygiene. Through this work, the video record is transformed into an information asset that can provide geriatric care specialists with greater insight into, and evaluation of, behavioral problems of the elderly. Evaluations of the effectiveness of analyzing such a large video record illustrate the feasibility of our approach.

Hauptmann, A.G., Jin, R. and Wactlar, H.D., “Data Analysis for a Multimedia Library,” in Text and Speech-Triggered Information Access, Renals, S. and Grefenstette, G. (eds.), Springer, Berlin, pp. 6-37, 2003. Abstract: This book section describes the indexing, search and retrieval of various combinations of audio, video, text and image media, and the automated content processing that enables it. The intent is to provide a framework for data analysis in multimedia digital libraries. The introduction briefly distinguishes digital from traditional libraries and touches on the specific issues important to searching the content of multimedia libraries. The second section introduces the Informedia Digital Video Library as an example of a multimedia library, including a quick tour of its functionality. The next section discusses the processing of audio and image information as it relates to a multimedia library. Section four illustrates the interplay between audio and video information, using a video information retrieval experiment as an example. Section five discusses the exporting and sharing of metadata in a digital library using MPEG-7. Finally, section six provides one vision of a future digital library, in which all personal memory can be recorded and accessed.

Jin, R., Hauptmann, A., Carbonell, J., Si, L. and Liu, Y., “A New Boosting Algorithm Using Input Dependent Regularizer,” 20th International Conference on Machine Learning (ICML'03), Washington, DC, August 21-24, 2003. Abstract: AdaBoost has proved to be an effective method to improve the performance of base classifiers, both theoretically and empirically. However, previous studies have shown that AdaBoost may suffer from overfitting, especially on noisy data. In addition, most current work on boosting assumes that the combination weights are fixed constants and therefore does not take particular input patterns into consideration.
In this paper, we present a new boosting algorithm, “WeightBoost”, which tries to solve these two problems by introducing an input-dependent regularization factor to the combination weight. Similarly to AdaBoost, we derive a learning procedure for WeightBoost.
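The staged-HMM formulation in the dining-analysis abstract above can be illustrated with a minimal left-to-right model. The stage names, transition and emission probabilities, and the quantized "motion level" observations below are illustrative assumptions for the sketch, not the paper's trained parameters.

```python
# A minimal left-to-right HMM sketch of staged dining activity. All
# parameters here are illustrative assumptions, not the paper's model.
import numpy as np

states = ["pre-dining", "eating", "post-dining"]   # temporal stages, in order
# Left-to-right transitions: each stage can only persist or advance.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
# Emissions over a quantized motion feature (0 = low, 1 = high):
# eating tends to produce sustained high fine-motor motion.
B = np.array([[0.8, 0.2],   # pre-dining
              [0.2, 0.8],   # eating
              [0.8, 0.2]])  # post-dining
pi = np.array([1.0, 0.0, 0.0])  # always start in the pre-dining stage

def viterbi(obs):
    """Most likely stage sequence for a list of observation indices."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A + 1e-12)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]] + 1e-12)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Mostly low motion, then a sustained high-motion burst, then low again.
sequence = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
stages = viterbi(sequence)
start = stages.index("eating")                       # first decoded eating frame
end = len(stages) - stages[::-1].index("eating") - 1  # last decoded eating frame
print(start, end)  # -> 3 6
```

A left-to-right transition matrix encodes the fixed temporal order of stages, so identifying the start and end of a dining event reduces to locating the boundaries of the "eating" run in the decoded path.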
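The "dominant directions of relative motions" feature from the FG'04 abstract above could be sketched as a magnitude-weighted direction histogram over the motion vectors of a segmented region. The 8-bin quantization, the magnitude threshold, and the synthetic vectors are all assumptions for illustration, not the paper's implementation.

```python
# Sketch: summarize a region's motion by its dominant direction, computed
# as the strongest bin of a magnitude-weighted direction histogram.
import numpy as np

def dominant_direction(dx, dy, n_bins=8, min_mag=0.5):
    """Return (bin_index, bin_center_radians) of the dominant motion direction."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    mag = np.hypot(dx, dy)
    keep = mag > min_mag                      # ignore near-static pixels
    width = 2 * np.pi / n_bins
    ang = np.arctan2(dy[keep], dx[keep]) % (2 * np.pi)
    # Bins are centered so that bin 0 covers directions around 0 rad.
    bins = np.floor((ang + width / 2) / width).astype(int) % n_bins
    hist = np.bincount(bins, weights=mag[keep], minlength=n_bins)
    b = int(hist.argmax())
    return b, b * width

# Synthetic region moving mostly rightward (+x), with a little noise.
rng = np.random.default_rng(0)
dx = 2.0 + 0.1 * rng.standard_normal(100)
dy = 0.1 * rng.standard_normal(100)
b, theta = dominant_direction(dx, dy)
print(b)  # bin 0, i.e. directions around 0 rad (rightward motion)
```

Weighting by magnitude lets strong coherent motion dominate the histogram, while the threshold filters out the near-static background pixels that segmentation leaves at region borders.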
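For context on the ICML'03 abstract above, the following is a minimal sketch of standard AdaBoost with one-dimensional threshold stumps, showing the fixed combination weights (alpha) that WeightBoost replaces with an input-dependent factor; the paper's actual regularizer is not reproduced here, and the toy data set is an assumption.

```python
# Minimal standard AdaBoost with threshold stumps; note the combination
# weight alpha is a fixed constant per round, independent of the input.
import numpy as np

def adaboost_train(x, y, rounds=5):
    """x: 1-D features, y: labels in {-1, +1}. Returns [(threshold, sign, alpha)]."""
    n = len(x)
    w = np.full(n, 1.0 / n)                 # example weights, updated each round
    model = []
    for _ in range(rounds):
        best = None
        # Exhaustive search over stumps h(x) = s * sign(x - t).
        for t in np.unique(x):
            for s in (-1.0, 1.0):
                pred = s * np.where(x > t, 1.0, -1.0)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, s, pred)
        err, t, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # fixed combination weight
        w *= np.exp(-alpha * y * pred)          # reweight toward mistakes
        w /= w.sum()
        model.append((t, s, alpha))
    return model

def adaboost_predict(model, x):
    x = np.asarray(x, float)
    score = sum(a * s * np.where(x > t, 1.0, -1.0) for t, s, a in model)
    return np.sign(score)

# Toy labels: positive iff the feature exceeds 3 (learnable by one stump).
x = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
y = np.array([-1., -1., -1., -1., 1., 1., 1., 1.])
model = adaboost_train(x, y)
print(adaboost_predict(model, x))  # recovers the training labels
```

WeightBoost's observation, as summarized in the abstract, is that these constant alphas ignore the input pattern; its input-dependent regularization factor makes each base classifier's vote depend on where in the input space the example falls.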