The iBracelet and the Wireless Identification and Sensing Platform promise the ability to infer human activity directly from sensor readings.

Many ubiquitous computing scenarios require an intelligent environment to infer what a person is doing or attempting to do. Historically, human-activity tracking techniques have focused on direct observation of people and their behavior, using cameras, worn accelerometers, or contact switches. A recent promising avenue [4, 7] is to supplement direct observation with an indirect approach, inferring people's actions from their effect on the environment, especially on the objects with which they interact.

Researchers have applied three main techniques to human-activity detection: computer vision, active sensor beacons [4], and passive RFID. Vision involves well-known robustness and scalability challenges. Active sensor beacons provide accurate object identification but require batteries, making them impractical for long-term dense deployment. RFID tags have the same object-identification accuracy as active beacons, with the advantage of being battery-free; however, unlike sensor beacons, they are unable to detect motion.
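The object-based indirect approach can be illustrated with a minimal sketch: given the set of objects a person has recently touched (as an RFID reader might report), score candidate activities by how well the touched objects match each activity's typical object set. The activity profiles and object names below are hypothetical examples for illustration only, not data or methods from this article.

```python
# Hedged sketch: infer a likely activity from touched-object reports.
# The profiles below are invented examples, not from the article.
ACTIVITY_PROFILES = {
    "making tea":      {"kettle", "mug", "tea_box", "spoon"},
    "brushing teeth":  {"toothbrush", "toothpaste", "faucet"},
    "taking medicine": {"pill_bottle", "glass", "faucet"},
}

def infer_activity(touched_objects):
    """Return the activity whose object profile best matches the observed
    touches (Jaccard similarity), or None if no profile overlaps."""
    touched = set(touched_objects)
    best, best_score = None, 0.0
    for activity, profile in ACTIVITY_PROFILES.items():
        overlap = len(touched & profile)
        union = len(touched | profile)
        score = overlap / union if union else 0.0
        if score > best_score:
            best, best_score = activity, score
    return best

print(infer_activity(["kettle", "mug", "spoon"]))  # -> making tea
```

Real systems replace the fixed profiles with models learned from labeled traces, but the core idea is the same: object identity alone, with no camera or worn sensor, already constrains which activity is under way.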