The wearable platform is an important perspective from which to collect developmental data. It gives a developmental agent the ability to experience many rich aspects of human behavior, locomotion, interaction, and social structure without knowing how to actively participate in these activities. In the I Sensed project, we combine natural sensor modalities (camera, microphone, gyros) in a wearable framework to build a first prototype of such an agent. We have also taken the next step toward building robust statistical models with a massive data-collection experiment: 100 days of full surround video, audio, and orientation, amounting to over 500 gigabytes of data. The first challenge with this data is the discovery and prediction of daily patterns: can we automatically infer the typical paths through someone's day and their daily activities, predict what they will do next, and detect anomalies? Armed with this kind of omnivideo sensor data, we can also apply our learning tools to conversational scene analysis, helping the agent develop a rudimentary understanding of social interactions. This work is aimed at understanding the structure of face-to-face conversations.
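
The excerpt does not specify how the daily-pattern models are built. As a minimal sketch of the prediction-and-anomaly-detection idea, the following assumes the sensor streams have already been discretized into labeled activities; the activity labels, the first-order Markov assumption, and the anomaly threshold are all hypothetical illustrations, not the project's actual method.

```python
# Minimal sketch: first-order Markov model over discretized daily activities.
# Activity labels and threshold are illustrative assumptions only.
from collections import defaultdict

def fit_transitions(sequence):
    """Estimate P(next | current) from a sequence of activity labels."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        probs[cur] = {nxt: c / total for nxt, c in nxts.items()}
    return probs

def predict_next(probs, current):
    """Return the most likely next activity given the current one."""
    options = probs.get(current, {})
    return max(options, key=options.get) if options else None

def is_anomalous(probs, current, observed, threshold=0.05):
    """Flag a transition whose estimated probability falls below threshold."""
    return probs.get(current, {}).get(observed, 0.0) < threshold

# Toy routine: a repeated commute/work day with hypothetical labels.
day = ["home", "commute", "office", "lunch", "office", "commute", "home"] * 20
model = fit_transitions(day)
print(predict_next(model, "commute"))          # e.g. 'office'
print(is_anomalous(model, "home", "office"))   # True: never observed directly
```

In practice one would expect richer temporal models (for example, models conditioned on time of day) rather than a bare transition table, but the same structure supports both "what comes next" prediction and flagging low-probability departures from routine.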