Video data is being generated in very large amounts, and there is an urgent demand for understanding video content at the semantic level. In this paper, a Video Observation Information Service architecture is put forward to meet this requirement. The framework can be conceptually divided into four parts: data providers that collect sensor observation video; intelligent event analysis, with a video decoder and a Digital Signal Processor (DSP) as core components, which analyzes video frames to extract features and detect events; an observed-event ontology that enables computers to understand real-world events at the semantic level; and, finally, interaction with users through an SOS-based Video Observation Information Service. New sensor systems are registered with the web service and their observation information is inserted into it, after which event semantic information can be accessed, queried, and obtained by users and clients according to their different demands. The architecture bridges the gap between video data collection, data processing, information sharing through web services, and user requests, and its feasibility is demonstrated by an implementation on our test platform.
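As a minimal sketch of the client-facing part of such an architecture, the snippet below shows how a user or client application might discover an SOS endpoint and query event observations over a time window. The endpoint URL, offering name, and observed-property URN are hypothetical placeholders, not values from the paper; the request parameters follow the general OGC SOS key-value-pair convention.

```python
# Hypothetical client interaction with an SOS-based Video Observation
# Information Service. Endpoint, offering, and property names are
# illustrative assumptions, not identifiers defined by the paper.
import requests

SOS_ENDPOINT = "http://example.org/sos"  # hypothetical service endpoint

# 1. Discover the service and its offerings via GetCapabilities.
caps = requests.get(SOS_ENDPOINT, params={
    "service": "SOS",
    "request": "GetCapabilities",
})

# 2. Query semantic event observations detected from the video streams
#    within a given time window (offering/property names are assumed).
obs = requests.get(SOS_ENDPOINT, params={
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "VideoEventOffering",
    "observedProperty": "urn:example:def:property:videoEvent",
    "eventTime": "2024-01-01T00:00:00Z/2024-01-02T00:00:00Z",
})

# The response would carry the event observations encoded by the service,
# ready for further filtering or display on the client side.
print(obs.text)
```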