In this paper, we propose an efficient and effective driving-view generation system as a module of the "Mixed-Reality Traffic Experiment Space", an enhanced driving/traffic simulation framework that we and our colleagues have been developing for the Sustainable ITS project at the University of Tokyo. Conventional driving simulators render their views from sets of polygon-based objects, which limits photo-realism and incurs a huge human cost for data construction. We introduce an image/geometry-based hybrid method that achieves greater photo-realism at a lower human cost. Images for the dataset are captured from the real world along a public road by video cameras mounted on our data acquisition vehicle, and the driving view is then synthesized from this image dataset in real time. The paper mainly describes the details of data acquisition and view rendering.