For an Unmanned Ground Vehicle (UGV) to operate effectively, it must be able to perceive its environment in an accurate and robust manner. This is done by creating a world representation that encompasses all the perceptual information the UGV needs to understand its surroundings. These perceptual needs are a function of the robot's mobility characteristics, the complexity of the environment in which it operates, and the mission with which the UGV has been tasked. Most perceptual systems are designed with a predefined level of vehicle, environmental, and mission complexity in mind. This can cause the robot to fail when it encounters a situation for which it was not designed, because its internal representation is insufficient for effective navigation. This paper presents a research framework currently being investigated by Defence R&D Canada (DRDC) that will ultimately relieve robotic vehicles of this problem by allowing the UGV to recognize representational deficiencies and change its perceptual strategy to alleviate them. This will allow the UGV to move in and out of a wide variety of environments, ranging from outdoor rural to indoor urban, at run time without reprogramming. We present sensor and perception work currently under way and outline our future research in this area.