Stroke survivors and patients with paralysis face significant mobility challenges. Autonomous vehicle technologies can improve accessibility for people with limited mobility. Designing such a vehicle requires a network of sensors together with a sensor-fusion system to accurately map the surrounding environment, which is necessary for safe driving. This study proposes a brain-actuated intelligent vehicle equipped with a network of sonar and vision-based sensors that can be controlled either by Steady-State Visual Evoked Potential (SSVEP) brain signals or by a joystick. For electroencephalogram (EEG) signal detection, stimuli flickering at four frequencies were displayed in a GUI on an LCD screen, and signals were recorded at a sampling frequency of 1000 Hz. The recordings were segmented into 2-second windows with 50% overlap, and features were extracted using canonical correlation analysis (CCA). In addition, a low-level navigation control system based on a smart sensor network was implemented to maximize user safety. Data from the various sensors were sent to the intelligent control unit for sensor fusion to improve the performance and robustness of the designed vehicle. The system was evaluated for both offline and online classification. Offline control achieved 96% accuracy with an information transfer rate of 104.48 bits/min, while run-time control achieved 95% accuracy. These results show that the proposed system improves both accuracy and information transfer rate compared with existing systems.