Decision making is fundamental to safe autonomous driving in highway scenarios. The mainstream architecture for this task is the classical deep Q-network (DQN). However, two major issues remain with the DQN: 1) because of its traditional experience replay mechanism, the model tends to learn bias from imbalanced data, and 2) for multiobjective tasks, a unitary reward function limits the model's ability to learn representative domain knowledge. To address these problems, this article proposes a DQN model based on a prioritized experience replay (PER) mechanism with a multireward architecture (MRA) for highway driving decision making. For balanced training, the importance of each memory sample is encoded as the error between the Q estimate and the Q target. For more directed training, the single reward function is decomposed into three minor ones based on prior knowledge, emphasizing speed, overtaking, and lane changing. Experimental results indicate that the proposed prioritized MRA (PMRA) DQN outperforms the traditional DQN, achieving higher driving speeds, less frequent lane changing, and safer overtaking.
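To make the two mechanisms concrete, the sketch below illustrates how a PER priority can be derived from the Q-estimate/Q-target error and how a reward might be decomposed into speed, overtaking, and lane-changing terms. This is a minimal illustration, not the paper's implementation: the priority exponent, the reward weights, and all function names are our assumptions.

```python
import numpy as np

# Hypothetical sketch -- the abstract does not give hyperparameters or
# reward weights, so every constant and name below is an assumption.

ALPHA = 0.6      # assumed priority exponent (a common PER default)
EPSILON = 1e-6   # small constant so zero-error samples can still be drawn

def priority(q_estimate, q_target):
    """Priority of a memory sample: magnitude of the error between the
    Q estimate and the Q target, as described in the abstract."""
    return (abs(q_target - q_estimate) + EPSILON) ** ALPHA

def sample_indices(priorities, batch_size, rng=None):
    """Draw a batch with probability proportional to priority, so rare,
    high-error transitions are replayed more often than under uniform replay."""
    rng = rng or np.random.default_rng()
    priorities = np.asarray(priorities, dtype=float)
    probs = priorities / priorities.sum()
    return rng.choice(len(priorities), size=batch_size, p=probs)

def decomposed_reward(speed, overtook, changed_lane):
    """One reward term per sub-objective (speed, overtaking, lane changing)
    instead of a single scalar; the weights are placeholders, not the paper's."""
    r_speed = speed / 30.0                  # assumed: speed normalized by a target
    r_overtake = 1.0 if overtook else 0.0   # assumed: bonus for a safe overtake
    r_lane = -0.1 if changed_lane else 0.0  # assumed: small penalty per lane change
    return np.array([r_speed, r_overtake, r_lane])
```

Under this decomposition, each sub-reward can drive its own Q-value head (or a weighted sum), so training signals for speed, overtaking, and lane changing stay separable rather than being collapsed into one scalar from the start.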