For many data-parallel computing systems such as Spark, a job usually consists of multiple computation stages and inter-stage communication (i.e., coflows). Many efforts have been made to schedule coflows and jobs independently. However, simply combining coflow scheduling with job scheduling prolongs the average job completion time (JCT) due to conflicts between the two. For this reason, we propose a new scheduling-unit abstraction, named coBranch, which takes the dependency between computation stages and coflows into consideration, so that coflows and jobs can be scheduled jointly. Moreover, mainstream coflow schedulers are order-preserving, i.e., all coflows of a high-priority job are prioritized over those of a low-priority job. We observe that this order-preserving constraint leads to low inter-job parallelism. To overcome this problem, we employ an urgency-based mechanism to schedule coBranches, which aims to decrease the average JCT by enhancing inter-job parallelism. We implement the urgency-based coBranch Scheduling (BS) method on Apache Spark, conduct prototype-based experiments, and evaluate its performance against the shortest-job-first critical-path method and the FIFO method. Results show that our method reduces the average JCT by around 10 and 15 percent, respectively. Large-scale simulations based on the Google trace show that our method performs even better, reducing the average JCT by 23 and 35 percent, respectively.