In the DIMOSIC (DIfferent MOdels, Same Initial Conditions) project, forecasts from different global medium-range forecast models have been created from the same initial conditions. The dataset consists of 10-day deterministic forecasts from seven models and includes 122 forecast dates spanning one calendar year. All forecasts are initialized from the same ECMWF operational analyses to minimize differences due to initialization. The models are run at or near their respective operational resolutions to explore similarities and differences between operational global forecast models. The main aims of this study are to (1) evaluate forecast skill and its dependence on model formulation, (2) assess systematic differences and errors at short lead times, (3) compare multi-model ensemble spread with model uncertainty schemes, and (4) identify models that generate similar solutions. Our results show that all models in this study are capable of producing high-quality forecasts when started from a high-quality analysis. At the same time, we find a large variety of model biases, in both temperature and precipitation. We are able to identify models whose forecasts are more similar to each other than to those of other systems, owing to the use of similar model physics packages. However, in terms of multi-model ensemble spread, our results also demonstrate that forecast sensitivities to different model formulations are substantial. We therefore believe that the diversity in model design that stems from parallel development efforts at global modeling centers around the world remains valuable for future progress in the numerical weather prediction community.