Ensuring transparency in computational modeling

Computational models are of great scientific and societal importance because they are used every day in a wide variety of products and policies. However, computational models are not pure abstractions; rather, they are tools constructed and used by humans. As such, computational models are only as good as their inputs and assumptions, including the values of those who build and use them. The role of ethics and values in the process of computational modeling can have far-reaching consequences, but it remains a significantly understudied topic in need of further research. This article focuses on one particular value, transparency, documenting both why models should be transparent and how they can be made so. Transparency is the capacity of a model to be clearly understood by all stakeholders, especially the users of the model. Transparent models require that modelers are aware of the assumptions built into their models and that they clearly communicate these assumptions to users. It is important that computational modelers recognize both the potential for and the importance of building computational models to be transparent. This article builds on an earlier article arguing that computational models should be designed transparently to ensure parity and understanding among stakeholders, including modelers, clients, users, and those affected by the model. Data from an empirical study of computational modelers working in a corporate research laboratory are used to support this argument by demonstrating the importance of transparency from political, economic, and legal perspectives. This article also illustrates how transparency can be embedded in computational models throughout the stages of the modeling process.