The ‘beauty’ of epidemiology is its ability to generate hypotheses. Observations in the clinic or in large datasets can be formally tested and used to create new insights or to dismiss unsupported ideas. Observational research can identify new lines of inquiry that warrant fundamental or experimental research for validation. The ‘beasts’ within this type of research are spurious findings. Such false findings can occur even in scientifically robust studies, simply as a result of the consensus on statistical significance (a study with 20 statistical comparisons is expected to yield one significant finding by chance alone, given the conventional P-value threshold of 0.05; the short simulation sketch after the main text illustrates this). More common reasons for spurious findings include small sample sizes, residual confounding, multiple comparisons, and subgroup analyses without prespecified hypotheses or sufficient power. Replication is therefore important in establishing a true association between an exposure and a disease: being able to reproduce the findings of an observational study in another population strengthens the validity of the initial findings. This is often done in dermatoepidemiological studies, but the replication usually follows the original study in subsequent, separate publications.
In genetic epidemiology, replication studies are required (where possible) as part of the original publication because of the high rate of potential false-positive findings, especially in large datasets (such as genome-wide association studies) that use hypothesis-free approaches. The almost ridiculously large sample sizes required to detect the small effects of individual single-nucleotide polymorphisms necessitate the formation of large international consortia collaborating on the same disease. These consortia resemble large randomized clinical trials evaluating new drugs, with multiple centres recruiting hundreds of patients in more than 20 countries. If collaboration is possible (and almost mandatory) to reach sound scientific findings in these fields, those working in other areas should be motivated to collaborate as well.
Including replication studies, different observational designs and/or different populations within the initial study clearly increases the validity of the findings and strengthens the case for a causal interpretation of an association. In recent years, several excellent examples have demonstrated that studying the same topic in different populations, and/or with different designs, elevates the scientific value of the findings (e.g. atopic eczema and cardiovascular comorbidities, partner bereavement and herpes zoster, and healthcare utilization in patients with actinic keratosis). At the other end of the research continuum, the number of observational publications on the same topic may become so high that the next ‘me too’ study loses its relevance (e.g. psoriasis and cardiovascular comorbidities). Too much replication, without adding value to the existing literature, may actually be harmful because it reinforces hypotheses to the point where they become widely accepted without the required level of valid evidence. When specific topics or diseases become dominant in the literature, as seen for hidradenitis suppurativa or atopic dermatitis, this is often related to new pharmaceutical treatments entering the market. The focus of the pharmaceutical industry on specific diseases often improves understanding of the disease and its burden, but may also lead to replication studies that emphasize the burden or consequences of the disease.
However, in addition to gaining greater insight into diseases of interest to the pharmaceutical industry, there can also be a tendency to over-study some aspects of a disease, which can result in scientific waste. In general, in epidemiological studies the reuse of the same dataset [e.g. routinely collected (claims) data, national registries or surveys, or large population-based cohorts] is encouraged for different research questions in order to prevent scientific waste; however, we discourage the use of the same dataset for similar research questions. Patients and the scientific community would benefit more from comprehensive analyses of these often unique and rich data sources than from dividing the results of a single project into several publications over time. We fully understand that this advice currently works against the metrics used by authors’ institutions, and challenges the word-count restrictions imposed by journals, but we would nevertheless encourage authors and editors not to slice the salami too thinly.
Finally, research needs to address clinically relevant issues. Deriving relevant questions in patient-centred research is challenging. Although a biological rationale for study objectives is common, given the overwhelming volume of scientific literature one can probably connect any two biological phenomena by invoking the ‘six degrees of separation’ concept. Qualitative research takes the opposite approach, eliciting the issues that are of concern to specific patient groups or testing whether particular research questions are important from the perspective of a specific group. A less demanding alternative is to include a patient stakeholder in the project group while developing the study questions and methodology. It is worthwhile investing time in defining the most appropriate question, and in making full use of the available data sources, before embarking on a research project.
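To make the multiple-comparisons point above concrete, the following is a minimal simulation sketch (not part of the original argument; the thresholds and counts are illustrative) written in Python. It simulates many hypothetical studies, each performing 20 comparisons on purely random data, and shows that roughly one ‘significant’ result per study, and about a 64% chance of at least one spurious finding, are expected at P < 0.05.

# Minimal sketch (illustrative only): expected false positives when a
# hypothetical study performs 20 comparisons of purely random (null) data
# at the conventional P < 0.05 threshold.
import random

ALPHA = 0.05          # conventional significance threshold
N_COMPARISONS = 20    # comparisons within one hypothetical study
N_STUDIES = 10_000    # number of simulated studies

total_false_positives = 0
studies_with_at_least_one = 0

for _ in range(N_STUDIES):
    # Under the null hypothesis, a P value is uniformly distributed on (0, 1).
    p_values = [random.random() for _ in range(N_COMPARISONS)]
    hits = sum(p < ALPHA for p in p_values)
    total_false_positives += hits
    studies_with_at_least_one += hits > 0

print(f"Expected false positives per study: {total_false_positives / N_STUDIES:.2f}")     # ~1.0 (20 x 0.05)
print(f"Studies with >=1 spurious finding:  {studies_with_at_least_one / N_STUDIES:.0%}")  # ~64% (1 - 0.95^20)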
[1] J. Ioannidis, et al. Validation and Utility Testing of Clinical Prediction Models: Time to Change the Approach. JAMA, 2020.
[2] R. Stern, et al. Healthcare utilization and management of actinic keratosis in primary and secondary care: a complementary database analysis. British Journal of Dermatology, 2019.
[3] C. Gieger, et al. Association of Atopic Dermatitis with Cardiovascular Risk Factors and Diseases. Journal of Investigative Dermatology, 2017.
[4] L. Smeeth, et al. Partner Bereavement and Risk of Herpes Zoster: Results from Two Population-Based Case-Control Studies in Denmark and the United Kingdom. Clinical Infectious Diseases, 2016.
[5] T. Agoritsas, et al. How to use a subgroup analysis: users' guide to the medical literature. JAMA, 2014.
[6] R. Tibshirani, et al. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet, 2014.
[7] J. Scadding, et al. The James Lind Alliance: patients and clinicians should jointly identify their priorities for clinical trials. The Lancet, 2004.