Multiple Authors, Multiple Problems

The average number of authors on scientific papers is skyrocketing. That's partly because labs are bigger, problems are more complicated, and a wider range of subspecialties is needed. But it's also because U.S. government agencies like the National Institutes of Health (NIH) have started to promote “team science.” As physics developed in the post-World War II era, federal funds built expensive national facilities, and these served as surfaces on which collaborations could crystallize naturally. That has produced some splendid results. Multidisciplinary teams have been slower to develop in biology, but now the rush is on. NIH recently sponsored a meeting entitled “Catalyzing Team Science”—something new for an agency traditionally wedded to the investigator-initiated, small-project kind of science. Increasingly complex problems, NIH seems to be saying, will require larger and more diversely specialized groups of investigators. So team science is part of its road map: a “Good Thing.”

That may be right. Multiple authorship, though—however good it may be in other ways—presents problems for journals and for the institutions in which these authors work. For the journals, long lists of authors are hard to deal with in themselves. But those long lists give rise to more serious questions when something goes wrong with the paper. If there is research misconduct, should the liability be joint and several, accruing to all authors? If not, then how should it be allocated among them? If there is an honest mistake in one part of the work but not in others, how should an evaluator aim his or her critique? Such questions plagued the committee that examined the recent high-profile case of fraud in the physics community, the Schön affair, and they will surely trouble others.

When penalties for research misconduct are considered, it is often argued that an identification of each author's role in the research should be required, in order to help us fix blame. Critics of the notion that authors should share the blame ask, for example, “How can the molecular biologist be expected to certify the honesty and quality of the crystallographer's work?” Some would answer “by knowing that person well enough to rely on him or her.” I rather like that response, so with respect to assigning blame for research misconduct, I take the “joint and several” position, knowing that it puts me in a quirky minority.

Various practical or impractical suggestions have emerged during the longstanding debate on this issue. One is that each author should provide, and the journal should then publish, an account of that author's particular contribution to the work. Although Science will make it possible for authors to do that, we cannot monitor the authors' designations or negotiate possible disputes over which author actually did what (there's enough of that already, thank you). And listing the individual contributions of each of a couple of dozen authors will, even if it appears only electronically, add some length and complexity to the communication.

But a different view of the problem, and perhaps of the solution, comes as we get to university committees on appointments and promotions, which is where the authorship rubber really meets the road. Half a lifetime of involvement with this process has taught me how much authorship matters.
I have watched committees attempt to decode sequences of names (“Is it good or bad that her major professor's name wasn't at the end of the author roster?”), agonize over whether a much-cited paper was really the candidate's work or a coauthor's, and send back recommendations asking for more specificity about the division of responsibility. Problems of this kind change the argument, supporting the case for asking authors to define their own roles. After all, if quality judgments about individuals are to be made on the basis of their personal contributions, then the judges had better know what those individuals did. But if questions arise about the validity of the work as a whole, whether as challenges to its conduct or as evaluations of its influence in the field, a team is a team, and the members should share the credit or the blame. Thus, Science would be glad to see authors define their roles—briefly, please!—but has no plans to pass out the Newcomb Cleveland Prize, our annual award for the best Science paper, in little bits and pieces.