Evidence-Based Medicine: An Oral History
Richard Smith discusses the history of evidence-based medicine with Iain Chalmers, Kay Dickersin, Paul Glasziou, Muir Gray, Gordon Guyatt, Brian Haynes, Drummond Rennie, and David Sackett.
The phrase evidence-based medicine (EBM) was coined by Gordon Guyatt1 and then appeared in an article in The Rational Clinical Examination series in JAMA in 1992,2 but the roots of EBM go much further back. The personal stories of the origins of EBM were recently explored in a filmed oral history of some of the individuals most strongly associated with the birth of the movement (see Video, Evidence-Based Medicine: An Oral History).
JAMA and the BMJ invited 6 individuals (including us, with one of us as host, R.S.) who have played a prominent part in the development of EBM to participate in an oral history event and filming. Videos of this event and of interviews with 3 other EBM leaders (Box) have been woven together and may be accessed at http://ebm.jamanetwork.com. Just 20 years after the term EBM began to be used, an early and informal history has emerged.
Leaders Participating in Evidence-Based Medicine: An Oral History
Iain Chalmers, MBBS, DSc, coordinating editor, The James Lind Library and Testing Treatments Interactive
Kay Dickersin, MA, PhD, director, Johns Hopkins Center for Clinical Trials and director of the US Cochrane Center
Paul Glasziou, FRACGP, PhD, general practitioner, clinical researcher, and former director of Oxford University’s Centre for Evidence-Based Medicine
Muir Gray, CBE, DSc, MD, FCLIP, director, Better Value Healthcare
Gordon Guyatt, BSc, MD, MSc, FRCPC, distinguished professor of medicine and of clinical epidemiology and biostatistics, McMaster University
Brian Haynes, MD, PhD, professor of clinical epidemiology and medicine and chief of the Health Information Research Unit, McMaster University
Drummond Rennie, MD, FRCP, former contributing deputy editor, JAMA, and adjunct professor of medicine, P.R. Lee Institute for Health Policy Studies, University of California, San Francisco
David L. Sackett, OC, FRSC, MD, MDHC, FRCP, director, Trout Research and Education Centre at Irish Lake
Richard Smith, MBChB, CBE, FMedSci, FRCPE, FRCGP, director, UnitedHealth Chronic Disease Initiative and former editor, BMJ
Three individuals from an earlier generation were particularly important in inspiring the people interviewed: Thomas C. Chalmers, Alvan R. Feinstein, and Archibald Cochrane. Some of the contributions of Tom Chalmers, who died in 1995 at age 78, were recently described in JAMA.3 David Sackett memorializes Chalmers’ 1955 report of a randomized factorial trial of bed rest and diet for hepatitis4: “Reading this paper not only changed my treatment plan for my patient. It forever changed my attitude toward conventional wisdom, uncovered my latent iconoclasm, and inaugurated my career in what I later labeled ‘clinical epidemiology.’”5 The rigorous approach taken by Tom Chalmers toward randomized trials and his early adoption of meta-analysis were key to the development of these tools of evidence. Alvan Feinstein, a clinician and researcher at Yale who died in 2001 at age 75, was important in defining clinical epidemiology and in first showing how medical practice could be studied.6 Archie Cochrane, a clinician, epidemiologist, and professor at the Welsh National School of Medicine, who died in 1988 at age 79, published his seminal book Effectiveness and Efficiency: Random Reflections on Health Services in 1972.7 His work8 was the inspiration for the Cochrane Collaboration, which has played a central role in promoting EBM.
Several of those interviewed identified when they began to be aware of the deficiencies in what might be called “expert-based medicine.” Brian Haynes, a professor of clinical epidemiology and biostatistics at McMaster University, began his journey to EBM in medical school in 1969 when he was lectured on the theories of Sigmund Freud. He asked the lecturer for the evidence that the theories were “true.” The lecturer answered candidly that he did not think that there was any evidence and that he had been sent by the chair of the department, a Freudian, to give the lecture. “I had,” says Haynes, “an intense tingle in my body as I wondered how much of my medical education was based on unproved theories” (Evidence-Based Medicine: An Oral History Video).
Iain Chalmers, cofounder of the Cochrane Collaboration and now editor of the James Lind Initiative, attended medical school in the 1960s and like every other student was filled full of facts to regurgitate in examinations. He wasn’t given the tools to find out what worked, and “in retrospect,” he says, “I’m angry about that” (Evidence-Based Medicine: An Oral History Video). In the early 1970s, Iain Chalmers moved to Cardiff to research a perinatal epidemiology database. Using the observational studies in the database, he looked for evidence of benefit from the increasing number of interventions in obstetrics and could not find any. The work then expanded to the establishment of the UK National Perinatal Epidemiology Unit, which had a guiding principle of using existing evidence before generating new research. Chalmers and colleagues conducted systematic reviews of evidence, particularly randomized trials, and in 1989 published Effective Care in Pregnancy and Childbirth.9
David Sackett, former professor of medicine at McMaster University, is regarded by many as “the father of evidence-based medicine.” In the late 1960s at the age of 32, he was invited by John Evans, an internist, to join a new and different kind of medical school at McMaster. Students would learn from the problems of patients, and epidemiology and statistics would be taught together with the clinical disciplines. After some years of the McMaster program, Sackett and his colleagues decided that they wanted to share what they were doing and wrote a series of articles on what they called “critical appraisal,” which appeared in the Canadian Medical Association Journal in 1981.10 At that time, while on sabbatical in Dublin, Sackett began to write with others Clinical Epidemiology: A Basic Science for Clinical Medicine,11 which started in 1985 as a book about the critical appraisal of research and developed in the second and third editions into a book about research methods and “the bible” of EBM.
Evidence-based medicine grew out of critical appraisal. When Gordon Guyatt, currently a professor of epidemiology and biostatistics and medicine at McMaster University, took over as director of the internal medicine residency program at McMaster in 1990, he wanted to change the program so that physicians managed patients based not on what authorities told them to do but on what the evidence showed worked. He needed a name, and the first was “scientific medicine.” The faculty reacted against this name with rage, arguing that basic scientists did scientific medicine. The next name was “evidence-based medicine” (Evidence-Based Medicine: An Oral History Video).
Subsequently, JAMA (through one of us, D.R.) established relationships with Sackett and Guyatt that led eventually to 2 pioneering series of articles in JAMA. The first was The Rational Clinical Examination,12,13 which was intended “to make a science out of taking a history and doing an examination.” These enterprises are fundamental to medicine but had not been scientifically studied. The second was the Users’ Guides to the Medical Literature, which was designed to help clinicians keep up to date by enabling them to interpret the burgeoning medical literature and to facilitate clinical decisions based on evidence from the medical literature rather than hope or authority.14
In the Oral History Video, Sackett distinguishes EBM from critical appraisal because it combines research evidence with clinical skills and patient values and preferences. He comments that clinicians have to be able to make the diagnosis and then discuss options with patients. Sackett uses the example of nonvalvular atrial fibrillation in which a patient has a small risk of a stroke. He asks and answers, “Should the patient take warfarin and so risk a bleed? Most patients see a stroke as about 4 times worse than a bleed. You combine that with number needed to treat and number needed to harm and conclude that you are about 11 times more likely to help rather than harm a patient by treating him or her with warfarin” (Evidence-Based Medicine: An Oral History Video).
Guyatt acknowledges that in the 1992 JAMA article there was little about patient values.2 It was over the next 5 years that patient values and preferences became much more central, and since then strongly emphasized (Evidence-Based Medicine: An Oral History Video).
Muir Gray, a public health physician and UK National Health Service manager, and Iain Chalmers were both inspired by the program at McMaster and persuaded Sackett to move to Oxford in 1994, where he worked as a clinician and was also director of the Centre for Evidence-Based Medicine. Sackett worked to spread EBM to the rest of the United Kingdom, Europe, and beyond. He visited most of the UK district general hospitals and many in Europe and would begin his visit by doing a round on patients admitted the previous night with young physicians and showing EBM in action. The young physicians realized that they could challenge their seniors in a way that was not possible with expert-based medicine. It was liberating and democratizing (Evidence-Based Medicine: An Oral History Video).
Evidence-based medicine quickly became popular, Sackett believes, for 2 main reasons: it was supported by senior clinicians who were secure in their practice and happy to be challenged and it empowered young physicians—and subsequently nurses and other clinicians. Evidence-based medicine did, however, produce a backlash, particularly, says Sackett, “among middle-level guys who were used to making pronouncements,” including an unsigned, critical editorial in The Lancet in 1995 titled “Evidence-based medicine, in its place.”15 Among the many responses to that backlash was an editorial in the BMJ by Sackett and others titled “Evidence based medicine: what it is and what it isn’t.”16 That BMJ editorial, says Sackett, “turned the whole thing around.” It carefully refuted all the complaints made against EBM: it wasn’t old hat, impossible to practice, cookbook medicine, the creature of managers and purchasers, or concerned only with randomized trials (Evidence-Based Medicine: An Oral History Video).
The systematic evidence of what worked in pregnancy and childbirth stimulated the thought that the same could be done for the rest of health care, and in May 1991, while walking beside a tributary of the Thames, Iain Chalmers conceived the idea of the Cochrane Centre. It began in 1992 and from the beginning was intended as an international program, which, because of the immensity of the task of reviewing and assessing the entire literature on all interventions, needed to be based on the efforts of well-trained volunteers. The Cochrane Collaboration began in 1993 and has grown to include champions of EBM across the world.
Evidence-Based Medicine: An Oral History is now available free for all to see and learn about the origins of this movement. The video features EBM leaders’ perspectives on the past, present, and future of EBM, along with personal reflections of clinical and patient encounters and shared decision making in the context of EBM. The video makes clear that much has been achieved, but that much remains to be done.
>> These individuals are pioneers, innovators in medicine.
>> They've ushered in an era of evidence and made scientific research an essential basis of medical practice.
[ music ]
>> [Howard Bauchner:] Hello, I'm Howard Bauchner, Editor in Chief of JAMA and the JAMA Network.
>> [Fiona Godlee:] And I'm Fiona Godlee, Editor in Chief of the BMJ.
>> [Howard Bauchner:] We hope you enjoy these inspiring stories.
[ music ]
>> [Richard Smith:] So Gordon, how did you come to evidence based medicine?
>> [Gordon Guyatt:] Well after doing two years of post-graduate training in medicine in Toronto, I came back to McMaster and found Brian Haynes and Dave Sackett there. And got very interested in what they were doing. Ended up in the health research methodology master's program and became then skilled in what was critical appraisal and what Dave Sackett then called bringing critical appraisal to the bedside. And as we were doing this in clinical practice we found more and more that we were doing a different form of medical practice. And in 1990, I took over the internal medicine residency program to run the program with the notion that I wanted to train residents in this new way of doing medical practice. And I wanted to attract residents who were interested. And I wanted to advertise our program as one that did this. So we were doing what? Needed a name. And the first, my first attempt at coming up with a name for it was scientific medicine. And in a meeting where I was introduced by the department chair to the members of the department of medicine, the basic scientists were so enraged that I was calling what we were doing scientific medicine that I thought, back to the drawing board. And the second notion of what we might call it was evidence based medicine. And boy did that turn out to be a good choice.
>> [Richard Smith:] So you've never regretted it?
>> [Gordon Guyatt:] Of course not.
>> [Richard Smith:] Teddy [assumed spelling], how did you get to evidence based medicine?
>> [Kay Dickersin:] Well I came through the route of experimental science, cell biology. And I then joined the doctoral program in clinical trials at Johns Hopkins, where Kirt Minard [assumed spelling] was my mentor. And was looking for a thesis project. Talked to Tom Chalmers who was in Boston doing a sabbatical at that time. And Tom was a real trialist and really ahead of his time in everything he was doing. He had a grant at that time from the National Library of Medicine and was looking at the literature and even thinking about and teaching a class actually, a seminar on meta-analysis at Harvard. And so I said well what do you think I should do my thesis on? I really am interested in registration of trials. And he said oh no, registration of perinatal trials had been done by a guy named Iain Chalmers. And he showed me this letter in The Lancet. And he…
>> [Richard Smith:] When are we talking about here?
>> [Kay Dickersin:] Oh sorry yeah, that was about 1983 I'd say or '84. And, and so he said well later on that year in the summer he said why don't you go over and talk to Iain Chalmers and see about doing a project. Maybe he'd be interested. And I wrote a letter to Iain. I said hey you have your register of perinatal trials. Have you considered unpublished trials? Is that something that you're working on? And Iain said come on over, let's talk about it. Let's write a grant, which we did, to try to find out how many unpublished trials in the perinatal field there were that would contribute to the register that he had with Murray Enkin and Mark Kurza [assumed spelling]. And that was the beginning for me.
>> [Richard Smith:] And how many trials did you find were unpublished at that time?
>> [Kay Dickersin:] Well the study that came out of that really found, we looked at the Johns Hopkins School of Public Health and the Johns Hopkins School of Medicine, and probably about a third were unpublished at the School of Public Health and probably about 20 percent at the School of Medicine. So, that was a high publication rate compared to what we know now from studies done by many, many different people.
>> [Richard Smith:] Did that, did that figure shock you or surprise you?
>> [Kay Dickersin:] I don't know if it did by that time because I'd done so many interviews. I think at that time, because I was working on my doctoral thesis, it was those interviews that made a big difference. I'd call people and ask them about publishing and say why didn't you publish? And the people who were working on theses said because I was sick of it. And I was sick of it. So, that really made an impact on me. But I think that was a very interesting finding, as to why people weren't publishing. Because they were sick of it. Because they just, you know, they moved on to other things. The findings weren't interesting. And that was more interesting than the percent I think.
>> [Richard Smith:] And the editors weren't really the problem?
>> [Kay Dickersin:] They weren't really the problem. They were a very tiny fraction of the problem. It's really the non-authors who were the problem, that they hadn't bothered to submit.
>> [Richard Smith:] But the editors still tend to get blamed. So onto an editor.
>> [Drummond Rennie:] Of course they get blamed. And unfairly, as Teddy has told you. I had a piece of luck. I got severely injured climbing. I was then unable to do my research at very high altitudes. I was a physiologist and had to change my career. I went to the New England Journal where I was then the sole deputy editor. And in November of 1977, I got a manuscript from a guy, Tom Chalmers, who you've just heard about, and this entranced me. It was the first meta-analysis I'd ever read. And it seemed to me to solve a huge number of problems at one time. This was a way, a logical way of dealing with the medical literature. Either you wanted to throw away everything of the past, or you used this system. That was the first. This impressed me also because it was dealing with a practical patient problem that I'd heard argued constantly all the way through medical school, internship, residency in London and so on and then again in the States. And that was to give anticoagulants or not after myocardial infarction? And this showed, this meta-analysis showed good idea. And proved it, or seemed to, to me. At the same time, in this change, I met Dave Sackett. Dave Sackett came from McMaster and he came to see the editor and the other editor, me, the little editor. And he came to talk about a series with the editor. I was very impressed by Dave and I kept in contact with him in subsequent years. I was exceedingly impressed by his way of thinking. This led, when I moved to JAMA, to an invitation from him. I went to McMaster. And one way or another met everybody, and from that came a long series of articles, the Users' Guides series with Gord Guyatt. And then the rational clinical exam, using features of the examination and the history taking and regarding them as clinical tests with operational characteristics that you could define.
>> [Richard Smith:] And the New England Journal published that Tom Chalmers meta-analysis?
>> [Drummond Rennie:] Yes indeed it did.
>> [Richard Smith:] But am I right to think it became a bit snotty about, about systematic reviews and meta-analyses subsequently?
>> [Drummond Rennie:] Well the facts are that. The facts are that and in fact I've discussed this with a senior statistical editor at the time. A great friend of mine, John Baylor [assumed spelling], and his problem was you're never comparing apples with apples if you do a meta-analysis. To which I would say, you've, if you, unless you junk the entire literature and only rely on one trial, you have to have a little accommodation there and you do the very best you can.
>> [Richard Smith:] So Brian, how did you get to evidence based medicine?
>> [Brian Haynes:] When I was a second year medical student at the University of Alberta, we had a lecture by [inaudible] psychiatry on Freud's theories. At the end of that session I asked what's the evidence that Freud's theories are true? And the person who was doing the lecture sort of broke from his cameo role and said well I don't really think they are true. There's no evidence to support them. But the chair of the department's a Freudian and he asked me to give this lecture. And I just had this sudden realization: how much of my medical training to that point had been based on theories that were not supported by facts? It caused me to stew for a while and then I went to the University of Toronto, Toronto General Hospital, for my residency training. Because thinking that the truth might not be in Edmonton but it could be in Toronto. And of course they always purported to have a lock on truth. The only difference I could see between the two places was that in Toronto, if you asked them what's the evidence, they'd get mad at you. So I stewed about that some more. And I was fortunate enough in Toronto that Dave Sackett had just started the Department of Clinical Epidemiology down the road at McMaster in Hamilton, Ontario. And one of the people, Jack Laidlaw, who subsequently became the Dean at McMaster but was at Toronto at the time, invited Dave Sackett to give a little talk on Is Healthcare Researchable, which is exactly what I wanted to know. I certainly didn't know what to do about the situation of figuring out what things worked and what didn't work. And I figured I needed to get some methodologic training. And Dave's talk was just right on target, showing examples where you could get to a credible answer for any range of medical problems. So I came over to McMaster after that, went through their graduate program.
Went away again and came back on the faculty there to work with Dave on the original series on Critical Appraisal of the Medical Literature and the initial teaching that we did on that. And recruiting young squirts like Gord Guyatt, etc.
>> [Richard Smith:] I can't help but imagine there must have been a little glow of pleasure for each of your professors as they see you sitting there in the audience wondering what you're going to ask next. So Paul, how did you get to it?
>> [Paul Glasziou:] Well I had a somewhat similar experience to Brian's. When we were in medical school, I wondered what the basis of making decisions was, and started to see these disagreements between the physicians saying one thing, surgeons saying another for the same condition, or even two physicians saying different things. But I couldn't work out how to resolve that. Did medical practice for a while but became a bit disenchanted by not finding this basis. And moved into epidemiology and clinical trials. Worked with two guys in Sydney, [inaudible] who'd done work at McMaster and John Simms [assumed spelling] who'd been at Harvard where Tom Chalmers was. And done some early work on publication. [Inaudible] taught me about clinical trials and publication bias and meta-analyses. But it wasn't until Dave Sackett came out to Sydney at [inaudible] invitation and did a workshop that I really decided to become very interested in this evidence practice gap, and the fact that there was a basis for making decisions, the evidence, but it wasn't being used in practice, and that you needed to work in both camps to do that. So I actually went and retrained. I did emergency department work for a while, on Friday evenings in a busy emergency clinic. But eventually retrained as a general practitioner so that I could see how it was, trying to get it from the researcher's end. So that was incredibly useful, and inspired by Dave because he'd done a very similar thing to be able to work at both ends at once. To see how research is unusable for the practitioner and how to make it so. And also how to make researchers do the right sort of research that's going to be useful for clinical practice. And that sort of bridge people, I think, are absolutely crucial. We need more of those in medicine. So anyway, after that a lot of work in the area, and that culminated in moving to Oxford and taking up the Centre for Evidence-Based Medicine after Dave Sackett had left.
>> [Richard Smith:] If you want to be serious about evidence based medicine, do you have to be an epidemiologist in some way?
>> [Paul Glasziou:] No, but I think for the person just wanting to practice it, there's a modest depth that you need to have practiced [inaudible] through medical school so that you're skilled at it. It's like being shown how to use a stethoscope once, you know, it's not enough. You've actually got to listen to a lot of hearts and become skilled at doing it. The same thing is true here, but you don't have to do a full-blown PhD or master's program in how to use a stethoscope. But I think if you want to teach it and move the boundaries of evidence based medicine, then I think much greater depth is needed. And we'll probably come on to discuss the sort of evolution that's occurring in evidence based practice. There's still a lot of work to do.
>> [Richard Smith:] Right, well you've heard lots of references to Dave Sackett. So I went to the Kilgore Trout Research Institute. Those of you who are Vonnegut fans will recognize Kilgore Trout. And I interviewed Dave for about an hour and three quarters. And I think we will be, you know, putting that together into something that everybody will be able to see eventually. But one of the questions I asked him, he talked about how they'd first got into critical appraisal, as Gordon was saying. And I said what's the difference between critical appraisal and evidence based medicine? And this is what Dave said. So, how would you say evidence based medicine is different from critical appraisal?
>> [Dave Sackett:] It goes beyond it. In other words, what it does is it, it integrates the science and the literature with your best clinical skills. So you have to get the diagnosis right.
>> [Richard Smith:] Right.
>> [Dave Sackett:] But also incorporates patient values, you know, so someone like Sharon Strauss for example developed a bedside strategy for very quickly being able to tell a patient the likelihood we're going to help versus harm you if we follow down this path, using patient values. The typical case being someone with nonvalvular atrial fibrillation at risk of a stroke, a small risk, but terrible if they had the stroke; on warfarin, pretty safe, but you could have a bleed. And so we would get patients to weigh these two outcomes. A bleed, yeah that's bad, but you're going to be over it in a few weeks. A stroke, that's forever. And most of them would tend to see a stroke as about four or five times as bad as a bleed.
>> [Richard Smith:] Yeah.
>> [Dave Sackett:] We could integrate that with the number needed to treat and the number needed to harm to come out with we're about 11 times as likely to help versus harm you with this.
>> [Richard Smith:] So Gordon, Dave put a huge emphasis there on, you know, patient values. But I think it's fair to say that when you first used the phrase evidence based medicine there wasn't such an emphasis on patient values.
>> [Gordon Guyatt:] Yeah if you look, if you look at the first publication, such as the 1992 JAMA publication that really introduced it to the broader world, you have to look hard to find any reference to it at all. And so at the beginning, the original series had been a readers' guide. We moved, in the series championed by Drummond, to a users' guide, getting physicians to use it in practice. And as we were doing this, and as we were practicing clinically with the emphasis of okay, we're considering course of action A versus B, what is going to happen to the outcomes of interest? And we would invariably find there are some good things and there are some bad things. And then how do you make the decision? And having done this repeatedly, it became evident that every time there were value and preference judgments going on. And then the next thing is whose values and preferences should it be, and it becomes pretty evident as soon as you begin to think about it that it should be the patient's values and preferences. And so over the next five years that became more and more central, and really now, for at least 15 years, values and preferences have been a core principle of evidence based medicine. We talk about it as something of an irony. One of the core principles of evidence based medicine is evidence by itself never tells you what to do. It's always evidence in the context of values and preferences, which as Dave was pointing out has been a big emphasis.
>> [Richard Smith:] But I think it's fair to say that initially there was quite a backlash against evidence based medicine. It's cookbook medicine, it's all dreamt up by managers and insurance companies. It's all about statisticians and horrible people like that. It's not patient centered. It's all about randomized trials. It's too reductionist. Tell us about your experience of that backlash. Did you experience it Drummond?
>> [Drummond Rennie:] Well of course, absolutely. And it seems to me so obvious, and it is human: if you don't have the law on your side and you don't have the facts on your side, you attack, and you attack particularly in a mindless passion. And there's a tremendous amount of that going on.
>> [Richard Smith:] Well also, I mean there was this sort of implication that everything that had gone before had not really paid any attention to evidence at all, which was clearly absurd.
>> [Drummond Rennie:] Well of course it's absurd. I mean Tom Chalmers meta-analysis was a perfect example of paying great attention to what had gone on before far more actually than people who'd just looked at the latest trial.
>> [Richard Smith:] So a question I want to ask you Brian, I mean, there's a sense that actually this is just really hard to do, this evidence based medicine, especially the way it came about in its first incarnation, you know, every patient you're going to kind of respond to by going and searching the literature, etc. That was never going to work, was it?
>> [Brian Haynes:] That was definitely part of this antibody response that people had to evidence based medicine, that it was too much work. That even if we agreed with you in principle and practice, there's no way we could possibly do it. And right from the start we started to figure out other ways that we could simplify this process. Can we provide a way of defining which studies are more important? Can we provide those as information inputs to textbooks, to journals, to guidelines, to decision aids, computerized decision [inaudible] and so on? That's why I created the Health Information Research Unit, to try to figure out how to make this easy enough that people could actually do it. And we're still in that evolution. It's not perfected. But the resources that are available now, so that people don't have to do the primary critical appraisal themselves, are much, much better than they ever have been before. Whether that will be enough to move to the next step where that can actually change their behavior in a timely fashion, that's the forefront right now, the frontier which we're trying to deal with in research endeavors called knowledge translation, implementation science, comparative effectiveness research and so on.
>> [Richard Smith:] Gordon, you're teaching evidence based medicine and you've kind of changed the way you teach it?
>> [Gordon Guyatt:] We haven't changed the way we teach it but our targets have changed. So when I took over the internal medicine residency program in 1990, I had it in my head that at the end of residency training with us, individuals would be able to pick up a randomized trial, critically appraise it, assess it, and understand the results very well. And they'd be able to do that for studies of prognosis or diagnosis or systematic reviews and meta-analysis. And at the end of 7 years of residency training I had learned that very few of my graduates were actually able to do that. And so the target became different. The target became to have people appreciate the principles of what makes trustworthy evidence and not trustworthy evidence, and be able to go to secondary sources of information that produced summaries of evidence that were trustworthy and identified the confidence that one could put into those. And we published, in I believe 2002, a paper in the BMJ in which we identified that every clinician who is practicing evidence based medicine doesn't have to be an evidence based medicine expert. They need to understand the principles and they need to identify the appropriate resources that present processed evidence to them in a way that they can apply it to clinical care, understanding the underlying confidence in the evidence. And some of my colleagues, as we were talking about this on email beforehand, referred to this as evidence based capitulation, but in fact it is a realistic way, and that is what evidence based practice means for most clinicians. And as Brian points out, the strategy is to get to the highest level of easily accessed, easily understood, preprocessed evidence.
>> [Richard Smith:] Teddy, you were telling us what I think was a very interesting story about your experience of teaching medical students, and, you know, finding it kind of hard. Perhaps you could tell that story. I think it'll interest people.
>> [Kay Dickersin:] That was another interesting story, from when I was teaching a medical school class. I taught evidence-based health care in medical school for 16 years. And we had gone through in the class screening, epidemiology, everything you need to know to be able to critically assess the literature. And we had an empty spot, I forget why. And it was right around the time the mammography guidelines for younger women came out, saying that really there wasn't enough evidence for younger women to have mammography. So we had a debate that day, and I brought in the head of the breast service from the hospital at the university where I was teaching and the president of the National Breast Cancer Coalition, who was very evidence based. The head of the breast service who came in, however, was wearing a white coat and a bowtie, he kept referring to Harvard where he had been before, he had a stethoscope, and it really established his authority as a doctor. Anyway, I sat in the front row. I couldn't see the audience behind me, but the whole time I was thinking this was so fantastic; everything the two of them were saying was just what the students had learned. The consumer advocate was right on the money with the evidence and, you know, lead time bias, etc., and he was saying, well, we know mammography is useful because before I was head of the breast service all these women died, and after I came in and instituted mammography everybody lived. And I said, okay, they've got to get this. And then it was clear when it was finished that he had won the debate. And what I thought was perfect was wrong. There was a historian also there in the front row, and she said to me, and it woke me up, she said this just shows the importance of authority. They each established their authority in different ways, and his was successful. He said I'm head of the breast service, he had on a white coat, I went to Harvard, you know, he had a lot of Harvard stories.
And she came in as a lawyer and a consumer advocate and did not establish her authority in a way that made sense to the medical students. And so that's one of the reasons I've raised issues about experts and so forth. I've become very interested in authority and the transmission of the evidence.
>> [Richard Smith:] Did you not have any mini Brian Haynes in the audience, the guy who challenges?
>> [Kay Dickersin:] No, we didn't actually. The students were quite good. As a matter of fact, there were a bunch of advocates there, and I heard from one of them sitting behind me that one of the students was texting or whatever. And she went, pay attention, what's being said is correct. So there were people in the audience who were trying to get them to listen to the good evidence, but it didn't work.
>> [Richard Smith:] So does that story make you despair Paul?
>> [Paul Glasziou:] It is slightly despairing. I think we need more people like Brian who go up to the Freudian and ask what the evidence is. But Brian grew up that way; I don't know what it is that made him that way. We need to know that, though, because [inaudible] we need all of Teddy's class to be more skeptical and to think about the rules of evidence rather than being swayed by authority, and to realize that that's what persuaded them. In fact it would have been good to have extended that class and get the students to think about what it was that persuaded them one way or the other. Was it an evidence thing, or was it something to do with authority? And we don't know how to do that at the moment, at least I don't.
>> I have an alternative solution to the problem, which I'm trying to work on, which is to get all the authorities to be evidence based. And I'm not being in the least facetious. So what we're trying to do now, for instance, is educate all the folks who are on guideline panels. And that's a big initiative within evidence-based medicine.
>> [Paul Glasziou:] And I think at the same time, unless you've got an educated public, both the clinicians and perhaps the wider public as well, they won't appreciate that they need evidence-based authority.
>> I completely agree that we…
>> [Richard Smith:] So wait a minute, how are you going to educate the public when you can't educate Harvard professors?
>> [Paul Glasziou:] I'm not; I am actually optimistic about this, Richard, because when I was in Oxford I was getting depressed. I thought the evidence-based movement had gotten nowhere. And Tim Lancaster, who's one of the medical teachers there and who ran the tobacco and addiction Cochrane review group, said to me, look Paul, the conversation has changed. Since the term evidence-based medicine was invented there's been this big shift. You now can't talk about a policy issue like that without bringing in the evidence, whereas before it was conceivable that you could have the conversation based just on who said what. So the conversation has certainly changed. There's been a big shift there. That's a sort of superficial change though, if you like; we actually need a deeper understanding and skepticism built into our medical students so that they don't obey the white coat from Harvard.
>> [Richard Smith:] So let me share two things that really bother me these days about evidence-based medicine. One is that increasingly healthcare is about people with multiple problems, not single problems, and yet the evidence about those people with multiple problems is often not very good. Plus we realize, as Teddy was saying, ever more that so much of the evidence just isn't published, and what is published is probably biased. So we're trying to apply biased, irrelevant evidence to highly complex problems. And this seems a little tricky to me.
>> [Paul Glasziou:] As a general practitioner I think I'll have an attempt at that. First of all, you're right that some patients are excluded from trials based on comorbidities, but not the common comorbidities. If you look at the statin or the antihypertensive trials, the asthmatics, the depressives, etc., people with various conditions were generally included in those trials. A few things are excluded, but not most of the comorbidities. So I think it's a misconception that trials only cover people with a single condition, and I don't think the evidence is the problem in that case. I think where the problem comes in is trying to deal with patients with multiple conditions, and a lot of it's about not the applicability but the prioritization. There's a lovely paper in JAMA by Cynthia Boyd where she took a typical, I think it was a 55-year-old, woman with what seemed like a reasonable set of conditions and said, well, what would the guidelines recommend for this particular patient? And you come up with this very extensive list that both the clinician and the patient had to do. You read the list and you say this is not possible; you can't deal with all of these things. So there are two problems there. One is that the guidelines themselves often focus on single conditions, and that makes it very difficult for me as a general practitioner dealing with somebody who's actually got five conditions to prioritize things. The second problem is how do we go about that prioritization? Clearly we can't listen to all of those guidelines. We have to have the conversation with the patient about what's important to them. What are their priorities? Get an idea of that, and then think about how the evidence that I know about managing each of those things can help.
And that's part of this shared decision making process, which I think is strengthening within evidence-based medicine; there is a very strong dialogue going on between the evidence-based medicine group and the shared decision making groups, with Victor Montori, one of Gordon's students, sort of leading the way there.
>> [Richard Smith:] Yeah, we're going to come on to the future in a minute. Teddy, what about this problem that a lot of the evidence just isn't there?
>> [Kay Dickersin:] Well, you're absolutely right. There's failure to publish, and so we don't know where the evidence is sitting. It could be sitting in somebody's file drawer; it could be a conscious decision not to publish. There could be outcomes that are missing; for example, perhaps just the outcomes that made the drug or the intervention look positive were published. So there's a lot we don't know about. And I think it's increasingly worrisome, all that's sitting in file drawers and not published. And so it does raise the questions: what is the evidence? Is there some evidence we don't know about? Can we rely on what we do know about? And those two things make it worrisome, no question about it.
>> [Richard Smith:] So that's the problem. What's the solution?
>> [Drummond Rennie:] Take the money out of the system. Give a contract to Northwestern and say, okay, you win this contract to test drug A against drug B. By the way, that might actually be a relevant and interesting test to do. You have to get the money out of the system to make the system credible. Because the system is incredible, in the one sense of that term.
>> [Richard Smith:] Is this going to happen Drummond?
>> [Drummond Rennie:] I don't know. The other thing is that Cochrane has to go on, for example, because one of the chief things that Cochrane has shown, in a systematic and enormous way, is for how many conditions there is no evidence whatsoever for any treatment either way, or no evidence for any effective treatment. Now that may be depressing, but it's realistic. And that should be an enormous opportunity for everybody out here in this room.
>> [Richard Smith:] Okay. Paul you wanted to make a point.
>> [Paul Glasziou:] Well, I was going to say I like Drummond's idea. It's happening to a small extent in the UK within the National Institute for Health Research. The health technology assessment program commissions trials on important uncertainties that come up through Cochrane reviews or through the [inaudible] guidelines process. They've got about 100 million pounds a year being spent on this; they commission the trials. This has been going on for about 15 years now, I think. Their publication rate at the moment stands at 98 percent. If anyone can beat that, I'd dearly like to know about it. But they have a number of tricks for doing that. One is to have a journal that everyone can publish in, the HTA program's technology reports. They also withhold 10 percent of the funding until you've actually not just published something or given in the report but made it publicly available; you can't get that last 10 percent. But Rory Mill [assumed spelling], who runs that program, tells me that that's a nice stick, or carrot if you like, to hold over people, but it's actually the checking with people and problem solving, going, you know, why haven't you published so far? You've got the carrot and stick there, but it's the monitoring and making sure that it actually gets published which makes the big difference. And at the moment most funders don't: they give the money out and then they don't seem to care whether the main results get published. It's not just pharmaceutical companies sitting on this stuff and hiding it; it's also, as Teddy was saying, other people losing interest in their trial after they've finished it and then not publishing. So it happens almost as much in the public sector as it does in the commercial sector.
>> [Richard Smith:] Good. Well, before we come to you, we're just going to have a little conversation about the future. I asked both Iain Chalmers and Muir Gray to look forward and think about what was important to them about the future. So let's watch those clips, and then I'll ask the panel to say something about what they see in the future.
>> [Iain Chalmers:] I still think that it's going to be important for it to promote that idea of the need to find out what we know already, because that's actually where I started out, in Gaza. What was known already was not available to me, and as a consequence my patients suffered. So I come full circle back to that. And we haven't got anywhere near it yet. Things have improved, but it's still a long way away.
>> [Muir Gray:] The paradigm has shifted now, and the first big job is over. People now know that when they make a proposition, or, as the fancy philosophers [inaudible], a statement, to ask: what is the basis of that statement? Is it based on evidence? And if so, what evidence, and how secure are you in that? So we are now in a world in which people are clear that knowledge has a quality as well as a quantity. So we can't relax, and work needs to go on to make the knowledge better; that's one thing. But I think there is also a problem in healthcare that people have quite a limited attention span, I mean, clever people, powerful people. So they think, oh, we've done evidence-based medicine. So personalized is now a term I'm using more, because actually, reading the original definition we put in the BMJ, you know, it's not only evidence; it's the thoughtful identification and compassionate use of individual patients' predicaments, rights and preferences in making a clinical decision.
>> [Richard Smith:] And genes.
>> [Muir Gray:] Yeah. Now we come to what's called stratified medicine. But the same question applies: what is the evidence for what you're claiming [inaudible]? Now I'm still on the case; we're not finished yet. It's very clear now that this is the century of the patient. The last century was the century of the doctor; this is the century of the patient. And we now have the knowledge. We now have the technology to deliver it.
>> [Richard Smith:] So Gordon, what do you see for the future? There's Iain saying we still don't know what we know, and there's Muir saying this is the century of the patient, forget doctors.
>> [Gordon Guyatt:] Well, in terms of summarizing evidence, the industry goes on. Cochrane may eventually get to its goal of covering all the randomized trials and move on to other sorts of studies. So I think we've gone a long way as far as that's concerned. The challenge now is getting the information out to people and being more effective, as we've talked about, with preprocessed information.
>> [Richard Smith:] And you feel optimistic?
>> [Gordon Guyatt:] Oh, I feel extremely optimistic. As Muir Gray said at the end, we now have the technology, we have the knowledge, and we've learned a lot very recently. I was at the guidelines international meeting with [inaudible] and his colleagues, where we were sitting down with the people who know about the evidence, the clinicians who know about the workflow, the information technology people who know how to build the electronic systems, and the designers who know how to make it work for patients. And as I say, before, we would just produce these things and they looked good to us. We're now doing tons of user testing to ensure that the products we put out are presented in a way that people can use them. So I'm very optimistic.
>> [Richard Smith:] Okay, Teddy.
>> [Kay Dickersin:] Well, I have some wishes for how the future would go that may be a little far out, so I'll give the far-out ones first, because I really do wish they'd happen. First of all, I think we should change the academic reward system, because the way it is now we're publishing a lot of junk. People are doing studies that aren't very good, and there are all sorts of rewards in there that keep us from being evidence based in what we produce and use. The second thing I'd like to see is the medical literature cleaned up: fewer journals, publishing only good research. And Drummond thinks I'm crazy, because you don't have good research unless you have bad research, but I'd like to have just good research published. And then the third thing, which I think is more doable, is related to learning more about safety. I don't think we've been very good about learning about safety and the harms of our interventions. And one of the reasons I don't have confidence in the harms literature is that we haven't done the methodologic research, which all of us and all of you and others in the world have done for trials, to say what's a good trial, what's reporting bias, how do you find a good trial. We haven't really done any of that work for observational studies. Brian's done a little bit about how to find them. But there's far less energy going into observational research about harms, for example. And I'd like to see a real investment there, because it doesn't make sense that we're just looking at effectiveness and not at harms with the same emphasis.
>> [Richard Smith:] We're only getting half the picture. There's a challenge to people there. So, Drummond?
>> [Drummond Rennie:] How it works is like this. What should happen is that we only publish the really good stuff. But you can show on economic principles, [inaudible] and others, that the really good stuff cannot exist without this vast mass of other not-quite-such-good scientists doing not-quite-such-good work, forming the community of science beavering away at it. And so the system is full of noise, and the noise you're hearing is of scientists working together, educating themselves and trying to get better. I think that's a better way of looking at it, as opposed to saying, as I used to say, who wrote this junk?
>> [Richard Smith:] Hurrah for the mediocre. So, Brian?
>> [Brian Haynes:] Well, my mission for the last 10, 15 years has been to try to provide a cool, clear stream of high-quality evidence that can be used by various resources, to make sure that they're feeding on work that they don't have to create themselves. There's no point in everybody trying to do this. So we feed into textbooks. We feed into guidelines. We feed into systematic reviews, clinical decision aids. We try to make it so cheap to access current best evidence that people will have it in hand when they create their resources. I think that's not enough, and I think that the next phase of things has to deal with some knotty issues.
>> [Richard Smith:] Naughty, not knotty?
>> [Brian Haynes:] Not naughty: k-n-o-t-t-y issues. They might be naughty as well.
>> [Richard Smith:] Well then [inaudible] attention to them.
>> [Brian Haynes:] They relate to the fact that there are no teeth in continuing education programs around the world, so that practitioners don't have to learn new things, or can't figure out ways to do new things, after medical school, which is why we still have a generation effect in new evidence getting into practice: we have to wait for the old doctors to die off. And second, we have a problem at the patient level with behavior change, because if patients are going to take advantage of current best treatments, they need to change the way they're doing things, and we need better ways to help them do that than we have at present. So I see a big research agenda here if we're ever going to get the traction we want. The areas are called knowledge translation research and implementation science; those are the areas we need to invest in now. And we're starting to see the payoff from lobbying for that through the funding agencies: we're now getting much more funding into those areas, and we've got a new crop of researchers who have been trained in how to do that kind of complex research. Unfortunately, though, we'll have to wait for another era to pass before we get that figured out.
>> [Richard Smith:] Where there's death, there's hope.
>> [Brian Haynes:] Right, so [inaudible] are in place but there's still work to be done in terms of understanding the barriers to implementation.
>> [Richard Smith:] Good, thank you. Paul?
>> [Paul Glasziou:] Well, first of all, Richard, I don't think there's any one thing. I think we actually need a systems approach, where we map out the whole process: a clinical uncertainty arises, a piece of research gets done, synthesized, published, disseminated, implemented, and then what happens in the consulting room and beyond it, what the patient actually does with it. Mapping out that entire pathway and understanding what goes right and wrong within it is essential if we're going to improve things. And that includes everything that everyone's said along here. So I've got a wish list of two things that I think are crucial within that, but it's the whole pathway that needs fixing. My first wish concerns the fact that we're too slow at doing systematic reviews, the process of integrating research. It takes about two years to do a systematic review, and by that time more trials have been published; it's just too slow. We need to get that down to two weeks, or two days, or even two minutes. And it's feasible if we work on it like they've done with the human genome project, which took, I don't know, I've forgotten how many years for that first human genome sequencing, but now it's just much easier to do and the costs have been plummeting. We need to invest the same effort, funds, technology and brains into working out how to do the same thing with integrating our research literature, so that we can do those systematic reviews much more rapidly, and it is feasible. That's the first wish, which I think underlies this whole process. The second, for me, is that a lot of the work that's been done has been on pharmaceuticals. I'd like to see the same amount of work go into non-pharmaceutical interventions like, you know, exercise for heart failure or the Mediterranean diet, etc.
We don't have those things as readily available, and so we're working on a thing called the handbook of non-drug interventions, which is a [inaudible] of non-pharmaceuticals, a non-[inaudible] if you like. And we need that for nutrition, for physiotherapy, for the things that I do in primary care. But again it needs to be underpinned by good evidence. And there's an extra layer of difficulty: getting the details of those interventions, getting sufficient information to be able to implement them in practice, is much harder than for pharmaceuticals. But the effects can be just as large, if not larger, for some of the non-pharmaceutical interventions.
>> [Richard Smith:] Hear, hear. Good. Well, I want to thank all of our speakers. I think it's been extraordinarily interesting. I thought I knew a fair bit about this, but actually I've learned a lot. We've looked at the origins of evidence-based medicine, but what I think has become very clear at the end here is just what a long way we have to go. So if you're a young student, wow, what a prospect you've got. Thank you, everybody.