Letters for rainy days

 

For years, I’ve kept a file.  It’s in the bottom drawer of my filing cabinet and it’s labelled ‘letters for rainy days’.  In it I keep all the nice ones.  The thank you notes, the occasional letter of recommendation, the acceptances, all the ones that start, ‘I am pleased to…’.  I don’t look at it very often but I always add to it, and, most importantly, I know it’s there.  It’s the tangible evidence, the other column on my balance sheet, the one to counter all those niggling self-doubts, the lack of confidence and the feeling that I’m a fraud, an imposter.

Despite the name it often goes by, the so-called ‘Imposter Syndrome’ is not a medical condition.  It is not a form of mental illness, nor something that needs to be cured.  Really, it is an attitude, a way of thinking about how we measure up in the world, and it is as common as having brown eyes.

Have you ever sat in a meeting and looked around the table and heard an inner voice say what are you doing here? How did you blag your way into this company? You know they’re going to find you out, don’t you?

Have you ever given a presentation of your work after a poor night’s sleep, unable to settle because of the anxiety that they’ll all know more than you, they’ll ask you questions you don’t even understand, they’ll realise you’re just a fraud?

Have you ever got the job and told yourself, they must have been desperate, or there must have been a mistake, or simply put it all down to dumb luck?

If you recognise yourself in any of these scenarios, welcome to the club.  And it’s a very big club.  Surveys suggest that around 70 per cent of people think like this at some point.  As a phenomenon, it was formally described in 1978 by the American clinical psychologists Pauline Clance and Suzanne Imes, but of course it’s been around a lot longer than that and crops up in unlikely places.

For example, the Academy Award winning actress, Jodie Foster said:

When I won the Oscar, I thought it was a fluke. I thought everybody would find out, and they’d take it back. They’d come to my house, knocking on the door, ‘Excuse me, we meant to give that to someone else. That was going to Meryl Streep.’

And interestingly, on another occasion, Meryl Streep, Foster’s role model and the actress she thought by rights should win all the awards, expressed her very own self-doubt:

You think, “Why would anyone want to see me again in a movie? And I don’t know how to act anyway, so why am I doing this?”

But in case you think it’s just a trait of hugely talented and successful Hollywood actresses, look at what one scientist had to say when reflecting on his professional achievements:

The exaggerated esteem in which my lifework is held makes me very ill at ease. I feel compelled to think of myself as an involuntary swindler.

That was Albert Einstein, the most celebrated physicist of the last 500 years.

So what do these people have in common? You might think not much, but they’re all people, they’re all talented, they’re all very hard working and, most importantly, they are all successful.  Real frauds and true incompetents rarely, if ever, feel like imposters.  The former don’t care enough and the latter achieve nothing that might make them feel undeserving.

But perhaps it’s not all bad.  Imposterism might just be a manifestation of that combination of ability and self-doubt that helps us achieve great things, or, as the lead singer of Coldplay, Chris Martin, puts it, some paranoia with the arrogance.  He says, ‘If we were all paranoia, we’d never leave the house,’ but, ‘if we were all arrogance, no one would want us to leave the house.’  And that goes for performers and scientists, for graduate students and bricklayers, in fact for anyone who successfully builds, creates or achieves anything.

So, why do we do this to ourselves?  It’s quite simple really and stems from the fact that the only mind we can access is our own.  We have an internal life — thoughts, feelings, emotions and insecurities — that only we can glimpse.  Of course, everyone else has exactly the same, but we can’t see inside their heads, we can only see the surface and often that looks completely at ease with the world.  As such, we naturally start to think that it’s just us who feel like this.  That we are the only ones in the room, or on the panel, or on the short list who really shouldn’t be there.  Everyone else is smart and assured, confident and deserving, but not us.  We have an internal balance sheet of all our little failures and screw-ups in one column and our achievements and successes in another, and we can see just how imbalanced it is. As one writer put it:

It’s a classic case of “comparing your insides with other people’s outsides”: you have access only to your own self-doubt, so you mistakenly conclude it’s more justified than anyone else’s.

My mother’s contribution to all this, long before the Imposter Phenomenon had a name, was her mantra, ‘You are no better than anyone else, but no one is better than you.’    And it helped me, the first person in my family to go to university, to weather some of the social storms ahead.  You cannot see inside other people’s minds, you cannot feel their self-doubt first hand or share their secret insecurities, but what you can do is realise, quite objectively,  that they all have them and that really they are no different from you.  And you can even let some of that arrogance temper the self-doubt. You know you’re not the best or most talented person in the room (because my mother would have told you so) but just how talented are the rest?

The former First Lady Michelle Obama admitted to feeling like an imposter when giving a speech at a London girls’ school, where her young audience was hanging on her every word.  She felt like a fraud being viewed as someone smart, someone with all the answers.  But her concerns were softened when she reflected on her own experiences.

‘Here is the secret,’ she said. ‘I have been at probably every powerful table that you can think of, I have worked at non-profits, I have been at foundations, I have worked in corporations, served on corporate boards, I have been at G-summits, I have sat in at the UN; they are not that smart.’

The emperors, as it turns out, really aren’t wearing any clothes.

While a realisation that you’re not alone in feeling like this is a first step, what more might you do?  Next time you feel like an imposter, catch yourself and change the narrative.  Ask yourself what you did to earn it, whatever the ‘it’ is.  Don’t dismiss the effort and the hard work that got you there; acknowledge it.  And don’t just put it down to luck.  We all benefit from luck and equally we all suffer because of it, but we probably give it too much credit.  ‘Chance favours the mind that is well prepared,’ said Louis Pasteur, or to put it more prosaically, you make your own luck.  It wasn’t luck that got you into college, or got you that degree.  It wasn’t luck that helped you make that discovery or get that job.  It wasn’t luck that made them award you the prize.  It was a very little bit of talent and a very large amount of hard work.

But there will still be rainy days, and on those you need to be reminded that you’re smarter, more accomplished, more deserving than you allow yourself to imagine.  And that’s why I have a file. Maybe you should have one too.

© Allan Gaw 2019


“Everyone here is clever”

In academia especially, you will be surrounded by people who are smarter than you, sometimes immeasurably so.  You will meet those who can make the connections and see the problem faster than you can articulate the question, and who then provide you with the answer — the complex, nuanced and irrefutably correct answer — before you can close the mouth that fell open at the shock of their insight.  There will be those who display encyclopaedic memories, those who can process information with disconcerting speed and those who are able to articulate their thoughts with remarkable fluency and formulate powerful arguments with apparent ease.  And of course there are those who can do all of these things simultaneously.

In such company, you may be tempted to feel discouraged, but don’t be.  Sometimes formidable intelligence, in whatever form it manifests itself, is accompanied by less attractive traits.  In academia, I have known very clever people who were also bullies, highly intelligent individuals who were also casual racists and misogynists, and alarmingly smart men and women who were also utter fools outside their own discipline.

And remember, you’re clever too, perhaps even clever enough to disconcert others with whom you work.  But that’s not the point.  The simple fact is that in academia everyone is clever, but not everyone is distinguished.  The question, therefore, is how you might distinguish yourself in such an environment.

You do your best — that should go without saying.  Putting the effort in is essential and very little was ever achieved in this business of discovery without graft.  Anyone who tells you otherwise doesn’t know what they’re talking about.  You have the basic talents that you have cultivated over a lifetime and amongst those will be the intellect that nature has granted you.  You may strive to be better informed, more experienced, more practised and adroit in your thinking, but it’s unlikely you will actually become more intelligent.  Why not seek distinction in other areas?

I think the greatest distinguishing feature between those I wished to work with in academia and those I grew to despise was something apparently simple and, to some, insignificant.  It was kindness.  This trait is given minimal attention in the starry constellation of academic skills and talents, but on reflection, across a lifetime working in universities, it now stands out for me as the most important thing of all.  Kindness manifests itself in many ways and is often dismissed as a weakness, a deficiency, even a feminine trait.  However, I should quickly say that the only people I have ever heard describe it as the latter have been women, and when doing so they were not being complimentary.  There is, in fact, nothing gender specific about kindness and it is most certainly not a form of weakness.  On the contrary, to be kind requires great strength, especially in some of the environments one finds oneself in at a university.  It may mean as little as listening actively to a colleague or as much as putting your job on the line to defend one.  It can be the little things that get us through: the advice, the support, the cup of coffee.  It might be the smile to cheer you in the morning or the shoulder to cry on at the end of a difficult day.  It is usually a sharing, a halving, a form of relief that makes us think we are not in this alone.

Academia is a competitive life, with colleagues pitting themselves against each other for promotion.  We fight for grants and wrestle for prestige.  In this sort of world, it is reasonable to question the place of kindness.  In other words, if the dogs are eating the dogs here, does kindness cut it?  Unequivocally, my answer is yes.  The fiercely intelligent researcher who is also known for their humanity, their honour and their ability to share is certainly the collaborator of choice compared with the merely clever.  And collaboration is the key here, for greatness across all the disciplines of academia is achieved not by individuals but by groups working together. People will work with you, no matter who you are, because they have to, but if they ever have a choice it is only natural they will prefer those they can trust and those who will be supportive.

Kindness for its own sake is a remarkable quality and one we should encourage, even inculcate in our junior colleagues. We might persuade them with arguments like the ones above that such behaviour will promote professionally useful links, but perhaps we should just be asking them to develop and display some basic human decency.  We should be asking them to be considerate of others and to think about the consequences of their actions.  We should promote an academic environment that truly values sharing. And finally, we should view kindness as the monumental strength that it is.

© Allan Gaw 2019


What’s in a name? Eponyms in medicine

There is a comet stitched into the heavens of the 11th-century Bayeux Tapestry.  It is now known to be a regular visitor to our skies, and while it had been observed many times by the ancients, it was the Astronomer Royal, Edmond Halley, who is believed to have first predicted its periodic return.  He did not live to see the comet reappear and have his calculations vindicated, but when it arrived on cue it was named after him in 1759.  An apostrophe secured the deal, and what goes around comes around; in Halley’s case, roughly every 76 years.

It is all too easy to be possessive.  Discovery often implies ownership, and those who first describe a disease, a phenomenon, or, in Halley’s case, a comet, have in the past been honoured not just with their name being applied, but with the deeds of ownership that come with an apostrophe S.  In the world of medicine, those such as Asperger, Duchenne, Burkitt, Graves and Addison, as well as many others, took possession of diseases from which they never suffered, but which they are credited as first describing.

Cushing, Crohn and Alzheimer are just three examples of very well-known medical eponyms.  Harvey Cushing was an American neurosurgeon who described what would become his eponymous disease of the pituitary in 1912.  Burrill Crohn was an American gastroenterologist who published details of patients with his inflammatory bowel disease in 1932.  And Alois Alzheimer was a German psychiatrist and neuropathologist who, in 1906, first described an ‘unusual disease of the cerebral cortex’ that led to the premature death of a patient in her mid-50s.

Of course, in medicine it isn’t just diseases that bear the names of the famous.  When it comes to examination, we have a whole medical dictionary of clinical signs named after their exponents, from Adie’s pupil to Beau’s lines and Osler’s nodes.  And then there are tests.  I am sure I am not alone in having long believed that the Apgar Score for assessing neonatal well-being was a clever acronym, only to discover to my embarrassment one day that we owe that particular one to Virginia Apgar, the American obstetric anaesthetist who devised the scoring system in the 1950s.

Apostrophes, however, do have a habit of disappearing over time and taking the ownership they signify with them.  Mr Charles Henry Harrod’s department store in Knightsbridge has lost its apostrophe, as has Mr John Boot’s chemist shop and, much more recently, Mr Tim Waterstone’s bookshop.  Possession evaporates with rebranding and the same is happening in medicine. Today, we are as likely to see eponymous disease names written either with or without the apostrophe or even without the additional letter S altogether.  For example, Crohn’s, Crohns and Crohn Disease have all become synonymous in the literature.

The main argument against the use of eponyms is that they are unhelpful for both clinician and student, telling us nothing of any clinical import about the disease or the test or the sign in question.  For example, unless you happen to be a 1930s baseball fan, amyotrophic lateral sclerosis tells you much more about the underlying pathology than its eponym, Lou Gehrig’s disease. That eponym, however, at least bears the name of a patient rather than a physician.

Indeed, a common criticism is that merely describing a disease that you have never suffered does not constitute ownership.  The corollary, however, seems equally quaint: just because you contract a particular disease, one that many others before you have also suffered, you can hardly take possession of it.  But perhaps we are being a little unfair to the physicians and scientists in question for they rarely, if ever, called their diseases after themselves.  To do so would have been more than unseemly, and the conferring of the eponym was usually left to others.

The issue of eponyms has been debated for many years, particularly in medicine.  A generation ago, a conference of the Canadian National Institutes of Health proposed: “The possessive use of an eponym should be discontinued, since the author neither had nor owned the disorder.”  More recently, both the WHO and the American Medical Association have argued for the elimination of possessive eponyms.

For some, however, the use of eponyms, with all its attendant problems, does add colour and perhaps a sense of history to medicine.  Perhaps we should honour those who have laid the foundations of our subject. But, if we are going to go to the bother of memorialising those who first described a disease then perhaps the least we can do is offer them the nicety of some punctuation to go with it.

© Allan Gaw 2019

A version of this article appeared in the MDDUS publication FYi in February 2019


‘Sinning against Science Itself’: Adolf Friedrich Nolde’s 1799 Code of Good Research Practice

Today, the name Adolf Friedrich Nolde is virtually unknown, even in the circles of medical historians.  A young German professor of medicine and midwifery of the late eighteenth and early nineteenth centuries, he was mourned and eulogized at the time of his early death, but quickly forgotten.

However, Nolde deserves to be remembered, and his work merits closer examination, especially because of his writings in the final years of the eighteenth century and his focus on the development of a scientific foundation for the study of drug treatments and the development of research ethics.

In his treatise, [Reminder of some of the necessary conditions for the critical appraisal of a drug], published in 1799, Nolde defines and enumerates a set of eight rules for the conduct of pharmacological research. In the first seven of these rules, Nolde highlights the need for the study of high-quality and “genuine and unadulterated” drugs that are “prescribed in an appropriate manner.”  It is his eighth rule, however, that merits the closest examination, for here he looks at the issue of research misconduct and the impact it may have on both scientific endeavour and patients’ well-being. He summarizes this rule as follows:

Rule 8. When announcing a new drug or recommending a known drug nothing at all should be omitted about anything that could have an influence on the correct assessment of the drug, and it would be shameful if observations were to be fabricated or distorted at the expense of the truth.

Nolde justifies the inclusion of such a rule by noting that, “unfortunately one sees many a result which has been recorded untruthfully,” and goes on to state that, “not everything which physicians publish under the promising titles of ‘Observations and Experiences’ can be taken at face value.” What might be seen as very much a twenty-first-century problem appears to have been a well-recognized phenomenon even in the eighteenth century.

He describes instances of fabrication, where results are simply made up and then published, as well as instances of falsification, where results are wilfully manipulated to tell a different story. In both, he expresses his concern that, “the public can be deceived in this way.” Moreover, he admonishes those who he believes have corrupted the scientific literature:

Such actions are of course extremely unworthy of any honourable man and should rightly bring disgrace upon him. Not only does he deceive the reading physicians in this way and shamefully betray the time and effort invested by them with the best of intentions, but he is also sinning against science itself by wilfully corrupting the degree of certainty of which science is capable and acting irresponsibly toward the public who entrust their health and lives to their physicians. Anyone who dares to misrepresent the truth so deliberately should consider carefully the unpredictable consequences of his actions and look to his conscience!

Nolde asks for a comprehensive approach to scientific reporting, but recognizes that this is a more difficult path:

Whoever has the will and resolve to present really instructive observations to the medical public will undoubtedly have to apply himself much more diligently than one who cares not what he writes to the world.

He also notes the difficulty this may present to the reader, but rejects the idea that this is an unnecessary burden:

I reject the criticism that the length and detail of such comprehensive reporting would bore and tire the reader. Anyone who, as a critic or a prospective physician seeking guidance, turns to such observations does not do so for amusement as in reading a novel. . . . The physician, if he so wills and has the ability to do so, can report his observations so that they read well and easily despite their necessary thoroughness. It is not the deluge of words or the number of pages that give a report its comprehensiveness, but rather the complete and accurate reporting of everything which is relevant, without falling into the trap of long-winded, tiresome verbosity. As regards the time a physician spends reading such reports of observations, he would have much less cause for regret if, in a day, he read two or three well-written reports than if he read hundreds which were of little use.

He also recognizes the importance of education, stating:

. . . it would be very desirable if it were very strongly impressed on young physicians at university that their duty was to remain loyal to the truth in all circumstances and that plying their trade in silence would be preferable to doing so with lies and deceit.

Nolde concludes:

Only when we know this relationship exactly and that the tests which produced the data were undertaken with all practical care, intelligence, diligence and attention, are we able to obtain the information and the degree of certainty needed in order to judge the value or worthlessness of a drug. Truth is always better than deception and definite certainty preferable to precarious uncertainty.

Regarding the research misconduct that he recognizes as toxic to scientific endeavour and the practice of medicine, he recommends that it be exposed and expunged:

. . . all corner-cutting, fabrications and deceptions of the ‘literary’ physicians, produced in their thousands, should be treated with the greatest contempt as soon as they are recognized as such and deserve no better than eternal oblivion.

Working in the late eighteenth century, Nolde was part of the medical and scientific enlightenment that recognized the deficiencies of a past reliant on folklore and anecdotes to inform medical practices. He and many of his contemporaries realized that good practice had to be founded on experience and, furthermore, that that experience should be gathered and reported in a rigorous way.

Writing specifically about the evaluation of new drug therapies, Nolde enumerated a series of key principles or rules that he proposed must be followed for such assessments to be valid.  His rules cover a number of the key aspects of the scientific method and would be readily recognizable to a modern-day clinical pharmacologist.  But, in addition to these principles of practice, Nolde felt the need to emphasize the ethical aspects of research practice and reporting.

Today, we are acutely aware of the problem of research misconduct, and major efforts are being made to expose and root out such practices.  We understand how the fabrication and falsification of research data and their publication can fatally undermine modern medicine, but so did Nolde, more than 200 years ago.  Not only did he recognize the problem, he also understood the implications of a corrupt scientific literature and its impact on patient care.  And he realized that the education of junior practitioners and researchers is key both to solving the problem and to making the practice professionally unacceptable.

Nolde’s contribution is not only of interest for historical curiosity but is also a potent reminder that the challenges of clinical research are not new. The problems we face today are similar to those that troubled the minds of our forebears. We are concerned with the quality of clinical research and its integrity and, at times, even its veracity. If we are looking for solutions, we might do worse than to consider those put forward by thinkers such as Nolde. Recognition of the problem, a public refusal to accept such a state of affairs, and then ensuring that junior staff are properly educated were Nolde’s solutions. These are also increasingly the modern solutions to our problem of misconduct and fraud in scientific research.

© Allan Gaw 2019

 

An earlier version of this article was published with Thomas Demant in the International Journal of History and Philosophy of Medicine 2016; 6: 10602 (www.ijhpm.org, doi: 10.18550/ijhpm.0602).


Why do we call it a stent?

 

 

They are a small select group.  Those individuals whose surnames have been transformed over the years into nouns and even verbs have certainly achieved immortality of a sort.  But, of course, often when we use these words we do not even recognise they were once names.  The French physician and inventor Joseph-Ignace Guillotin is a member of the group, as is the American industrialist William H. Hoover. And so is a now largely forgotten English dentist, Charles Thomas Stent.

Stent was born in Brighton in 1807 and practised dentistry in Victorian London.  His principal contribution to his field came in 1856 when he successfully modified the material used to make dental impressions.  Earlier in the nineteenth century the main impression materials had been beeswax and plaster of Paris.  Because these were far from perfect, the English dentist Edwin Truman had introduced the use of gutta percha in 1847.  This natural material, derived from rubber trees, was an improvement over its predecessors but was still found wanting.  It had a tendency to distort after removal from the patient’s mouth and would shrink on cooling.  To stabilise the gutta percha and improve its plasticity, Stent decided to add stearine derived from animal fat.  He also added talc as an inert filler and red colouring.  When the new, improved material was introduced it was an instant success, and indeed Stent was lauded by his profession.

Both Stent’s sons followed him into dentistry and together they founded a company, C.R. and A. Stent, that would manufacture the increasingly successful Stent’s Compound for the next four decades.  In 1901, when the second of his sons died, the dental supply company Claudius Ash and Sons of London purchased the rights and continued its manufacture under the Stent name.

But how did the name of a dental impression material find its way into the wider world of surgical devices?  The story is convoluted and not without controversy.  Stent’s compound was certainly widely known in dental circles throughout the latter half of the nineteenth century, but the story really begins with its use by a Dutch plastic surgeon in the First World War.  That surgeon, J.F. Esser, was trying to find novel ways of repairing serious facial wounds in soldiers from the trenches.  He described in 1917 how he used “the mould of denticle mass (Stent’s) in fixation of skin grafts in oral surgical repair of war wounds.”  Later in the same article, he referred to the material he used as “stent’s mould.”  This pioneering work was cited in a 1920 book on the plastic surgery of the face, in which the author noted, “The dental composition for this purpose is that put forward by Stent and a mould composed of it is known as a ‘Stent’.”  Thus, Charles Stent’s surname became a noun for the first time.

Throughout the subsequent decades of the twentieth century, surgeons trained in the UK and US would have been well aware of such material being used in oral and plastic surgery.  As there is often significant crossover from one sub-speciality to another, especially as they develop their own identities, the concept of the “Stent” would find its way into diverse fields.  In the reconstruction of the common bile duct, polyethylene tubes would be used to maintain the structure’s patency, and in 1954 this was referred to as “a stent for the anastomosis”.  In urology, where there is also an obvious need to hold tubes open, the word stent was first used in 1972.  However, today the most common use of the word stent is in cardiology.  Its first use in that field came only in 1966, in the context of heart valve surgery.  While its author was fully aware of the use of the word by plastic and oral surgeons, he later claimed that he thought it was “an all-purpose term for any kind of nonbiological support used to give shape or form to biological tissue.”  The notion of an endoarterial tube graft as we know it today is attributed to researchers in the 1960s, and the first coronary stent was implanted in Toulouse in 1986.

Today, the word stent has entered common usage and, particularly because of its wide use in cardiology, is well understood by any informed lay person.  Few, however, either in the medical professions or elsewhere, are aware of the origins of the name.  One cardiac surgeon noted, “The greatest accolade that can be given to any inventor is to have the initial capital letter dropped from his name, for that is recognition that the word is now in the general language.”  Charles Stent, the Victorian London dentist, has certainly achieved immortality, but his contribution to the development of the modern stent has often been overlooked.

 

© Allan Gaw 2018


Writing the rules for drug research: Johann Christian Reil (1759-1813)

Today, the name Johann Christian Reil is perhaps best known to neuroanatomists and those working in mental health, for this German physician, born over 250 years ago, made seminal contributions to both of these fields.  His legacy includes several structures in the brain that bear his name, as well as ‘psychiatry’, a specialty that he named.  However, earlier in his career, Reil was responsible for another, perhaps even more generally applicable, piece of work, when he set forth the principles on which the modern evaluation of drugs in humans should be based.

His life

Born in Northwest Germany, Reil began his medical education in Göttingen in 1779. He taught at the University of Halle for 22 years, during which time he established himself as an esteemed physician, scientist and educator. During his tenure, he promoted the idea that medical practice should be grounded in physiology, which in turn should have chemistry as its foundation.

In 1810, he moved to the new Berlin University as the Dean of the Medical Faculty, but the continued political unrest in Europe would interrupt his career and bring it to an end.  In 1813, during the Napoleonic Wars, Reil volunteered for military service in the Prussian Army.  He was commissioned to the field and, possibly as a result of his efforts to control a typhus epidemic then raging in Germany, he contracted the disease himself.  Approximately 10% of the German population were infected, and of these 10% (some 250,000) died.  Reil was one of these unfortunates, dying of the disease at the age of 54.

His approach to pharmacology

In 1799, while he was the Professor of Medicine at Halle, Reil published his Beitrag zu den Prinzipien für jede künftige Pharmakologie (Contribution to the principles of a future pharmacology), in which he proposed a set of rules for the conduct of pharmacological research. These were presented as a theoretical framework on which to build a scientific approach to the study and evaluation of drugs.

As his first rule, Reil chooses to emphasise the overall approach and motivation for pharmacological research and indeed for any form of scientific endeavour.

1) The observer must have good common sense, good understanding, judgement, know how to make observations but also have a healthy degree of scepticism. He should not allow himself to be influenced by egotism, doctrine, an attachment to his school, or any prejudice, but by the simple love of truth.

Next, he states the importance of standardization both with respect to the drug under test and to the research participants. He notes that the researcher has the power to control the quality and consistency of the drugs used, but that the inter- and intra-individual variation of human subjects can only be acknowledged and taken into account.

2) If the results of experiments, that is the changes brought about in the human body by the drugs, are consistent, they can be considered to be undoubtedly valid only if both the drugs and the human beings used in the series of tests are of a standardized nature. If the reagents which are working on one another are sometimes of one quality, sometimes of another, then the results will be correspondingly inconsistent…Every individual is different from another, and the same individual is not the same all of the time. With human beings a standard, average type has to be established, and before the experiment the test subjects must be assessed to identify exactly how they deviate from the standard so that variations in the results can be compensated for accurately for each individual according to how they deviated from the norm. How difficult this point is and yet how inexact efforts are with regard to it.

His third rule focuses on a similar theme, but here he discusses the importance of accurate diagnosis and previews one of his later themes—the all-importance of consistent nomenclature.

3) If we are experimenting on invalids the same things apply. Their illness must not be hypothetical, but real, identifiable from recognizable symptoms and deemed by the experimenter as really being present…How many mistakes have wormed their way into pharmacology because of confused terms about the nature of diseases and an imperfect semiotic of the same.

The reproducibility of research findings is emphasized in Reil’s fourth rule. The confounding effects of variables that have not been controlled and the cherry-picking of the desired result from a variety of non-standardised experiments are, in Reil’s opinion, reasons for the mistaken advocacy of ineffective treatments.

4) The experiments must be repeated often and under exactly the same conditions and in each repetition the results have to be the same. This alone can convince us that the results are effects of the drug. If, after one or more tests, a given effect sometimes is seen and sometimes not, the possibility remains that it was due not to the use of the drug but some other possible cause.

As an extension to his call for the control of variables within an experiment, Reil makes a call for the study of individual, or simple, drugs in his fifth rule. Here, he draws an important distinction between the study of drugs and their practical use, with his realization that compounds or mixtures will be prescribed. He argues, however, for their components to be tested first separately, and only when understood to be tested in combinations.

5) A drug must be tested on its own, not in conjunction with others, because otherwise it remains uncertain which of the substances used has brought about the effect in question. I am not suggesting, however, that in practice no compound substances should be used… First of all, we must determine the powers of the simple, individual substances in order to be able to work out the effect of compounds of them. The compounded substance must then be thoroughly tested, the same as for a simple, to understand the alteration of its effect which has been caused by its compounding.

The terminology we use to describe drug actions is called into question by Reil in his sixth rule. Here, he calls for specificity and a greater depth to our language that goes beyond mere superficial description.

6) The effects of drugs must be described specifically, not in terms that are too general, as otherwise they are of no practical use…It is not a question of whether the substance simply has an effect, but the ‘what’ and the ‘how’ of this effect.

In Reil’s seventh rule, he states the importance of the scientific method in the study of pharmacology. The critical importance of experiential learning and the process of inductive logic are emphasized.

7) The effects of drugs must be established either through direct experience or from conclusions, which were clearly able to be drawn from direct experience. Their characteristics have to be clearly described… Isolated observations must be collated and general results deduced according to certain rules (e.g. frequency, causality).

In Reil’s eighth and final rule, he returns to his call for improved terminology, with greater precision in the words we use to describe our experimental findings. After stating his rule, he goes on to offer a number of specific examples in support of his argument for a dedicated, technical language of pharmacology.

8) Finally, the terminology used in pharmacology deserves sharp criticism. The meaning we give to words needs to be more precise, more expansive and more accurate. Without this improvement we will remain virtually unintelligible to each other.

His rules in perspective

In the second half of the 18th century, medical practice was in turmoil after a period of relative stagnation lasting more than 500 years. Through numerous discoveries and a wholly different approach, medieval medicine was giving way to a more enlightened system. As part of this, there was a growing desire to place medical practice on a much more solid scientific foundation. The term pharmacology itself was probably coined in the late 17th century, but acquired its modern definition in 1791 from Friedrich Albrecht Karl Gren, a German chemist, physician and friend of Reil.  Gren distinguished the science of the action of drugs from the mere description and collection of drugs. This cataloguing of drugs was materia medica, while pharmacology was a science. A science, however, needed a rational framework and a method, and these Reil sought to provide.

Reil’s rules may be seen as a natural development in this context.  However, Reil was not the first to propose a set of rules to formalise the study and evaluation of drugs.  The ancient Greek physician and philosopher Galen, the 11th-century Persian polymath Avicenna, and the medieval physicians who followed them had done the same.

An obvious question to address is: how much was Reil influenced by his predecessors?

Galen and Avicenna had dominated the medical curriculum for centuries.  Galen’s works were widely read and studied in their original Greek and, from the 15th century, in Latin translations.  They formed part of the core curriculum in most European medical schools, but by the Renaissance their authority was being questioned, and by the mid to late 17th century they were no longer taught at European medical schools.

Similarly, Avicenna’s famous Canon of Medicine was available in Latin throughout the later medieval period and was probably first translated in the second half of the 12th century. The Canon continued to be used in medical schools up until the late 17th to early 18th centuries. However, by the time Reil attended medical school it would have fallen out of favour, to be replaced with more contemporary texts of the medical enlightenment.

In parallel with the upheaval in medical practices in the 18th century, there was a corresponding overhaul of medical education. Medical curricula were changing to accommodate new subjects such as chemistry, botany and physiology. While the classic texts such as those of Galen might still be read, they no longer formed part of the core syllabus.

Thus, Reil would have had access to the works of Galen and Avicenna, but it is questionable whether his medical education in the 1780s would have focussed on, or even included, these classic authors’ works. It is perhaps unsurprising then that Reil’s rules differ significantly in their emphasis from those of the ancients.   Like Galen and Avicenna, Reil recognized the need for appropriate experimental design, but he goes much further than his forebears in his calls for scientific rigour and for a new vocabulary to report our findings.

Conclusion

Reil, like many physicians of his day, was not content to specialize in one area.  His contributions to several branches of medicine, including pharmacology, are significant, but partly because of his premature death and partly because of his shifting interests, we had to wait until later in the 19th century to see a truly scientifically founded pharmacology.

© Allan Gaw 2018

 

This blog post is a shortened version of my paper in the European Journal of Pharmacology.

 


 

A Snail in the Ginger Beer

 

Donoghue v Stevenson 1932 and our duty of care

Introduction

Paisley may be known for its patterns and its shawls, but this town to the west of Glasgow has a much greater claim to fame, at least according to those interested in the law.  For it was there, 90 years ago, that one of the most important legal decisions in history, and one that still has implications for medical practice today, had its origins.

Mrs Donoghue’s ginger beer

On the hot summer evening of 26 August 1928, Mrs May Donoghue, a shop assistant from the East End of Glasgow, took a tram ride to Paisley.  There, she met a friend at the Wellmeadow Café and they decided to quench their thirst.  Her friend bought the refreshments, and Mrs Donoghue was served a glass of ice cream over which the waiter poured ginger beer from a bottle, making an iced drink or float.  After Mrs Donoghue drank some, her friend added more ginger beer from the dark glass bottle that had been left on the table.  As she poured, Mrs Donoghue noticed something fall into the glass, which she recognised as a decomposing snail.  Understandably, she immediately felt sick and became “shocked and ill.”  She was clearly affected by the event, for she sought medical treatment three days later from her own doctor, and again three weeks later, in mid-September, at Glasgow Royal Infirmary.

The legal case

Because it was her friend who had purchased the ginger beer, Mrs Donoghue had no contractual relationship with the café owner.  She would later learn that the only person she might sue was the manufacturer of the drink, David Stevenson, whose name was clearly written on the dark glass bottle in large white lettering.  Moreover, she would have to prove negligence on his part if she was to recover any damages, and that claim of negligence would require there to be a duty of care between her and Stevenson.  However, at the time, the law supported the existence of a duty of care to people harmed by the negligent acts of others only in very limited circumstances: where a contract existed between the parties, where the manufacturer was acting fraudulently, or where the product was inherently dangerous.  In Mrs Donoghue’s case none of these applied, but she was determined to seek damages and engaged the Glasgow solicitor Walter Leechman.  Interestingly, Leechman’s firm had represented the unsuccessful pursuers in two recent, similar “mouse in ginger beer” cases.  It seems more than a coincidence that of all the lawyers Mrs Donoghue might have consulted, the one she chose had both the experience and the resolve to pursue such a case.  Quite how she found him, or was directed to him, remains a mystery.  Leechman issued a writ against Stevenson claiming damages of £500 plus costs and noting that “snails and the slimy trails of snails were frequently found” in the factory where his ginger beer was manufactured and bottled.

Stevenson’s counsel moved the Court of Session to dismiss the claim and were eventually successful, so Leechman began the process of appealing the decision to the House of Lords.  However, Mrs Donoghue had no money.  Not only did she have to declare herself in writing as a pauper, so that she could be absolved of the need to post security to cover any costs if her appeal was unsuccessful, but her counsel had to represent her without any guarantee of payment.

On 10 December 1931, five Law Lords met to hear the first of two days’ arguments in Mrs Donoghue’s case.  Some five months later they delivered their judgement and by a majority of three to two they agreed she did have a case.  Mrs Donoghue, they ruled, was owed a duty of care by the manufacturer and bottler of the ginger beer and she could bring an action against him.  This duty of care was founded on the “neighbour principle” eloquently expounded by one of the law lords, Lord Atkin.  He summarised this principle in the ruling as follows:

“The rule that you are to love your neighbour becomes in law, you must not injure your neighbour; and the lawyer’s question, Who is my neighbour? receives a restricted reply. You must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour. Who, then, in law, is my neighbour? The answer seems to be—persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question.”

The “lawyer’s question” referred to was the one asked of Jesus in Luke’s Gospel, and which prompted Christ to tell the parable of the Good Samaritan.  Indeed, Lord Atkin’s ruling was firmly based on his reading of Judeo-Christian scripture.

Our duty of care today

The importance of this ruling lay in its implications for our understanding of negligence.  In law, for there to be medical negligence three criteria must be met.  First, a doctor must owe a duty of care to the patient in question. Second, there must be a breach of that duty of care.  And third, the breach must result in harm to the patient.  Thus, the concept of “duty of care” is central to our understanding of negligence, and it may be defined as an obligation we hold to take care to prevent harm being suffered by others.   In defining to whom we owe this duty of care as our “neighbour”, Lord Atkin created a new basis for the law of negligence, and of course his wide definition of neighbour would certainly include any doctor’s patient.

Conclusion

Not long after the House of Lords had ruled that Mrs Donoghue would be entitled to recover damages if she could prove what she alleged regarding the snail, David Stevenson died.  His executors agreed an out-of-court settlement of £200 (almost £10,000 today), and as such the case never went to trial.  May Donoghue died in a mental hospital in 1958, likely unaware of the global impact that her summer evening trip to Paisley had had some 30 years earlier.

Today, in Paisley you will find a small park, a bench and a memorial stone at the corner of Well St and Lady Lane where the café once stood.  Most locals know little of its significance, but occasionally you will see a stranger standing reading the inscription on the stone.  Often they will be lawyers, sometimes doctors, who have made the pilgrimage from England, from North America or from even further afield, just so they can stand on the spot where our duty of care and the concept of negligence began.

The site of the Wellmeadow Café today

 

Post script

In everything that has been written about this case, including all of the original legal documents, the bottle in question is said to have contained “ginger beer”.  What is perhaps not widely known is that “ginger” is a Glasgow colloquialism for any fizzy drink.  It is possible that in reality the bottle did not contain true ginger beer but some other form of flavoured, aerated water, such as orangeade, referred to by Mrs Donoghue and her friend simply as “ginger”.  Whether the bottle contained a snail at all is also a subject of controversy.  Some have argued that Mrs Donoghue’s claim was just a hoax to extort compensation.  Whatever the truth of the contents of that bottle, it remains the reason for our understanding of negligence around the world.

 

Detail from the adjacent mural painted by local artist Kevin Cantwell showing the famous snail and the ginger beer bottle.


© Allan Gaw 2018

This article was originally published in the MDDUS publication Insight.

 
