Blog


Why (most) Vaccine Mandates are a Distraction

At the present moment Australia’s Federal and State governments are racing to vaccinate their citizens. Whilst it was initially criticised as more of a stroll-out than a rollout, the arrival of the Delta variant of COVID-19 in Sydney seemed to provide an impetus to the vaccination programme. Certainly, significant problems remain, not least the apparent inclination to vaccinate those whose risk level is relatively minimal whilst those who are in priority groups, including Aboriginal and Torres Strait Islander communities, continue to have low levels of vaccination. Nevertheless, the programme now has some momentum and there is an expectation that a fairly high level of vaccination—around the 70-80% mark—can be achieved before the end of the year.

As the pandemic has unfolded there have been ongoing discussions about whether or not vaccination should be mandated in some way. On the face of it, the Prime Minister has been fairly clear that individuals will be able to choose for themselves whether or not to get vaccinated. Nevertheless, he has also refused to rule out the idea that employers might mandate vaccination for their employees. Furthermore, facing growing criticism of the vaccination programme and of the levels of vaccination amongst those who work in Residential Aged Care Facilities (RACF), the National Cabinet (which is to say the Prime Minister and all state and territory first ministers) agreed to mandate vaccination for those who work in this setting. State legislation to this effect has followed and those who have not received their first vaccination by mid-September will likely be unable to continue to work in RACF. Indeed, at the time of writing the AMA has called for all those who work in healthcare to be subject to mandatory vaccination, whilst others are seeking to extend the mandate to those working in other areas of the care sector.

Before we can plan for the future, we need to understand what it will look like.

It seems to me that one of the current problems with the way Australia is planning for the future of the COVID-19 pandemic at both Federal and State level is the way in which that future is being understood, both by those in power and by the citizenry in general. Sometimes I think that what I perceive to be implicit and explicit misunderstandings or misapprehensions are views that individuals genuinely hold. Sometimes I think the suppositions underlying the views being expressed merely suit the political inclinations of those speaking; that they do not really believe the implications or presumptions of what they are saying, it simply being the case that the position they are presenting is advisable from a political point of view. Either way, it is a problem. If we are to respond to the virus appropriately over the next few months, there needs to be a collective acceptance and understanding of the way COVID-19 is going to unfold, particularly now the Delta variant has become predominant. In my view the following points go relatively unacknowledged in current public discourse. Nevertheless, I also think them to be incontrovertible.

COVID Zero will soon be behind us

Over the past few weeks, as the number of COVID infections in Sydney and NSW first began and then continued to rise, various commentators and experts have questioned the idea that we need to learn to live with COVID. The idea seems to be that not only should we be aiming for COVID Zero in the short term (the next month or so) but that this should continue to be the strategy for the medium (in 6 months to a year) and perhaps even long term (for the next 5 to 10 years). Personally, I have no doubt that COVID will become endemic in Australia, as it now is in the rest of the world. The difference is that we still have the opportunity to exert some control over the virus and mitigate the consequences of its spread, albeit primarily through our vaccination programme. We should therefore be thinking about how to effectively manage the spread of the virus across Australia and how we should respond to various scenarios as they develop.

Whilst this does not mean we should immediately abandon our existing strategy of COVID Zero, it does mean that we need to think about when it is appropriate to change gears. The federal government’s four phase plan is an example of such thinking, albeit one that only scratches the surface of what needs to be considered. As such, and in contrast to the aforementioned commentators, I do think COVID-19 is something we need to learn to live with. Part of my reasoning concerns what the consequences of refusing to live with COVID-19 might look like. What are the decisions that we will need to make if we are to continue with the COVID Zero strategy in the medium and longer term?

Deciding to vaccinate is not just a personal decision, it is an ethical one.

On the back of increasing criticism of the rollout of Australia’s vaccination programme the Federal government has asked the Australian Health Protection Principal Committee (AHPPC) to reconsider the issue of mandating vaccination for those working in Aged Care facilities. Prompted by the most recent outbreak in Victoria, it seems that vaccination of residents is now proceeding fairly rapidly across Australia. There are, however, worries about continued low(er) rates of uptake amongst staff, not all of whom are healthcare professionals.

Indeed, this seems to be a broader issue to do with vaccine hesitancy. The absence of COVID-19 from Australia and worries about vanishingly rare cases of blood clotting associated with the AstraZeneca vaccine seem to have led some to delay getting vaccinated. Although some under 50s have taken the initiative and got themselves vaccinated early, those over 50s who are delaying do not seem to be concerned about vaccination in general. Rather, what seems to be going on is that they are hedging their bets, calculating that they are at very low risk of contracting COVID-19 because of the lack of cases. Thus, the recent spate of cases in Melbourne has directly resulted in an increase in the number of people seeking out a vaccine, something that many of them will have been putting off for the past few weeks.

The Long 2020: year of the COVID-19 Pandemic

According to historians the long 19th Century lasted 125 years. It started in 1789 with the French Revolution and ran through until the beginning of World War One in 1914. The 20th Century was, however, short. Closing with the dissolution of the Soviet Union in 1991 it lasted a mere 77 years. What this suggests is that whilst our calendars reflect the orderly nature of celestial revolutions, human events are not so neat. More often than not, true significance attaches to periods, and not units, of time.

This is likely to be true of 2020. Whilst it is apparently a unit of time, the year of COVID-19 is, in fact, a period of time. The long 2020 began with the first symptomatic patients, which have been traced to the start of December 2019. Indeed, this is why it is called COVID-19 and not COVID-20. As a result, and despite writing merely hours from the advent of 2021, one can still ask: what event will mark the end of the long 2020?

Of course, the day that the pandemic ends is the obvious choice. However, whilst the pandemic will eventually end, it is unrealistic to think that this will mean that COVID-19 will no longer trouble us. Although there are facts about the presence or absence of the virus SARS-CoV-2 in a particular population, such facts alone do not determine the presence or absence of a pandemic. There is very little of the virus in Australia, and yet we are also living through the pandemic and the long 2020 alongside those in countries with far higher rates of infection. Thus, a certain kind of significance must attach to the virus if we are to declare a pandemic, and the same applies if we are to declare its end.

It’s in the Abstract: Knobe and X-Phi’s Claim to Originality.

I often think that some – perhaps much – of the recent work in moral psychology and X-Phi does a good job of repeating the findings of other fields and forms of enquiry. On the one hand, this has value; on the other, the findings presented in such work are often presented as if they were novel. In so doing some of those working in the field can give the impression that they are largely unaware of related work in other fields. Consider the following abstract that I came across earlier today:

“It has often been suggested that people's ordinary capacities for understanding the world make use of much the same methods one might find in a formal scientific investigation. A series of recent experimental results offer a challenge to this widely-held view, suggesting that people's moral judgments can actually influence the intuitions they hold both in folk psychology and in causal cognition. The present target article distinguishes two basic approaches to explaining such effects. One approach would be to say that the relevant competencies are entirely non-moral but that some additional factor (conversational pragmatics, performance error, etc.) then interferes and allows people's moral judgments to affect their intuitions. Another approach would be to say that moral considerations truly do figure in workings of the competencies themselves. I argue that the data available now favor the second of these approaches over the first.”

Knobe, J. 2010. Person as scientist, person as moralist. Behavioral and Brain Sciences; 33(4): 315-29. doi: 10.1017/S0140525X10000907

Let’s take it step by step:

“It has often been suggested that people's ordinary capacities for understanding the world make use of much the same methods one might find in a formal scientific investigation.”

Well, it is certainly true that Piaget considered that the way in which children learn about the world could be likened to them being mini-scientists. However, there has been a lot of criticism of such views since then and, even at the time, Vygotsky took a rather different line. Perhaps this is unfair. Rather than talking about academic enquiries Knobe is talking about the more prosaic presumptions of ordinary people. If so, this seems decidedly odd. Ordinary people, or so we are led to believe, tend to think that scientists are boffins, off in ivory towers doing strange things in laboratories.

A World without Bioethicists: On Sally Phillips’ A World Without Down’s.

Last night BBC2 broadcast a documentary entitled ‘A World Without Down’s?’ Even if you did not see the programme itself, you may have heard about it on the radio, read some of the commentary published over the past week or spotted it on Twitter under the hashtag #worldwithoutDowns. It was one of the advance trails, specifically the presenter’s appearance on Frank Skinner’s On Demand, that first drew my attention to the programme. Here Sally Phillips talks about Peter Singer’s appearance on Hardtalk and, whilst she is hardly alone in doing so, I felt that she misunderstood what Singer has to say. As a result I intended to watch the documentary to see which bioethicists appeared and if their views were represented accurately.

Despite the programme consisting of Phillips speaking with various people involved with this issue – including doctors, scientists, individuals with Down’s syndrome and their parents, those who run support groups and one brave woman who had terminated a pregnancy following a positive test for Down’s – she did not actually speak to a bioethicist or, indeed, explicitly discuss any bioethical ideas [edit: although, see addendum at the foot of this post]. Thus, whilst one could think that this documentary was about a bioethical issue – prenatal testing and screening for Down’s Syndrome – there was not any real discussion of the matter from a bioethical perspective.

Call for Ethics Cases in Social Science Research

Whilst there are ethical concerns common to most social scientific research, many of the issues that arise in the course of conducting such projects are highly contextual and even idiosyncratic. As such it can be difficult to fully grasp the ethics of social science research. As with everyday life, the field’s ethics are resistant to being captured by succinct, conceptually tight principles that can be used to frame and analyse research proposals and the difficulties that arise in the course of conducting research. Whilst five generic ethical principles resulted from the work recently undertaken by the Academy of Social Sciences’ Research Ethics Group, and have influenced the recent revision to the ESRC’s Framework for Research Ethics, they have a more discursive nature than their biomedical counterparts.

As many have noted, it would be helpful if a library of illustrative ethics case studies were available for researchers to draw on. Given that a set of such cases would reflect and complement the existing Case Studies in Research Methods built up by Sage over the past few years, I have agreed to select and edit a series of short (3-5,000 words) essays focused on the ethical dimension of specific research projects that relate ‘what actually happened.’ The purpose of these cases is not to offer a dry ethical analysis or to reduce a case to its principles. Rather, the aim is to illustrate and illuminate the ethical dimension of social scientific research as it is conducted or practiced.

Call for Chapters: Virtue Ethics in the Conduct and Governance of Social Science Research

Call for Chapters 

Following an event organised by the BSA / Academy of Social Sciences on the topic in May 2015 I am editing a collection of essays to be published under the (working) title ‘Virtue Ethics in the Conduct and Governance of Social Science Research.’ The contract for the book has been signed and it will appear in 2017. The structure of the book is as follows: 

 Section 1: Virtue and Integrity in Social Science Research

 Section 2: Virtue and the Review/ Governance of Social Science Research

 Section 3: Phronesis in the Conduct and Governance of Social Scientific Research.

The first two sections of the book will contain the papers given in 2015, whilst the third section has resulted from the interests of additional contributors. 

I am interested in hearing from potential authors for chapters in each of these sections. If there is a particular motivation to add a section this may be possible. To discuss a proposal please contact me at n.emmerich@qub.ac.uk




Have we Reached Peak Moral (bio)Enhancement?


The other day I read an article ‘Procedural Moral Enhancement’ written by Schaefer and Savulescu and recently published in Neuroethics. The argument can be summarized as follows: All things being equal, a procedural approach to moral deliberation can improve the quality of that deliberation and the reliability of its conclusions. Moral deliberation that is conducted in accordance with their procedural approach is ‘generally acceptable across a wide range of normative and meta-ethical theories’ (p.2) and, more than this, is said to be neutral with respect to the substantive content being addressed. Thus, advocating for moral (bio)enhancement is an ethically neutral proposal when such enhancement targets the abilities or skills (intelligence, empirical competence, openness to revision, empathetic understanding, and bias avoidance) that support procedural deliberation.

Although this procedural approach is a not uncommon Rawlsian position, there is no acknowledgement of the almost equally common critiques of its ‘liberal neutrality’ rooted in feminist perspectives (see Anderson’s fantastic book, which does similarly for the related Habermasian attempt to proceduralise moral debate in the public square). Whilst this is a failing of the paper, there is something more interesting at play. Whilst this essay purports to be about moral enhancement and/or bioenhancement, very little is actually said on this point. There is no mention of this, that or the other bioenhancing neurochemical, merely a note that “our proposal suggests a promising approach to moral bioenhancement … [and m]any of the capacities we identify should be susceptible to biological improvement, at least in principle” (p.11). Somewhat drily the authors subsequently state: “much more research needs to be done in this area before interventions can be seen as viable” (p.11).

If we relinquish our phones, how long until the police want to decrypt our minds?


Over recent weeks the FBI has been attempting to legally compel Apple to help them access an iPhone belonging to a suspected terrorist. This is, it appears, one of a number of similar endeavours in what is and will continue to be a larger effort by the FBI and other intelligence agencies to ensure they can access the increasing variety of devices that many of us now have. 

Having just won a similar case, Apple seem in a strong position to resist further legal arguments that would compel them to provide assistance in this and comparable cases, not least because it would require them to undermine the security of their own products – the consequences of which are succinctly summarised in a cartoon by Stuart Carlson.

Extended mind

Drawing on the extended mind thesis first put forward by Andy Clark and David Chalmers, philosopher Matthew Noah Smith has argued that iPhones can be considered an extension of our minds. First, the way we use them to store information can be seen as an expansion of our memories. Not only do we use them to record information - photos, shopping lists and passwords - that we either cannot or do not wish to memorise; they can now automatically present us with that information according to spatial, temporal and cybernetic prompts - we are reminded about meetings in a timely fashion, to pick up garlic when near the supermarket, and our passwords are provided automatically when we log in to a wide variety of sites.

Are Recent Judgements by the Care Quality Commission Symbolically Violent?

Earlier in the year I co-authored a paper in BMC Medical Ethics. It was part of a cross-journal special issue on the Many Meanings of Quality in Healthcare and argued that the evaluation of ‘care’ held significant potential for symbolic violence. The main thrust of my paper was that the auditing of specific health and social care institutions necessarily involved a certain level of bureaucratic standardization. As such, the work of bodies like the Care Quality Commission (CQC) involves the imposition of a formal evaluative framework, one that reduces the thick context(s) of practice and care to a thin, semi-quantified account structured by the requirements and imperatives of bureaucratic evaluations and a culture of audit. In short, bodies like the CQC are organized so that, albeit implicitly, they care less about the quality of care than they do about the bureaucratic evaluation of the quality of care. Put another way, we might say that our ability to audit the quality of care lacks a certain degree of nuance and, therefore, quality.

Does a Fall in Organ Donation Rates Justify a move to an Opt-Out Organ Donor Register?


NHS Blood and Transplant service (NHSBT) recently published their Organ Donation and Transplantation Activity Report for 2014/15. It shows that there has been a 3% fall in the number of individuals who become post-mortem organ donors. This is the first time in 11 years that there has been a fall, and reporting of the figure has been accompanied by calls for the UK as a whole to move to an ‘opt-out’ system of registration; a system in which consent to donation is presumed. The context for this call is that, against expectations, a 2008 report by the Organ Donation Taskforce took the view that it would be premature to change the UK’s system of registration to one where consent was presumed. Instead it recommended that other methods of increasing the number of organs available for transplant should be pursued. Subsequently, between 2008/09 and 2013/14, there was a 50% increase in the number of post-mortem organ donors. Thus, whilst this recent fall is discouraging, it comes on the back of significant progress, progress that has met the targets set out in the 2008 report.

Funding Expensive Treatments on the NHS, and Funding the NHS

The cost of treatment is one of the biggest areas of the NHS budget and, whilst it is not often discussed openly, such costs need to be managed and controlled just like any other expenditure. However, given their implications for people’s health, such decisions need to be approached with care and taken in a consistent manner. Neither elected politicians nor frontline clinicians can realistically be expected to do so. In the case of the latter the cost of treatment cannot be considered directly if healthcare professionals are to maintain the trust of patients. David Cameron’s ‘Cancer Drug Fund’ provides an example of the problem in the case of the former. Simply put, the fund circumvents NICE – the body responsible for considering the cost effectiveness of treatments – and the principles that guide decision-making about the affordability of expensive drugs. This fund illuminates something of the difficulties that occur when, for one reason or another, debates about the allocation of resources become politicised.

Comments on Atul Gawande’s 2014 Reith Lectures: The Idea of Well-Being (Part 4)

I have been blogging about the 2014 Reith Lectures currently being given by Atul Gawande. This is the final part, Part 4, and is related to Gawande’s fourth talk, entitled ‘The Idea of Well-Being.’ Part 3, which responds to the third lecture ‘The Problem of Hubris,’ is here. Part 2, which responds to the second lecture ‘The Century of the System,’ is here. Part 1, which responds to the first lecture ‘Why Doctors Fail,’ is here.


In his final Reith lecture, recorded at the India International Centre in Delhi, Gawande addressed The Idea of Wellbeing. Gawande starts the lecture with some family history. His parents were both from India and his family’s fortunes have changed significantly in just three generations. This is used to illustrate the changing nature of public health in ‘advancing economies’ like India. Previous issues concerning malnutrition and diarrhoea are increasingly giving way to Western illnesses like diabetes and hypertension. The message is that the major challenges to population health are changing from the acute to the chronic, and the responses require a shift in perspective from health to wellbeing or, one might say, being well. Solutions, if indeed there are any, are less about infrastructure, like the provision of clean water, and more about social structures, our individual and collective behaviours, the way we eat, exercise and lead our lives.

Comments on Atul Gawande’s 2014 Reith Lectures: The Problem of Hubris (Part 3)

I have been blogging about the 2014 Reith Lectures currently being given by Atul Gawande. This is Part 3, and is related to Gawande’s third talk entitled ’The Problem of Hubris.’ Part 2, which responds to the second lecture ‘The Century of the System,’ is here. Part 1, which responds to the first lecture ‘Why Doctors Fail,’ is here.


In this week’s lecture Gawande again spent a lot of time recounting an illustrative case. This time it was about a family friend who had a recurrent cancer that ultimately proved untreatable. The purpose of the story is, first, to show the difficulty doctors have when dealing with patients whose conditions are terminal, second, to consider how healthcare professionals can help these patients to live well whilst dying and, third, to examine the connection between these two phenomena.

The difficulties medical professionals have when dealing with death and dying seem to provide the inspiration for the lecture's title - The Problem of Hubris. However, it seems a little ungenerous to think that individual doctors are reluctant to talk to patients about death and dying because they have an 'overweening confidence' in their ability to treat patients. After all, the empirical evidence clearly augurs against such presumptions. Rather, much like the rest of us, I think healthcare professionals find talking about death and dying difficult and emotionally challenging. It is easier for them to retreat into their socially sanctioned roles and provide the patient with medical facts and information.

Comments on Atul Gawande’s 2014 Reith Lectures: The Future of Medicine (Part 2)

I have been blogging about the 2014 Reith Lectures currently being given by Atul Gawande. This is Part 2, and is related to Gawande’s second talk entitled ‘The Century of the System.’ Part 1, which responds to the first lecture ‘Why Doctors Fail,’ is here.


Gawande devoted much of this lecture to recounting the case history of a three-year-old child who plunged through the ice into a pond, spent thirty minutes under water and ultimately survived with no apparent negative consequences. The tale is, I think, designed to illustrate the value of a systemic approach to medical practice. The young girl’s life was saved as a result of a lot of different interventions happening at the right time and in the right order over the course of at least two days. As one of the commentators following the lecture pointed out, this is a highly unusual occurrence. Certainly, people who spend that long under water in warmer conditions will die. Thus, in this case, the circumstances of the injury are an important part of medicine’s ability to save the lives of people like this young girl.

Comments on Atul Gawande’s 2014 Reith Lectures: The Future of Medicine (Part 1).

Atul Gawande is one of those sickeningly accomplished individuals who succeeds at everything they do. His day job is surgery but he is world renowned for his work on healthcare and healthcare systems. I have found his previous writing stimulating but have not yet read his latest work Being Mortal. He is giving this year’s Reith Lectures and I am going to try and write something on each one. 

This is Part 1, and is related to Gawande’s first talk ‘Why Do Doctors Fail?’


Offering an enlightening mix of the personal and the professional, the first of Gawande’s Reith Lectures addresses the question ‘Why do Doctors Fail?’ Whilst much of his career, particularly The Checklist Manifesto, has been concerned with the avoidance of preventable error and ensuring the practice of medicine meets the highest standards possible, this lecture engages with a broader set of concerns.

To this end he makes use of a perspective set out by Gorovitz and MacIntyre in a 1976 article ‘Toward a Theory of Medical Fallibility.’ The article’s subtitle - Distinguishing Culpability from Necessary Error - makes clear their view. There are medical errors for which medical professionals are responsible but there are others that are unavoidable: there can be no error-free medical practice.

Towards a Vocational Ethics for Scientific Researchers

What follows is the text of a brief talk I gave as part of the QUB leg of a series of events run by the Nuffield Council on Bioethics concerning The Culture of Scientific Research.

Towards a Vocational Ethics for Scientific Researchers

My name is Nathan Emmerich and I did my PhD here at QUB, looking at the ethical education of medical students. At the present time I am a Visiting Research Fellow in PISP where, amongst other things, I am writing about the idea of ethical expertise. I have also been part of the Academy of Social Sciences’ project on Generic Ethical Principles for the Social Sciences. If you are interested you can find out more about this project on the Academy website, including a position paper on ethics in social science research that concluded a major phase of the project. It is interesting that some of the broader questions that arose during the course of this work are reflected in the concerns expressed by yourselves in advance of this meeting. Part of what I am interested in is connecting what we normally think of as the ethics of research with these broader issues and, at least for the social sciences, I am trying to do so by moving away from talking about ‘research ethics’ and instead returning to the idea of a professional ethics or science as a vocation. I am going to try to illustrate these points by talking about Stanley Milgram’s infamous obedience experiments.

A Quibble with Baker’s Before Bioethics


My pile of books to read over this past Christmas and New Year included Baker’s recently published ‘Before Bioethics: A History of American Medical Ethics from the Colonial Period to the Bioethics Revolution.’ I was asked to review it for Social History of Medicine and have duly done so (short version: it’s very good and you should read it if it is of interest to you or relevant to your work in any way). However, one thing caused me some disquiet that, due to the constraints of length, I did not get the chance to address in my review. It is a very minor point and the relevant text in Baker’s book is about three pages long. Regardless, it has stayed with me so I thought I would tackle it here.

In his Chapter ‘Explaining the Birth of Bioethics, 1947-1999’ Baker has a section ‘Research Oversight: The Origins and Atrophy of Professional Self-regulation’ (p.281). It is subtitled ‘Percival’s Proposal for Research Ethics Committees.’ The reference is to Thomas Percival (1740-1804), a founding figure in the codification of medical ethics and, therefore, the professionalization of medicine i.e. the social institutionalization of medicine as a profession.* Interestingly Percival’s writings exerted their clearest and most immediate influence on the emergence of the American medical profession and not in the UK, where he lived and worked.