Nonhuman animal ("animal") experimentation is typically defended by arguments that it is reliable, that animals provide sufficiently good models of human biology and diseases to yield relevant information, and that, consequently, its use provides major human health benefits. I demonstrate that a growing body of scientific literature critically assessing the validity of animal experimentation generally (and animal modeling specifically) raises important concerns about its reliability and predictive value for human outcomes and for understanding human physiology. The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice. Additionally, I show how animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods. The resulting evidence suggests that the collective harms and costs to humans from animal experimentation outweigh potential benefits and that resources would be better invested in developing human-based testing methods.
Objective: Previous studies in China showed large sex differences in childhood overweight and obesity (OW/OB) rates. However, limited research has examined the causes of these sex differences. The present study aimed to examine individual and parental/familial factors associated with sex differences in childhood OW/OB rates in China. Design: Variables associated with child weight status, beliefs and behaviours, and obesity-related parenting practices were selected to examine their sex differences and association with sex differences in child OW/OB outcomes using logistic regression analysis. Setting: Cross-sectional analysis of data from the 2011 China Health and Nutrition Survey. Subjects: Children aged 6-17 years (n 1544) and their parents. Results: Overall child OW/OB prevalence was 16.8%. Adolescent boys (AB; 12-17 years) were about twice as likely to be overweight/obese as adolescent girls (AG; 15.5 v. 8.4%, P<0.05). AB were more likely than AG to have energy intake exceeding recommendations, to perceive themselves as underweight, to underestimate their body weight and to be satisfied with their physical activity level. AG were more likely than AB to practise weight-loss management through diet and to perceive themselves as overweight. Mothers were more likely to identify AG's weight accurately but to underestimate AB's weight. Stronger associations with risk of childhood OW/OB were found in boys than in girls for dieting to lose weight (OR = 6.7 in boys v. 2.6 in girls) and for combined maternal and child perception of the child's overweight (OR = 35.4 in boys v. 14.2 in girls). Conclusions: Large sex differences in childhood obesity may be related to sex disparities in weight-related beliefs and behaviours among children and their parents in China.
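The odds ratios reported above come from logistic regression, but an odds ratio itself is simple arithmetic on a 2x2 exposure-by-outcome table: OR = (a/b)/(c/d) = ad/bc. As a minimal sketch (the counts below are purely hypothetical and are not the study's data; they are chosen only so the result echoes the boys' reported OR of 6.7):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: OR = (a/b) / (c/d) = (a*d) / (b*c).

    a: exposed with outcome      b: exposed without outcome
    c: non-exposed with outcome  d: non-exposed without outcome
    """
    return (a * d) / (b * c)

# Illustrative counts for 'dieting to lose weight' vs. OW/OB status in boys
# (hypothetical numbers, not drawn from the 2011 CHNS data):
boys_or = odds_ratio(20, 30, 9, 91)
print(round(boys_or, 1))  # -> 6.7
```

In a full analysis the adjusted OR would instead be obtained by exponentiating the fitted logistic-regression coefficient for the exposure, which controls for covariates; the raw 2x2 calculation above corresponds to the unadjusted (crude) estimate.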
We describe a new freshwater myxosporean species Ceratomyxa gracillima n. sp. from the gall bladder of the Amazonian catfish Brachyplatystoma rousseauxii; the first myxozoan recorded in this host. The new Ceratomyxa was described on the basis of its host, myxospore morphometry, ssrDNA and internal transcribed spacer region (ITS-1) sequences. Infected fish were sampled from geographically distant localities: the Tapajós River, Pará State; the Amazon River, Amapá State; and the Solimões River, Amazonas State. Immature and mature plasmodia were slender, tapered at both ends, and exhibited vermiform motility. The ribosomal sequences from parasite isolates from the three localities were identical, and distinct from all other Ceratomyxa sequences. No population-level genetic variation was observed, even in the typically more variable ITS-1 region. This absence of genetic variation in widely separated parasite samples suggests high gene flow as a result of panmixia in the parasite populations. Maximum likelihood and maximum parsimony analyses placed C. gracillima n. sp. sister to Ceratomyxa vermiformis in a subclade together with Ceratomyxa brasiliensis and Ceratomyxa amazonensis, all of which have Amazonian hosts. This subclade, together with other Ceratomyxa from freshwater hosts, formed an apparently early diverging lineage. The Amazonian freshwater Ceratomyxa species may represent a radiation that originated during marine incursions into the Amazon basin that introduced an ancestral lineage in the late Oligocene or early Miocene.
Empirical work has shown that patients and physicians have markedly divergent understandings of treatability statements (e.g., "This is a treatable condition," "We have treatments for your loved one") in the context of serious illness. Patients often understand treatability statements as conveying good news for prognosis and quality of life. In contrast, physicians often do not intend treatability statements to convey improvement in prognosis or quality of life, but merely that a treatment is available. Similarly, patients often understand treatability statements as conveying encouragement to hope and pursue further treatment, though this may not be intended by physicians. This radical divergence in understandings may lead to severe miscommunication. This paper seeks to better understand this divergence through linguistic theory, in particular H.P. Grice's notion of conversational implicature. This theoretical approach reveals three levels of meaning of treatability statements: (1) the literal meaning, (2) the physician's intended meaning, and (3) the patient's received meaning. The divergence between the physician's intended meaning and the patient's received meaning can be understood to arise from the lack of shared experience between physicians and patients, and the differing assumptions that each party makes about conversations. This divergence in meaning raises new and largely unidentified challenges to informed consent and shared decision making in the context of serious illness, which indicates a need for further empirical research in this area.
Pharmaceuticals or other emerging technologies could be used to enhance (or diminish) feelings of lust, attraction, and attachment in adult romantic partnerships. Although such interventions could conceivably be used to promote individual (and couple) well-being, their widespread development and/or adoption might lead to the 'medicalization' of human love and heartache, which is, for some, a source of serious concern. In this essay, we argue that the medicalization of love need not necessarily be problematic, on balance, but could plausibly be expected to have either good or bad consequences depending upon how it unfolds. By anticipating some of the specific ways in which these technologies could yield unwanted outcomes, bioethicists and others can help to direct the course of love's medicalization, should it happen to occur, more toward the 'good' side than the 'bad.'
Disturbing cases continue to be published of patients declared brain dead who later were found to have a few intact brain functions. We address the reasons for the mismatch between the whole-brain criterion and brain death tests, and suggest solutions. Many of the cases result from diagnostic errors in brain death determination. Others probably result from a tiny amount of residual blood flow to the brain despite intracranial circulatory arrest. Strategies to lessen the mismatch include improving brain death determination training for physicians, mandating a test showing complete intracranial circulatory arrest, or revising the whole-brain criterion.
In this paper, the author argues that Joseph Fins' mosaic decisionmaking model for brain-injured patients is untenable. He supports this claim by identifying three problems with mosaic decisionmaking. First, it is unclear whether a mosaic is a conceptually adequate metaphor for a decisionmaking process that is intended to promote patient autonomy. Second, the proposed legal framework for mosaic decisionmaking is inappropriate. Third, it is unclear how we ought to select patients for participation in mosaic decisionmaking.
A new generation of implantable brain-computer interface (BCI) devices has been tested for the first time in a human clinical trial, with significant success. These intelligent implants detect specific patterns of neuronal activity, such as an epileptic seizure, and provide information to help patients respond to the upcoming neuronal events. By forecasting a seizure, the technology keeps patients in the decisional loop; the device gives patients control over how to respond and lets them decide on a therapeutic course ahead of time. Being kept in the decisional loop can improve patients' quality of life; however, doing so does not come free of ethical concerns. There is currently a lack of evidence concerning the various impacts of closed-loop BCI systems on patients' decisionmaking processes, especially how being in the decisional loop affects patients' sense of autonomy. This article addresses these gaps by providing data that we obtained from a first-in-human clinical trial involving patients implanted with advisory brain devices. It explores ethical issues related to the risks involved in being kept in the decisional loop.
Neuroprosthetic speech devices are an emerging technology that can offer the possibility of communication to those who are unable to speak. Patients with 'locked-in syndrome,' aphasia, or other such pathologies can use covert speech (vividly imagining saying something without actual vocalization) to trigger neurally controlled systems capable of synthesizing the speech they would have spoken, but for their impairment. We provide an analysis of the mechanisms and outputs involved in speech mediated by neuroprosthetic devices. This analysis provides a framework for accounting for the ethical significance of accuracy, control, and pragmatic dimensions of prosthesis-mediated speech. We first examine what it means for the output of the device to be accurate, drawing a distinction between technical accuracy on the one hand and semantic accuracy on the other. These are conceptual notions of accuracy. Both technical and semantic accuracy of the device will be necessary (but not yet sufficient) for the user to have sufficient control over the device. Sufficient control is an ethical consideration: we place high value on being able to express ourselves when we want and how we want. Sufficient control of a neural speech prosthesis requires that a speaker can reliably use their speech apparatus as they want to, and can expect their speech to authentically represent them. We draw a distinction between two relevant features which bear on the question of whether the user has sufficient control: the voluntariness of the speech and the authenticity of the speech. These can come apart: the user might involuntarily produce an authentic output (perhaps revealing private thoughts) or might voluntarily produce an inauthentic output (e.g., when the output is not semantically accurate). Finally, we consider the role of the interlocutor in interpreting the content and purpose of the communication.
These three ethical dimensions raise philosophical questions about the nature of speech, the level of control required for communicative accuracy, and the nature of 'accuracy' with respect to both natural and prosthesis-mediated speech.
Long-term patient outcomes after severe brain injury are highly variable, and reliable prognostic indicators are urgently needed to guide treatment decisions. Functional neuroimaging is a highly sensitive method of uncovering covert cognition and awareness in patients with prolonged disorders of consciousness, and there has been increased interest in using it as a research tool in acutely brain injured patients. When covert awareness is detected in a research context, this may impact surrogate decisionmaking-including decisions about life-sustaining treatment-even though the prognostic value of covert consciousness is currently unknown. This paper provides guidance to clinicians and families in incorporating individual research results of unknown prognostic value into surrogate decisionmaking, focusing on three potential issues: (1) Surrogate decisionmakers may misinterpret results; (2) Results may create false hope about the prospects of recovery; (3) There may be disagreement about the meaningfulness or relevance of results, and appropriateness of continued care.
In Finland, as elsewhere around the globe, great weight is placed on the potential of large data collections and 'big data' to generate economic growth, enhance medical research, and boost health and wellbeing in entirely new ways. This massive data gathering and usage is justified by the moral principle of improving health. The imperative of health thus legitimizes data collection, new infrastructures and innovation policy. It is also supported by the rhetoric of health promotion. New arrangements in health research and innovations in the health sector are justified on the grounds that they produce health, while the moral principle of health also obligates individual persons to pursue healthy lifestyles and become healthy citizens. I examine how, in this context of Finnish data-driven medicine, arguments related to privacy and autonomy become silenced when contrasted with the moral principle of health.
The practice of obtaining informed consent has its history in, and gains its meaning from, medicine and biomedical research. Discussions of disclosure and justified nondisclosure have played a significant role throughout the history of medical ethics, but the term “informed consent” emerged only in the 1950s. Serious discussion of the meaning and ethics of informed consent began in medicine, research, law, and philosophy only around 1972.
In a recent paper in Nature entitled "The Moral Machine Experiment," Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decisionmaking capacities of current so-called "autonomous vehicles" and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the Moral Machinists' argument proceeds in four steps: (1) Find out what "public morality" will prefer to see happen. (2) On the basis of this discovery, claim both popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face. (3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences. (4) This yields "permission" to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants. This paper argues that the Moral Machine Experiment fails dramatically on all four counts.
Human and animal research both operate within established standards. In the United States, criticism of the human research environment and recorded abuses of human research subjects served as the impetus for the establishment of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and the resulting Belmont Report. The Belmont Report established key ethical principles to which human research should adhere: respect for autonomy, obligations to beneficence and justice, and special protections for vulnerable individuals and populations. While current guidelines appropriately aim to protect the individual interests of human participants in research, no similar, comprehensive, and principled effort has addressed the use of (nonhuman) animals in research. Although published policies regarding animal research provide relevant regulatory guidance, the lack of a fundamental effort to explore the ethical issues and principles that should guide decisions about the potential use of animals in research has led to unclear and disparate policies. Here, we explore how the ethical principles outlined in the Belmont Report could be applied consistently to animals. We describe how concepts such as respect for autonomy and obligations to beneficence and justice could be applied to animals, as well as how animals are entitled to special protections as a result of their vulnerability.