The Ethics of AI in the Brain
Innovative machine-learning devices enable patients with debilitating conditions to perform actions that would otherwise be impossible. Brain-Computer Interfaces promise major therapeutic advances while also showing potential for entertainment and cognitive enhancement. Yet concerns about privacy, agency and the essence of human existence have led to their description as the “greatest ethical challenge that neuroscience faces today”, as these devices redefine how invasively technology can enter our personal lives.
How is Machine Learning Currently Utilized?
Brain-Computer Interface (BCI) devices have been used as assistive technologies for patients who cannot communicate or move because of spinal cord injuries or amyotrophic lateral sclerosis. Robotic prostheses and BCI spellers bridge the “muscular” gap between neural intention and desired action in people who are paralyzed or locked-in. This “artificial” control requires direct detection of brain activity through electrodes and the ability to decode the signals that convey intention. Artificial Intelligence (AI) translates this information into executable output; the system is calibrated by having the user perform motor cortex cognitive tasks, so that an artificial neural network (a digital model of neurons designed to mimic how the brain processes and learns information) can piece together the user’s unique pattern of brain activity.
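To make that decode-then-act loop concrete, the sketch below shows, in broad strokes, how a decoder might be calibrated on a user’s brain-activity features and then used to issue a command. The synthetic data, feature counts and scikit-learn classifier are illustrative assumptions, not a description of any particular clinical system.

```python
# Minimal, illustrative sketch of the decode-then-act loop described above.
# All data here is synthetic; a real BCI would use multichannel electrode
# recordings and far more careful signal processing and calibration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Calibration phase: the user performs known cognitive tasks (e.g. imagined
# left- vs right-hand movement) while features are extracted from each
# electrode channel; the trial and feature counts below are hypothetical.
n_trials, n_features = 200, 16
X_calib = rng.normal(size=(n_trials, n_features))
y_calib = rng.integers(0, 2, size=n_trials)   # 0 = "left", 1 = "right"
X_calib[y_calib == 1] += 0.8                  # synthetic class difference

# The decoder learns the user's unique neural patterns during calibration.
decoder = make_pipeline(StandardScaler(), LogisticRegression())
decoder.fit(X_calib, y_calib)

# Online phase: newly recorded brain activity is decoded into a command.
new_sample = rng.normal(size=(1, n_features)) + 0.8
intended = decoder.predict(new_sample)[0]
print("Decoded command:", "move right" if intended == 1 else "move left")
```

Even in this toy form, the key point is visible: the command that reaches the prosthesis is the classifier’s best guess about intention, learned from a calibration session, rather than a direct readout of the user’s will.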
Machine learning (the process by which neural networks learn to recognise patterns) is also applied to epilepsy: following the same principles, changes in brain activity are detected and characterised by the neural network as an impending seizure, allowing rapid detection (perhaps even of a seizure that has yet to happen!) and treatment. Contemporary research aims to adapt similar technology to treat Parkinson’s disease. Neuroscientists are also constructing an electrical model of the brain, informed by AI algorithms, that could change our perspective on memory and disease and enable safer testing of drugs and the development of therapeutic devices.
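As a rough illustration of the detection idea, the sketch below scans a recording in short windows and raises a warning when activity looks abnormal. A real system would replace the simple variance threshold with a trained neural network; the sampling rate, window length, threshold and simulated signal here are all hypothetical.

```python
# Illustrative sliding-window detector: flag windows of unusual activity.
import numpy as np

rng = np.random.default_rng(1)
fs = 256                                        # assumed sampling rate (Hz)
baseline = rng.normal(0, 1.0, size=fs * 20)     # 20 s of typical activity
pre_seizure = rng.normal(0, 4.0, size=fs * 5)   # 5 s of abnormal activity
signal = np.concatenate([baseline, pre_seizure])

window = fs * 2        # analyse the recording in 2-second windows
threshold = 2.0        # flag windows whose amplitude spread looks unusual

for start in range(0, len(signal) - window + 1, window):
    segment = signal[start:start + window]
    if segment.std() > threshold:               # stand-in for a trained classifier
        print(f"Possible impending seizure flagged at t = {start / fs:.0f} s")
        break
```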
What are the Ethical Implications of AI Technologies?
The novel field of Neuroethics focuses on optimising medical care through machine-learning advances in monitoring and influencing the brain, while weighing the concomitant issues of ethical decision-making and agency. The symbiosis between humans and technology is a delicate matter that raises questions such as “How much of what I am experiencing is my own thought pattern?” and “Should anything have control over one’s actions?” The ability to act in accordance with one’s choices stands at the forefront of the sense of self. In this regard, assistive BCIs are instrumental in enabling immobilized patients to act on their intentions, promoting human dignity through increased independence. Unfortunately, life is never as simple as that.
AI algorithms infer intention from neural signals with a degree of uncertainty, resulting in imprecise output. BCI spellers, for example, use technology similar to the “AutoCorrect” feature in text messaging: drawing on common phrases and grammar, the software aids sentence formation, yet it frequently guesses wrong. Given the limited precision with which a speller’s input can be calibrated, the concern about truthful expression becomes more apparent. The issue is compounded by the everyday human experience of not vocalizing every opinion we have. That privacy of thought is breached when an AI device acts on decoded input and reveals an attitude that would not normally be made public, creating a conflict between inner thought and outward expression.
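The AutoCorrect analogy can be made concrete with a small sketch: a noisily decoded letter string is snapped to the closest word in a vocabulary, which may or may not be what the user intended. The tiny vocabulary and the decoded string are invented for illustration and do not reflect any real speller’s word list.

```python
# Sketch of the autocorrect-style layer described above: the decoder emits a
# noisy letter sequence, and a predictive layer picks the closest known word.
from difflib import get_close_matches

# Hypothetical vocabulary the speller's predictive layer chooses from.
vocabulary = ["hello", "help", "hungry", "thirsty", "yes", "no"]

def correct(decoded_word: str) -> str:
    """Return the closest vocabulary word, or the raw decoding if nothing is close."""
    matches = get_close_matches(decoded_word.lower(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else decoded_word

noisy_decoding = "hlep"            # imprecise output from the decoder
print(correct(noisy_decoding))     # prints "help", which may or may not be what the user meant
```

Even this toy example shows that the predictive layer, not the user, ultimately chooses the word that gets expressed, which is exactly where the worry about truthful expression arises.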
Beyond the ethical dilemma of agency, legal responsibility must still be attributed to people’s actions. Imagine a robotic limb executing a harmful movement in response to a flash of anger that the user would normally have suppressed and never acted upon. To what extent is the user responsible, and what consequences should follow? Although an extreme example, the tension between private thought and outward behaviour is frequently raised as a problem by those living with such technologies. Patients describe a “shared or hybrid agency” and wonder “how much is you anymore”.
Personality changes have been observed in some users, and given this impact on autonomy, their ability to consent to continuing or stopping a treatment may be affected. Non-communicative persons in a state of pseudocoma, in particular, have a significantly impaired capacity to offer informed consent. A conundrum arises if another person can terminate the treatment against the patient’s wishes, since this implies that the technology itself reduces one’s ability to decide for oneself. Philosophical assessment of such dilemmas is vital for understanding the effects of deep learning and for devising a moral methodology for distributing these treatments, with specific consideration given to the commercialisation of new devices.
Future Prospects and Considerations
The relationship between humans and AI has to progress in an ethical manner. Neuroethics remains true to its aim of maximising the benefits of emerging techniques while minimising their harms; however, the development of consumer tools is notoriously covert and subject to little oversight. This germinal stage in the advancement of brain devices is crucial for laying the foundations of an adequate approach to the mass production of BCIs for non-therapeutic use. Within the last two decades, the abrupt introduction of smartphones rapidly changed the nature of communication through social media, changing the way we see the world and, indeed, our reality.
The negative effects of this can be seen in the unanticipated rise of mental health issues and a growing gap in understanding between generations. Similarly, hasty implementation of reality-altering AI technologies could lead to complications that cannot simply be undone. Furthermore, the scientific community must educate the public more accurately through the media, in order to shape expectations of current BCI technologies and to ensure a general understanding of the implications of potential commercial gadgets. Fortunately, researchers are weighing the concerns mentioned in this brief essay, and many more, as they discover and apply innovations, learning from current studies and promoting interdisciplinary transparency.