Neurotech’s Own Ethical Considerations Beyond the Scope of AI Ethics

AI ethics has become a robust and fast-growing discussion, one branching into varied ideas in the contexts of technology policy and regulation.

As those important conversations unfold, IBM researchers Sara E. Berger and Francesca Rossi (both AI ethics global leaders at IBM’s Thomas J. Watson Research Center in New York) co-authored an article for the Association for Computing Machinery (ACM) emphasizing the need to expand the focus of AI ethics to incorporate key aspects of neurotechnological ethics.

You can read their contributed article, “AI and Neurotechnology: Learning from AI Ethics to Address an Expanded Ethics Landscape,” in full on the ACM website; here, I’d like to point out that both researchers make a strong argument for an AI ethics that includes ethical considerations related to the human brain and its neurodata.

Let’s define these important terms

Berger and Rossi (2023) classify neurotech as invasive or non-invasive across three specific categories:

The other key term to define and understand is neurodata: data collected from, and about, the human brain and nervous system.

Ethical thoughts worth noting

No matter how well-intended Berger and Rossi (2023) — and other researchers like them — might be in helping us expand our understanding of AI ethics and its correlations to neurotech ethics, it’s important to note the following:

the practice of neurotechnology has its OWN set of ethical concerns, completely independent of AI.

As Müller and Rotter (2017) point out, sticking wires onto (or inside) people’s heads (otherwise referenced as “technological interventions”) to see how their brains work is a discipline fraught with its own ethical considerations, including unintended impacts on a subject’s brain, their person, their sense of identity, or their personality.

For years, neurotech — a field still developing but showing growing promise for future treatment of neurological and psychiatric disorders — was generally limited to treating brain disorders such as Parkinson’s and epilepsy.

Yet as technology companies continue exploring how to scale and improve their proprietary AI’s performance en masse, neurotech tools and neurotech researchers are now being deployed for non-treatment purposes.

Closing thoughts

The AI bandwagon is here and the technology holds great promise.

But it’s crucial to keep an ongoing, close eye on a much wider array of ethical issues and challenges, especially when we’re talking about exploiting the mechanics of human brain functioning to develop profitable, non-medicinal technologies that, by and large, don’t set out to enhance or complement our thinking.

Instead, too many AI scenarios set out to reduce, compete with, and in some contexts replace, significant human cognitive activity.

Postscript

I’d like readers to keep in mind that AI ethics lingo can sometimes overshadow the human side of the human-machine interplay.

When discussing AI ethics, please keep the following in mind:

  • It’s not just about “data” (or “Big Data”) … it’s also about our collective “neurodata;”
  • It’s not just about “safety” but also our subjective well-being (generally defined as the health, safety, happiness, and comfort of individuals and/or communities); and lastly,
  • It’s not just about “control” and “access” but also our human autonomy and human agency.

Thanks for reading!

I write about our human-technology interactions, social-technological trends, mediated technologies, and a range of cyberpsychological subjects. See my Medium writings for other articles of interest 🙏

let’s connect👇🏽

__ inquiries? email me at cyberpsychologist@ruizmcpherson.com
__ more about me? check out cyberpsychologist.media
__ on social? find me on LinkedIn, Instagram & here on Medium

Written by Mayra Ruiz-McPherson, PhD(c), MA, MFA

Cyberpsychologist • AI Ethics • Qualitative Futurist • focused on affective computing, cognitive science, AI, humanoid robotics & human:machine relationships.
