Beyond the Slides: 5 Insights I Wish I’d Shared During My AI Ethics Talk at NYUAD
The added context, and other takeaways, about AI technology, ethical concerns, and human impact that I didn’t get to mention or expand upon.
Sharing the topic of AI Ethics with audiences is both a passion and a mission.
… a passion because, as a cyberpsychologist and qualitative futurist, my whole raison d’être is inspiring audiences to understand why the development of responsible technology is vital across so many domains of human life.
It’s also a devoted mission and the steadfast ‘North Star’ of the AI literacy nonprofit I’m launching (but more about that later).
As most talks are shaped and limited by time, there’s only so much one can convey and highlight within a given window.
Such was indeed the case with my recent NYUAD talk, where I spotlighted various AI Ethics ideas, challenges, and concerns for a scholarly and professionally diverse audience eager to immerse themselves in, and engage with, these thought-provoking topics.
In the spirit of expanding my comments, sharing additional context, and clarifying concepts, I’ve outlined five key points below that I wish I could’ve included or further explained while presenting at NYUAD.
Diving right in …
#1: AI Ethics & Cybersecurity: A Natural Tension
What was said:
As our moderator, Muhammet Bas, Associate Professor of Political Science at NYUAD, explained during his opening comments, the two topics to be presented, Cybersecurity and AI Ethics, are distinct discourses. This was a needed programming note, as it helped frame the contrast between the event’s two presenters and their material: myself and the esteemed James (“Jim”) Lewis, Senior Vice President, Pritzker Chair, and Director of the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS). When my turn to present came, I added that while the two topics were indeed distinct, they did share overlapping concerns and challenges, to varying degrees.
What I wish I had added:
Despite the solid overlaps the realms of Cybersecurity and AI Ethics may share, juxtaposing the two domains creates a natural tension, if you will. Cybersecurity is a vital field and crucial practice enlisted to protect electronic data against criminal or unauthorized use, and such protective measures are often entwined with strong geopolitical and industry interests. Yet geopolitical and industry goals aren’t always in strictest alignment with the central focus of AI Ethics, despite best intentions.
Why this matters:
Context is so important in understanding the natural tension that exists between these two domains, and this understanding becomes increasingly critical as societies around the globe face momentous and exponential challenges amid the continued digitization trends (with artificial intelligence leading the way) in which we find ourselves today.
#2: My Quilting Analogy
What I said:
In my opening comments, I explained that my research work is akin to that of a quilter: someone who invests time arranging and attaching scraps of disparate fabric to establish patterns and see overarching possibilities.
What I wish I had added:
Stitching together pieces of disconnected information, gleaned from scholarly and anecdotal sources (like the technology columns I cited), is a careful and ongoing process. Doing so allows me to identify the AI deployment trends and implementation themes that give rise to ethical complexities across our socio-technological landscapes.
Why this matters:
The exploration and discussion of ethical challenges posed by AI are neither distant practices nor abstruse ideas dissociated from the daily happenings across industry, government, and other vital sectors of human life.
#3: About “Co-Intelligence”
What I said:
I mentioned that the specific word “co-intelligence” (gleaned directly from the title of Ethan Mollick’s book, Co-Intelligence: Living and Working with AI) inspired very specific ideas for me while traveling to NYUAD. I explained that in the context of our interplay with AI, the idea of “co-intelligence” is accurate: we are “thinking in tandem” with AI. But the prefix (and definition) of “co-” tends to emphasize togetherness, much like a co-pilot works alongside a pilot. Both the pilot and co-pilot, I continued, might share flight goals, possess a similar volume of knowledge about how to fly a plane, and support each other as they cross-check and perform their tasks for departures and landings. BUT their thoughts and their thinking processes, no matter how jointly or in tandem they may occur, remain quite individualized. Thus, my point about the term “co-intelligence” is that while it is foundational in emphasizing how we “think jointly” with an AI, it insufficiently describes, in my view, the “external” cognitive process an individual experiences while thinking jointly with an AI.
Time willing, I would’ve added:
Right before my travels to NYUAD, I read an article by researchers from the Università Cattolica del Sacro Cuore that seems not only to agree with my “external” cognitive process ideas (in the context of our “thinking with” or alongside AIs) but to take them a step further by formally labeling the framework ‘System 0.’ The System 0 moniker, by the way, pays homage to the widely accepted cognitive theoretical concepts of System 1 and System 2 thinking (from Dual Process Theory, aka DPT). System 0, in turn, speaks to a new form of “external thinking” given our usage of AI as an “external thinking tool,” one that complements our System 1 and System 2 cognitive frameworks.
Why this matters:
These robust ideas and frameworks are so profound they demand their own article here on Medium (working on it 🙏🏽), but for the moment, and in the context of my NYUAD talk, they’re important to highlight because the mainstream topics that often dominate public discourse under the AI Ethics umbrella tend to center visibly on concerns about job displacement, existential crises facing humanity, and the like. I wanted to use the NYUAD opportunity to shed more light on a different, less visible and less discussed AI Ethics concern: the risks facing our cognitive autonomy and human agency.
#4: “We go out of our way to be creative…”
What I said:
Halfway into my talk, I shared a series of slides (three of which I’ve included above) to underscore just how much we, as a species, go OUT OF OUR TOTAL WAY to be creative.
Like billions of other living organisms, we are biological beings; yet we, unlike any other known living organism, don’t live our lives solely governed by biological processes or stimuli.
The three slides shown above and shared with the NYUAD audience underline these sentiments: we don’t just build basic shelters (like the content bear resting in its cave) or bow towards the sun (like plants do whenever they sense the direction of sunlight). We are intentional in our thinking and act with clarity, concentration, and creativity.
What I wish I had said:
As a cyberpsychologist, my focus here wasn’t to single us out as “creative” in terms of the arts but to highlight creativity both as an intensely human attribute and as a psychological construct distinctive to our species.
Why this matters:
It is this very “going out of our total way to be creative” human propensity that gave rise to the notion that “machines could think” like us, that “intelligence” could be mechanized, and that artificial intelligence could be designed to emulate how neurons work in the human brain. Moreover, the scholarly domain most credited with the creation of AI is cognitive science, and cognitive science is predominantly focused on computational-representational approaches to the mind. Such approaches tend to neglect or exclude the important roles emotions and human consciousness (including psychological constructs like creativity) play in the study of human thinking. Thus, by sharing such ideas with the NYUAD audience, my intention was to bring more balance and offer greater context beyond the over-emphasized ideas of “mind like machine” we so often hear in the discourse of AI development and innovation.
#5: About those glorified spreadsheets 🙏
What was said:
At the conclusion of the NYUAD event, the audience asked a series of questions, and in the course of those questions, my fellow speaker, Mr. Lewis, opined that while others may hold a different view, he regards the AI of today as akin to “a glorified spreadsheet” and/or a pattern detector on steroids (I’m paraphrasing his latter point). Additionally, Mr. Lewis shared that he didn’t see AI’s abilities changing beyond those capabilities any time soon. An audience member vocally agreed with these analogies, and I imagine others in the audience may have as well.
What I would have contributed, time willing:
I agree with Mr. Lewis’s fundamental ideas; AI is often touted for its tremendous computational capabilities, which are performed at speeds and scales well beyond any human being’s capacity. This is not in dispute. I would only like to offer more context to these ideas, starting with the notion that such views of what AI “only is” (at this time) do not appear to align with what the Altmans, Musks, and Zuckerbergs of the AI frontier world set out to build, or believe they’re unleashing, both for today and across our tomorrows. The investments such companies have made to create the AI technologies of now (and those of our future) run into the millions if not billions, and the processing power required every single time we ask an AI agent to “think” or “compute” for us is as extractivist as it is resource-expensive. After vast efforts in innovation, exponential investments in the computational infrastructure for training massive language models, and volumes of research and development by highly specialized teams to deploy the “thinking” AI of today, I suspect AI developers like OpenAI et al. would be unsatisfied, or miffed, to learn their AI agents, ChatGPT among them, are regarded by some as mere Excel spreadsheets on steroids.
Why this matters:
Be that as it may, and as shared during my talk, whether AI technologies are “good” or “not (as) good” is not my area of expertise, nor is determining how they work or don’t work in their current states, per se. And though I do agree the computational prowess of today’s AI is absolutely “on steroids” in contrast to our own feeble-in-comparison computational abilities, I think there’s more intention and innovation surrounding the AI-of-now than statistical computation or en masse pattern recognition. We’ll have to see how AI continues beyond these initial stages of innovation, and I’m eager, like many others, to see how AI advancements unfold in the coming years, especially as processing power for AI improves.

That all said, I’m a cyberpsychologist, and thus my focus is not so much on the technological underpinnings but rather on how our own human minds and subjective experiences are influenced or affected by the technologies we use or keep, including AIs. There are plenty of savvy folks already intently hyper-focused on what AIs can or can’t do (right now), but I find far less attention is paid, or as visibly given, to understanding and describing how these technologies make us FEEL or BEHAVE, both as individuals and as vital members of whole societies.

During his talk, Mr. Lewis reminded us there have always been bad actors online, and he shared powerful examples. But beyond the bad actors, I myself am keenly focused on how our cognitive and emotional states, our morality and collective values (aka our ethics), and our overall wellbeing are influenced or shaped by our relationships and constant interplay with cybertechnologies, most notably the latest neurocomputational technologies, and AI in particular. This is the crux from which I hang my research hat and from where I’ll continue my specific focus moving forward.
I hope the ideas and information I shared with the NYUAD audience remind folks to stay vigilant, curious, and critical; to ask probing questions; and to welcome insights well beyond the bounds of industry or political headlines as they seek to learn more about AI’s progress and its ethical impacts on our cognitive, affective, and collective experiences. This latter point is essential to better understanding the influences and ramifications such technologies so often incur across swaths of socio-economic strata around the globe, intentionally or not. Thus, gaining a deeper, human-centered perspective on AI is essential for tackling the rising risks and ethical challenges our interactions with these technologies introduce, both in our personal lives and, often more acutely, worldwide.
I’d like to profusely thank Mr. Lewis for being my co-presenter, Professor Bas for moderating our conversation (and the attendees’ questions) with engaged curiosity, and NYUAD for creating amazing opportunities such as this to raise awareness, share important ideas, introduce invaluable perspectives, and allow for rich, textured discourse across timely Cybersecurity and critical AI Ethics topics.
References
Chiriatti, M., Ganapini, M., Panai, E., Ubiali, M., & Riva, G. (2024). The case for human–AI interaction as system 0 thinking. Nature Human Behaviour, 8(10), 1829–1830.
Cognitive science. (2023, January 31). In Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/cognitive-science
Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio/Penguin.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645–665.
Thanks for reading!
I write about our human-technology interactions, social-technological trends, mediated technologies, and a range of cyberpsychological subjects. See my Medium writings for other articles of interest 🙏
let’s connect👇🏽
__ inquiries? email me at cyberpsychologist@ruizmcpherson.com
__ more about me? check out cyberpsychologist.media
__ on social? find me on LinkedIn, Instagram & here on Medium