HAL, AI, and the human condition: Insights from the M2 AI Summit
"Good morning, gentlemen. This is HAL 9000. I am now in control." Delivered by the sentient computer in 2001: A Space Odyssey, the line chillingly captures the 1968 vision of artificial intelligence seizing power. The discussions at Auckland's 2025 M2 AI Summit, however, were more real-world and less dystopian.
Unexpected intersections
As the tech-savvy founders and business leaders gathered, the discussions ranged from the pragmatic to the profound, revealing unexpected applications that spanned disparate fields such as finance, HR, and user-generated content for advertising. The 2025 M2 AI Summit in Auckland provided a nuanced and compelling glimpse into our burgeoning relationship with artificial intelligence.
Human versus AI?
The elephant in the room, articulated by Professor Michael Witbrock, and captured further in IT Brief, is the very real prospect of human displacement within the next decade. This isn't science fiction anymore. While humans excel at creativity and pattern recognition – those intuitive leaps of insight – we struggle with relentless multitasking and analysing truly massive datasets.
The question posed wasn't whether AI could erode our human pursuits, but whether we want it to. Do we cede our creative spaces, allowing ourselves to be replaceable, or do we steer AI towards compensating for our human weaknesses, such as multitasking, spotting subtle patterns, and illuminating hidden trends within vast datasets?
Guardrails in the cloud
This brings us to the critical need for secure Cloud Foundations – our digital "landing zones." Building robust infrastructure with alerts and controls is increasingly challenging to do manually, yet it's essential: guardrails that prevent security incidents and head off unexpected cost shocks from AI's insatiable data appetite. It's also about establishing boundaries. AI can bolster our security measures, but that very capability also presents a potential vulnerability.
The autonomous future
The current buzz around "Agentic AI" – systems that make autonomous decisions on your behalf, such as shopping online at Woollies or planning your next holiday – showcases the seductive power of this technology. Yet, as the summit underscored, even the most sophisticated AI models remain secondary to clear use case identification and effective change management. Many organisations still grapple with the fundamental "why" and "how" of technology adoption.
Strategy and structure
Without this strategic foundation, we risk drowning in "AI Goo" – content for content's sake, generated without purpose or insight. This digital deluge, the so-called goo, doesn't empower; it overwhelms, sucks time, and adds no value. Introducing frameworks and policies is crucial to help people, organisations, and teams harness AI's power without succumbing to unthinking reliance and digital distraction.
Underpinning all of this is the imperative to understand data. Generating and interpreting insights requires high-quality inputs, data controls, and relevance, along with statistical modelling, probability, and confidence intervals. But for brevity's sake, my handwritten notes read "shit in, shit out."
Playing catch-up
The regulatory landscape, however, remains persistently two steps behind. The current free-for-all of scraping proprietary content to train AI models, akin to Sam Altman’s recent legal challenges and the ethical concerns raised in this Guardian op-ed, echoes sentiments of the pot calling the kettle black. Artists and writers see their work recycled without consent, a digital echo chamber with no delete button.
Consent is explicit, time-bound and specific. Importantly, it can be revoked at any time. Upcoming changes to New Zealand’s Privacy Act, through the Privacy Amendment Bill, are driven by alignment with European standards, signalling a shift toward stronger data protection in New Zealand. The right to be forgotten will be a challenge.
Opting out?
The summit highlighted the genuine concerns around consent, security, and privacy in an AI-driven world. The ability to replicate documents and communications with alarming fidelity creates a minefield where authenticity becomes increasingly difficult to verify. We heard a recording of vishing – voice phishing – attempting to socially engineer login details out of a target: a reminder of the growing proliferation of scams, and of the need to help others spot them and know how to respond… or not.
The ease with which our information can be found – trails of digital footprints stitched together and manipulated – coupled with the sobering statistic that just three unique pieces of information can identify 87% of individuals, underscores the gravity of this challenge. But we're in a new era: requesting deletion, opting out, and burying your head in the sand aren't options. So, can data ownership be put back in our hands? One presentation highlighted digital wallets using blockchain or other cryptographic measures as a potential solution; another, digital twins holding our "digital soul".
From passenger to pilot
Yet, amidst the discussions of algorithms and agentic systems, it’s always the human moments that resonate. Over coffee, the discovery of bagpipe talents offered a reminder of the analogue world. Or the surprising question that lingers: Does the relentless march of AI make you yearn for the perceived safety of a bunker, or does it ignite a sense of wonder at the possibilities?
Perhaps the most crucial takeaway from the M2 AI Summit was this: while AI presents transformative potential, our journey with it must be guided by human values, by ethical and governance considerations, and by a clear understanding of both its capabilities and its limitations. We must avoid becoming unwitting passengers on HAL's digital journey and instead consciously pilot our course towards a future where AI truly augments, rather than diminishes, the human experience.