For the past year, I’ve been writing about the proliferation of Artificial Intelligence (AI) into our daily lives. But I’ve been doing it from the safety of my lo-tech desk and the comfort of my loose-fitting sweats. This week I ventured out into the real world of hi-tech movers and shakers by attending the Code Conference 2023 in Dana Point, California. It was a deliriously fascinating experience, not unlike an eager child entering Willy Wonka’s Chocolate Factory for the first time. There were flavors of AI I’d not considered before, and the sheer volume of possibilities set my mind surging forward.
In my last newsletter—before attending Code Conference 2023—I opened my discussion with these words: “For many people, Artificial Intelligence (AI) is like Chekhov’s loaded gun on the wall: it’s inevitable that sometime during the story it will go off. But will it go off in defense of an innocent person’s life, or will it be used to take an innocent person’s life? Or both?”
After attending the conference, those sentiments haven’t changed. In fact, I’m even more certain now that there is much about AI’s potential worth embracing, but I’m also just as sure that we need to proceed with caution. Further integration of AI into society must rest on a foundation of carefully considered regulations and laws that ensure its safety and integrity. And I was glad to hear from most, if not all, of the presenters that this was foremost on their minds.
In other words, we don’t want to build our home on quicksand that might swallow us up in the night.
That’s why conferences such as this are so important: not just to introduce new products that incorporate AI, but also to pose the challenge that Jeff Goldblum, as Dr. Ian Malcolm, delivers in Jurassic Park: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” The more conferences like Code Conference 2023 allow us to glimpse the potential wonders of the future while still asking what that future may cost, the better off we are.
As a first-timer at such a high-level conference, I spent a lot of time feeling like Alice through the looking glass. My initial reaction was what I call Shiny Object Syndrome, in which we’re so captivated by the glittering disco ball of inventiveness that we stare open-mouthed in delight. Part of our wonderment is that seeing such marvels triggers an inner hope for the future. Surely, if we can create such magical devices, we can also solve our social and economic problems. To quote Sportin’ Life in Porgy and Bess: “It ain’t necessarily so.”
A tool is still a tool, whether it’s a stick used by chimps to fish for termites or an AI-driven supercomputer that powers a continent. What we do with our tools depends on our creativity, morals, and ability to reason. It will take all three of those qualities to move forward with AI properly and safely.
A conference like this, filled with so many developers, inventors, and funders, is a fertile place for networking. Networking is often how great ideas become useful products. It’s the equivalent of a chef with innovative recipes finding the means to open that restaurant where we can all eat.
I was impressed by products that have been in development for a while, including Runway’s ability to animate photos and Google’s Project Starline, which uses hardware and software to create a kind of window where you can talk to 3D recreations of people as if they were in the same room.
I was also impressed by HBO’s commitment to not use AI in the writing, structuring, or creation of shows. That’s important to me as a writer and as a human being. In his 1946 book Confessions of a Story Writer, Paul Gallico (The Poseidon Adventure) wrote: “It is only when you open your veins and bleed onto the page a little that you establish contact with your reader.” That’s not going to happen with AI.
The most cautionary element of the conference was the use of AI in social media platforms. Many news stories have detailed the abuses AI makes easier, including the widespread and relentless dissemination of misinformation to undermine democracy. Some of this was addressed by two prominent figures in social media. Yoel Roth, the former head of trust and safety at Twitter who quit after Elon Musk took control of the company, recounted how Musk spread lies about him, resulting in death threats that forced him to sell his home and move. The interview with Linda Yaccarino, the CEO of X, formerly known as Twitter, did little to address those claims or reveal a clear course ahead for X on the issue of misinformation (“Linda Yaccarino’s wild interview at the Code Conference”). This is the area that most needs regulation.
As many of you know, I am a historian at heart, having written several books about pivotal times and people that changed the course of history. This is one of those times. And the conference was where I could see history pivoting in real time.