
David Sacks, the AI threat is not *necessarily* overblown

September 2024

The All-In Podcast has been one of those constants across my late undergraduate years and early career. Whether it was classes on economics, my time in investment banking, or now working in tech, their insights are genuinely unique, and it's one of those shows that always makes me go "huh, I never thought about it that way." I admire all the besties greatly.

In one of the episodes I watched, David Sacks commented that the existential threat posed by AI is largely speculative and driven by fear-mongering. For some reason I have a visceral reaction to the term "fear-mongering" in both directions, so I thought I'd dive deeper into this.

In the episode, which discussed regulatory overreach surrounding AI development, Sacks argued that the narrative of AI as an existential threat is often used to justify unnecessary regulations that could stifle innovation and technological progress. It needs to be said that this is my interpretation of his stance; he may not have meant it exactly the way I'm reading it. At the same time, if this is indeed what he means, it seems like a dangerous blanket statement, and imo it requires a lot more nuance than was provided.

Firstly, I think Sacks is underestimating the pace at which AI actually evolves. Recently, we have seen the development of AI systems that can learn and improve autonomously, and this advancement is progressing faster than a lot of us expected. The potential for these systems to exceed human intelligence and make decisions that could be harmful at scale is a genuine concern. Sacks' argument that AI fears are overblown doesn't account for the unprecedented nature of this technological shift and the challenges it presents in terms of regulation, ethics, and control. I realise deferring to expert opinion is lazy, but so many founders and investors with skin in the game have warned about leaving AI uncontrolled, including "friend of the pod" Elon Musk. At this point it's not a coincidence or a lazy appeal to authority, imo. We should at least hear out the people working on these applications day in and day out.

Also, Sacks focused heavily on the economic and industrial impacts of AI but largely didn't address the broader ethical and societal implications. AI's potential to amplify existing inequalities, reinforce biases, and even undermine democratic processes raises issues that require immediate attention. These are not speculative concerns; they are grounded in where the tech is headed.

As I said before, I don't think Sacks is entirely wrong, btw. Overregulating any technology will hinder innovation. At the same time, though, a healthy dose of skepticism is important to rein in the possible negative implications of a technology that is still essentially a black box nobody really understands. Keeping with the theme of nuance: there is overregulation, no regulation, and then some regulation. AI at this time probably belongs in the last bucket. I'm willing to accept (some) stifled innovation if it means we develop the technology in a sustainable way and actually come out the other side in control of AI and understanding it.