Benedict Evans, Gen AI and the Future!
I have been following Evans for 5+ years, and I don't think there is a better macro thinker of our time - below is his take on AI and the future of the tech economy.
Benedict Evans - The introduction
When asked what I aspire to achieve with my writing journey on Substack, the answer is simple: to develop the ability to analyze data and trends with the same depth and clarity as Benedict Evans. Evans has a remarkable talent for connecting the present with the future, deciphering where technology is headed and why it matters. In today's rapidly changing world, making sense of the tech landscape is challenging, either because new advancements come at a dizzying pace or because adoption lags in the areas where it could make the most significant impact. Instead of relying on outdated sources, I find Evans' insights invaluable: he acts like a "macroeconomist for tech," providing clarity on what's shaping our future.
Evans' work offers a fresh and insightful lens on some of the most significant technological shifts, from media to AI to post-COVID software adoption. He doesn’t just report on trends; he unpacks the complexities, helping us understand how technology reshapes industries and society over time. His ability to anticipate and explain the trajectory of innovation sets him apart, making his analyses a must-read for anyone trying to keep up with the evolving tech landscape.
Here is where you can learn more about his writings: https://www.ben-evans.com/
Today, I watched the Benedict Evans video on AI and the future, and this is my summary of that talk. In this brief but insightful talk, Evans offers a critical synthesis of what’s truly at stake when we take a macro view of technology adoption—looking both backwards to understand the patterns and forward to anticipate what’s coming next. Let’s get straight into it!
The AI of yesterday is just the software of today!
The 1970s: Historically, any sufficiently complex system, such as the early databases of that era, was labelled AI; today, we simply call it software. Evans illustrates this evolution with an example from the insurance industry, where calculations once managed by entire teams of actuaries have been taken over by tools like Excel, automating complex work and transforming industries such as finance over the past few decades.
The 2010s: The early days of machine learning (ML) were marked by basic pattern-recognition applications, like identifying cats and dogs in images. Although initially limited and lacking solid use cases, these capabilities laid the groundwork for significant advancements. Over the past decade, pattern recognition has expanded into natural language processing, image recognition, and text and voice recognition, becoming integral to modern software.
Given that early breakthroughs in pattern recognition brought about unprecedented changes in business operations, a similar transformation will occur with AI as companies evolve to adopt the latest technologies. Integrating AI more deeply into business practices will pave the way for optimized processes and entirely new ways of working, reshaping industries once again.
He explains why the evolution of new technology from "new and cute" to "great and impactful" takes time with this quote:
When technology is new, we force it to work in ways we already know; but as technology evolves, we change the way we work to make the best use of it.
Where is GenAI today?
Quite simply, what began as pattern recognition for images is now evolving into reason recognition. Just as we built systems to identify cat pictures, we are now attempting to replicate that process for decision-making and outputs. But how will this transition unfold?
In the early stages of machine learning (ML 1.0), we fed computers millions of images of cats to train them. For ML 2.0, as we focus on reason recognition, our approach shifts: we assume that by providing large language models (LLMs) with all of humanity's outputs, they can figure out the reasoning themselves.
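To make the contrast concrete, here is a minimal sketch (my own illustration, not from Evans' talk) of the two training objectives in PyTorch: ML 1.0 learns from human-provided labels ("this image is a cat"), while ML 2.0 learns by predicting the next token of text, so the data itself supplies the labels. The models, shapes, and data here are toy placeholders.

```python
# Toy contrast between the two training objectives; both reduce to
# cross-entropy over a probability distribution.
import torch
import torch.nn as nn

# --- ML 1.0: supervised image classification (cat vs dog) ---
image_model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 2))  # toy classifier
images = torch.randn(8, 3, 32, 32)          # a batch of 8 tiny fake images
labels = torch.randint(0, 2, (8,))          # 0 = cat, 1 = dog (human-provided labels)
loss_ml1 = nn.functional.cross_entropy(image_model(images), labels)

# --- ML 2.0: next-token prediction over text ---
vocab_size, dim = 1000, 64
token_model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
tokens = torch.randint(0, vocab_size, (8, 16))   # 8 sequences of 16 token ids
logits = token_model(tokens[:, :-1])             # predict each following token...
loss_ml2 = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)  # ...the "label" is simply the text itself, no human annotation needed

print(loss_ml1.item(), loss_ml2.item())
```

The design point is that the machinery barely changes between the two eras; what changes is where the labels come from, which is why feeding LLMs "all of humanity's outputs" scales so dramatically.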
On the one hand, you might have experienced ChatGPT delivering seemingly confident and validated answers when asking about a new travel itinerary or exploring a physics concept. The information can appear more accurate than what even a well-informed friend might provide, but it often comes with caveats. Evans shares his experience researching his own biographical information, which seemed correct but was not entirely accurate. On the other hand, companies today are successfully employing GenAI to help engineers write code, solve mathematical models, and tackle other well-defined problems. How do we reconcile this difference in model reliability?
It's not that ChatGPT is lying; rather, it operates as a probabilistic matching system, generating responses based on what Evans' history and profile are likely to look like, and achieving a level of plausibility that can sometimes mislead, as summed up in the slide he shared. Notice how Evans explains this dichotomy of GenAI by describing what it is not, and only then what it perhaps is.
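As a toy illustration of that probabilistic matching (my own sketch, not Evans' slide), consider a model that simply samples the next word from a probability table. It will always produce a fluent-sounding completion, whether or not the underlying fact is right; the words and probabilities below are invented for the example.

```python
# A made-up next-word distribution: the model picks a statistically likely
# continuation, it does not look anything up.
import random

next_word_probs = {
    ("Benedict", "Evans", "worked", "at"): {
        "Andreessen": 0.55,   # plausible continuation
        "Google": 0.25,       # equally fluent, possibly wrong
        "NASA": 0.20,         # still fluent, almost certainly wrong
    }
}

def complete(context):
    options = next_word_probs[context]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

print("Benedict Evans worked at", complete(("Benedict", "Evans", "worked", "at")))
```

Every run prints a confident-sounding sentence; nothing in the mechanism checks whether the sentence is true, which is exactly the gap Evans describes in his biography example.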
Reflecting on ML 1.0, we can say that machine learning provided us with many "interns" to handle tasks, while ChatGPT seems to offer better guidance and organization. In Evans' framework, he describes two distinct use cases:
Use Case 1: In this scenario, running multiple iterations leads to the most definitive answers, akin to executing SQL queries or writing code. Accuracy is paramount, and there's typically a clear, correct answer.
Use Case 2: This one operates differently: no option is inherently wrong. When seeking a travel itinerary or brainstorming brand slogans, the goal is to generate many choices, essentially automated interns delivering 1,000 options for you to consider (see the sketch after this list).
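Here is a hedged sketch of how these two use cases map onto ordinary LLM API parameters, using the OpenAI Python SDK as an example; the model name and prompts are placeholders, and nothing here comes from Evans' talk. Use Case 1 wants the single best answer (low temperature, one completion); Use Case 2 wants many varied options (higher temperature, several completions).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Use Case 1: one correct answer, so keep sampling as deterministic as possible.
sql = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0,        # minimise randomness; we want the single best completion
    messages=[{"role": "user",
               "content": "Write a SQL query that returns the top 10 customers by revenue."}],
)

# Use Case 2: no single right answer, so ask for many varied options instead.
slogans = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=1.0,      # allow more randomness for variety
    n=5,                  # several candidates, like a batch of "interns"
    messages=[{"role": "user",
               "content": "Suggest slogans for a sustainable travel brand."}],
)

print(sql.choices[0].message.content)
for choice in slogans.choices:
    print("-", choice.message.content)
```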
So far, so good. Where is GenAI going from here?
Beyond the excitement surrounding AI on platforms like Hacker News, in YC-funded companies, and even in Nvidia's soaring stock price, something genuinely transformative is occurring, yet today's enthusiasm feels different from the wave that greeted the iPhone when Steve Jobs unveiled it. Evans employs the S-curve model to depict the phases of technology adoption: initial excitement, a tedious growth phase, and eventual widespread acceptance. GenAI is currently at the excitement stage, marked by increasing adoption and interest.
He lays out three possible shifts in what AI could mean for us:
Platform Shift: This occurs every 10-15 years (e.g., the iPhone) and marks a significant technological transition.
Fundamental Change: Similar to software, this shift happens every 30-40 years, as Bill Gates articulated.
The Path to AGI: A transformative impact on our lives that could span a century.
The lower end of this spectrum indicates a platform-based evolution, while the higher end suggests a pathway to AGI. However, Evans doesn’t provide definitive answers; instead, he frames critical questions that invite deeper exploration over the years.
To illustrate why this moment is pivotal, Evans recalls Bill Gates' assertion about how GUIs revolutionized user interaction, shifting from arcane command lines to a more intuitive point-and-click approach. While GUIs enhanced productivity, building software for them still required specialized skills. With GenAI, we are entering a paradigm that extends beyond GUI adoption: imagine instructing a computer in a few lines of plain language and having it build an app, draft a college application, or help you learn a new technology. This suggests we may be moving past a mere platform shift.
Evans envisions generative AI as a versatile tool, akin to a blank canvas or an Excel spreadsheet, where users can create and customize solutions tailored to their unique needs. However, this raises an important question: How do we accurately identify our true requirements? This dilemma mirrors the evolution from generic software like Excel—originally used for basic calculations—to specialized SaaS products designed for specific tasks, such as project management tools or CRM systems.
On the flip side, Evans posits a future where the capabilities of generative AI may eliminate the need for numerous standalone apps. Instead, we could leverage a few powerful, general-purpose tools to achieve a broader range of functions. For instance, imagine using a single generative AI platform to draft marketing content, analyze data, and manage customer inquiries rather than relying on separate applications for each task.
As we weigh the possibility of AGI, we must grapple with this technology's double-edged impact on society, reminiscent of themes explored in films like I, Robot. Evans' insights leave us both puzzled and excited, inviting us to ponder how generative AI will reshape our world and redefine the nature of work.
Here is the original link to the video if you are keen to hear it from the author: