The Whitepill on AI

2024/05/01

John

Introduction

Unless you’ve been living under a rock for the last year, you’re likely all too familiar with the AI hype train dominating the technical corners of the internet. If you are a student in a software-centric field, a recent college graduate, or even someone well into their career, you may be having doubts about how your career might be affected by the recent innovations in the AI space. Hopefully this post will help allay some fears and shift your concern in a more optimistic direction!

Most of the concern around the recent developments in AI revolves around Large Language Models (LLMs) replacing knowledge work, specifically engineering, software, and related fields. Additionally, the seeming march towards “Artificial General Intelligence” (AGI) and the ensuing “Singularity” are thought to signal the end of our scarcity-driven economy and of human work as we know it (or our complete enslavement by the machine god, depending on your perspective!).

While I can’t comment on the latter, or on what it will mean for us if (and it’s a big IF) humans ever give rise to AGI, I can tell you, based on working knowledge of the former, that we are nowhere close to creating AGI, and the end of human-based knowledge work is not coming anytime soon.

It may help the reader to establish some basic definitions and a conceptual understanding of how LLMs fit into the overall landscape of “AI”. The Venn diagram below succinctly illustrates how some of these common buzzwords relate to each other:

[Venn diagram: AI ⊃ Machine Learning ⊃ Neural Networks ⊃ Generative Models ⊃ LLMs]

As you can see above, generative models (of which LLMs are a subset) are a subset of “Neural Networks”, which are a subset of “Machine Learning”, which is a subset of AI. Congratulations, you are now an AI expert and can post about it on X/Twitter.

Very few people on the AI hype train (and in general) have even a basic conceptual understanding of how these LLMs work. A great (and free) primer on the field is the Deep Learning book published by MIT Press. The web version is absolutely free and, if mastered, will give anyone the skills they need to enter the field.
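To make that “conceptual understanding” a bit more concrete, here is a deliberately tiny sketch (my own illustration, not from any particular library) of the core trick a language model performs: predict the next token from the ones before it. Real LLMs use transformers over subword tokens and billions of weights, but the task is the same.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a character-level bigram model. Real LLMs use
# transformers over subword tokens and billions of weights, but the core job
# is the same: given some context, sample a plausible next token.
corpus = "the model predicts the next token given the tokens that came before"

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

# "Generate" text by repeatedly sampling a likely next character.
text = "t"
for _ in range(50):
    followers = counts[text[-1]]
    if not followers:                       # character never appears mid-corpus
        break
    chars, weights = zip(*followers.items())
    text += random.choices(chars, weights=weights)[0]

print(text)   # plausible-looking gibberish, no "understanding" required
```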

For the purposes of this post, it may also help to define what I mean by AGI. The definition we will be using is this:

“An intelligence that can understand, learn, and apply knowledge across a broad range of tasks at a level equivalent to or exceeding any human.”

OK, now that we have established some definitions and linked some resources to get us all on the same page, let’s get into why I believe all our software jobs (MIS majors or JS devs excluded) are safe. Just kidding, MIS people! Here are your whitepills:

1st dose - Generative Models are NOT AGI

No matter what anyone tells you, no matter how many AI grifters try to sell you on their products, no matter how hard the OpenAI CEO tries to raise capital: generative models are NOT and never will be AGI. They are highly specialized for specific tasks and lack the broader understanding, adaptive reasoning, and problem-solving abilities of a human. They may eventually be a small part of a larger system that could be considered AGI, but they are not and cannot be AGI in and of themselves. The training (aka learning) cycle for models like GPT-4 spans weeks to months across multiple datacenters’ worth of hardware. For an artificial system to operate at a level approaching a human, it would need to perform this learning process in real time. We can learn and adapt in real time; generative AI currently cannot, due to some pretty significant constraints. Some experts in the field believe LLMs to simply be “Stochastic Parrots”. A great lecture from one of these experts can be found here: The Debate Over “Understanding” in AI’s Large Language Models. Interestingly, two of the Google researchers who co-authored the paper that coined the term lost their jobs there amid the dispute over publishing these criticisms.

2nd dose - We are constrained on compute

The current generation of generative models takes months to train, in the most powerful datacenters in the world, on the most bleeding-edge hardware. The semiconductor industry (where I currently work) is playing catch-up. Supply chain issues, tariffs, legislation, and manufacturing logistics all throw a wrench into the semiconductor supply. It is already difficult to source enough compute hardware, and as these models get larger, this problem is only going to get bigger. OpenAI has yet to turn a profit, and running these models is insanely expensive. OpenAI’s CEO has reportedly floated a figure as high as $7T (yes, trillion) of investment across chip manufacturing and related infrastructure to relieve the compute constraints.
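To get a feel for the scale, here is a rough back-of-envelope sketch using the common approximation from the scaling-laws literature that training compute ≈ 6 × parameters × tokens. The GPT-3-scale figures and the assumed GPU throughput are ballpark assumptions, not official numbers.

```python
# Rough back-of-envelope, not official numbers. The approximation
# "training FLOPs ~= 6 * parameters * tokens" comes from the scaling-laws
# literature; GPT-3-scale figures are used because GPT-4's are not public.
params = 175e9            # GPT-3 parameter count
tokens = 300e9            # roughly GPT-3's reported training token count

train_flops = 6 * params * tokens                 # ~3.15e23 FLOPs

# Assume a single A100 sustains ~150 TFLOP/s on this workload (well below
# its theoretical peak) -- this throughput figure is an assumption.
gpu_flops_per_sec = 150e12
gpu_days = train_flops / gpu_flops_per_sec / 86_400

print(f"total training compute : {train_flops:.2e} FLOPs")
print(f"single-GPU time        : {gpu_days:,.0f} GPU-days")
print(f"on a 1,000-GPU cluster : ~{gpu_days / 1_000:.0f} days, ignoring overhead")
```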

3rd dose - The ARC

The Abstraction and Reasoning Corpus (ARC) was created by François Chollet, an AI researcher at Google. He also created Keras and has worked on TensorFlow (I think he knows what he’s talking about). His description of ARC is as follows:

“ARC can be seen as a general artificial intelligence benchmark, as a program synthesis benchmark, or as a psychometric intelligence test”

The current generation of generative AI cannot pass this test, while the average human can pass it easily. It turns out that generative models are extremely bad at “zero-shot learning”. Zero-shot and few-shot learning are things humans are really good at and implicitly rely on in many technical careers.
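To give a flavor of what these tasks look like, here is a tiny sketch following the JSON layout of the public ARC repository. The specific task (mirror the grid left-to-right) is invented for illustration; the point is that the solver gets only a couple of demonstration pairs and has to induce the rule on the spot.

```python
import json

# Shape of a single ARC task, following the JSON layout of the public
# fchollet/ARC repository. This particular task is invented for illustration:
# the hidden rule is "mirror the grid left-to-right". A human induces it from
# two demonstrations; a generative model gets no gradient updates at test time
# and must solve it "few-shot" from the prompt alone.
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0], [0, 4]], "output": [[0, 3], [4, 0]]},
    ],
    "test": [
        {"input": [[5, 0], [6, 0]]}   # expected output: [[0, 5], [0, 6]]
    ],
}

def mirror(grid):
    """The rule a human would induce from the train pairs."""
    return [list(reversed(row)) for row in grid]

# Check the induced rule against the demonstrations, then apply it.
for pair in task["train"]:
    assert mirror(pair["input"]) == pair["output"]

print(json.dumps(mirror(task["test"][0]["input"])))
```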

The ARC is a valiant attempt to benchmark intelligence, and one of the best tools we have for testing artificial systems. However, we still don’t have a satisfactory general definition of intelligence or consciousness. Is it something like the human brain? Is intelligence an emergent property of a complex system? Some researchers believe this to be the case. Let’s assume it is: to reach something like human intelligence, we would need to create a system with complexity comparable to the human brain. Generative models are nowhere near that complex. GPT-3 has ~175B parameters (OpenAI has not disclosed GPT-4’s count). The human brain has on the order of 86 billion neurons and roughly 100 trillion synaptic connections between them; in information storage and processing ability, it’s like comparing a typewriter to a supercomputer.
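Some rough arithmetic on that comparison (the brain figures are widely cited estimates, and GPT-3’s published count stands in for GPT-4’s undisclosed one):

```python
# Orders of magnitude only. The brain figures are widely cited estimates,
# and GPT-3's published parameter count stands in for GPT-4's undisclosed one.
gpt3_params = 175e9        # trainable weights in GPT-3
brain_synapses = 1e14      # on the order of 100 trillion synaptic connections

print(f"synaptic connections per GPT-3 parameter: ~{brain_synapses / gpt3_params:.0f}x")
```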

4th dose - We may be constrained on training data

Now that LLMs are available to the general public, there is a growing amount of artificially generated text on the internet. New training datasets may end up containing more of this text, negatively affecting the quality of the training data. This can create a feedback loop where models are learning from their own outputs. Ensuring training data is high quality will be a growing concern going forward.
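Here is a toy illustration of that feedback loop, sometimes called “model collapse” in the literature. The “model” is just a normal distribution, and I’m assuming each generation slightly favors its most probable outputs, the way low-temperature sampling does; the diversity of the data then decays generation by generation.

```python
import random
import statistics

# Toy illustration of the feedback loop sometimes called "model collapse":
# each "generation" is fit only to data produced by the previous generation.
# The "model" here is just a normal distribution, and we assume generation
# slightly favours high-probability outputs (temperature 0.9), as LLM
# sampling often does. Diversity in the data shrinks generation by generation.
random.seed(0)
TEMPERATURE = 0.9
data = [random.gauss(0.0, 1.0) for _ in range(2_000)]   # generation 0: "human" data

for gen in range(8):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {gen}: stdev of training data = {sigma:.3f}")
    # the next generation trains only on this generation's (slightly
    # over-confident) samples
    data = [random.gauss(mu, sigma * TEMPERATURE) for _ in range(2_000)]
```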

5th dose - More parameters != better performance

I have built several neural-network projects, both from scratch and with frameworks like TensorFlow. One was for predicting electrical load in grids powered by renewable energy sources. Another predicted the price of a cryptocurrency based on news sentiment: it analyzed news articles from various sources to gauge the overall sentiment of each article along with the relevance of the source, then fed that sentiment index, together with prior price action, as inputs into a neural network to try to predict future price action. In every case, more parameters did not necessarily result in better performance. More parameters can result in “over-fitting” to the training data, and you end up with worse performance on data the model hasn’t seen. GPT-4’s successors are expected to keep growing in size, while some users feel GPT-4 actually performs “worse” than GPT-3.5 on certain tasks.
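Here is a minimal sketch of the over-fitting effect using plain polynomial regression (my stand-in for a neural network, purely for illustration): as the parameter count grows, training error keeps falling while error on held-out data gets worse.

```python
import numpy as np

# Sketch of "more parameters != better performance": fit polynomials of
# increasing degree to 20 noisy points from a cubic. Training error keeps
# falling, but error on held-out data rises once the model starts
# memorising the noise (over-fitting).
rng = np.random.default_rng(0)

def target(x):
    return 0.5 * x**3 - x

x_train = np.linspace(-2, 2, 20)
y_train = target(x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(-2, 2, 200)
y_test = target(x_test) + rng.normal(0, 0.3, x_test.size)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```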

6th dose - Learn to use them to your advantage

LLMs are an amazing innovation, and you should be familiar with them. Knowing how to take advantage of these tools will make you more effective in virtually any technical role. Think of them as an enhanced Google search: ask them questions when researching a new topic, use them to generate boilerplate for projects, or have them help you debug a problem. Use them to help yourself become a 10x engineer. LLMs should be a force multiplier for anyone working in software. They are simply another tool in your toolbox.
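As one example of treating them as just another tool, here is a minimal sketch of scripting boilerplate generation with the OpenAI Python client. Any chat-style API, hosted or local, follows the same pattern; the model name and prompt are placeholders, and you should review whatever comes back before committing it.

```python
# Minimal sketch of scripting an LLM for boilerplate. This uses the OpenAI
# Python client (v1.x) as one example; the model name and prompt are
# placeholders, and OPENAI_API_KEY is assumed to be set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",   # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a senior Python engineer."},
        {"role": "user", "content": "Write a minimal pytest file for a function "
                                    "slugify(title: str) -> str."},
    ],
)

print(response.choices[0].message.content)   # review it before you commit it!
```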

7th dose - Software Engineering is more than just writing code/generating text

Until an AI system can gather requirements, interpret those requirements, identify issues with those requirements (how many customers even know what they want?), QA and test, manage the project, communicate with stakeholders, implement security best practices, ensure compliance with company policies and with the law, and make strategic decisions (you start to get the idea), humans are not going anywhere.

You may have seen tools like Devin and gotten worried that the end is nigh. A great counter to this is the “Lump of Labour Fallacy” (or Labor if you’re an American 😎 🇺🇸): the fallacy that there is a fixed amount of work to be done within an economy, so if something makes labor more efficient, there is less work to be done and therefore fewer jobs. In reality, the demand for software is growing; there are more projects than ever, and real, actual people are needed to develop and maintain them. The technology industry is cyclical and ebbs and flows like any other, and 2024 is expected to show a lot more growth. I can tell you from personal experience that there are loads of business-critical (sometimes life-or-death critical) legacy projects in weird environments that a business will likely never trust an LLM to run loose in. Devin is nothing more than an interesting tech demo at this point.

8th dose - If Software Engineers don’t have jobs, no one will have jobs

The day we don’t need software engineers anymore, everyone will be out of a job. We will have hit the singularity. Hopefully it’s the Star Trek universe and not the Terminator universe. Catch me on the holo-deck.

Don’t be a doomer: LLMs (or AGI) aren’t taking your job anytime soon. In the meantime, there is meaningful work to be done, problems to be solved, and cool shit to learn. We have a long way to go!