Birthday Time!
Nov 08, 2023 4:10 pm
Hey.
It's my birthday and you know what that means!
Every person I've never met or spoken to on LinkedIn is going to send me an automated birthday greeting!
Hooray! So sincere!
But I've got a gift for you that is real and heartfelt.
It's the first section of my 7th book, "Will AI Take My Job?"
- - -
Introduction
2023 may become known in the future as the start of the “Age of AI.” But is this the beginning of the end, or the start of a golden age of humanity?
And what does this mean for you? That’s the big question.
Is AI going to “take” your job?
The short answer is, “it depends.”
It depends on what kind of work you do. It depends on how your work is regulated and how private the data is. It depends on how much of your work happens physically in person, and on the environment where you do your job.
There are other factors like quality of connectivity, access to reliable power, security of facilities, access to hardware specific to AI systems, and many more.
The news telling you AI is going to “steal everyone’s job” isn’t helpful. It’s way more complicated than that. One thing is for sure: AI is going to change the workplace and the world as we know it.
But just by picking up this book, you have significantly improved your chances of keeping your job, being more competitive in your business, or learning how to manage the “Age of AI.”
This book is a combination of my personal experiences working hands-on with AI platforms, early access to developing tech, speaking with groups and audiences about AI and productivity, talks at AI conferences, and surveys on the topics we will cover.
You will walk away from reading this book with a vastly improved sense of the current state of AI, how companies and employees are approaching AI tools, how to best protect your future employment or business, and some wild predictions about the future that are sure to make you a hit at business gatherings.
I will let you know now that there will be some technical terminology scattered throughout this book, but it is unavoidable in the discussion we need to have to determine whether AI will take your job.
Let’s get back to the task at hand: talking about AI.
Seemingly out of nowhere, generative AI exploded on the scene in late 2022, but only gained mass adoption in 2023.
It appeared in the wake of waning interest in Web3 and cryptocurrency, brought on by multi-billion-dollar scandals and the lack of any day-to-day useful purpose in the eyes of the general population. But Web3 and blockchain are a discussion for another day.
Now, most information workers, students, artists, entrepreneurs, and marketers have heard of the most prominent AI systems: ChatGPT, Claude, Bard, LaMDA, Jarvis, Stable Diffusion, Midjourney, ElevenLabs, DALL-E, Descript, and many others.
When it comes to the general population, the more educated people are, the more likely they are to have tried AI. Also, the younger people are or the higher their income, the more likely they are to have tried AI.
In a May 2023 Pew Research survey, roughly half of US residents had heard of ChatGPT, but fewer than 15% had tried it.
For this book, I am taking for granted that you have a very basic knowledge of AI and have at least tried using an AI program. If you have not, take a few moments to give some free ones a shot.
Try ChatGPT or Bard, or download Pi on your smartphone and chat with it a bit. Maybe ask it to make you a recipe that includes three items in your fridge, to write a song in the style of your favorite musician, or to create an engaging presentation about your job.
Try an image generator. You can use Midjourney or DALL-E 2 to create an image. Make a unicorn in space or a group of dogs wearing business suits.
Maybe go to a site like ElevenLabs and type a sentence for an AI voice to read back to you.
You need to have an idea of what AI does and how good it is now. And remember: this is the worst AI you will ever use.
We will be talking about AI for productivity, how AI automation is threatening some industries, how AI agents can handle entire projects, and how AI developer systems can code applications.
We also need to have a discussion about AI ethics. What does it mean to “ethically train” an AI system? What data is public, protected, or private? What does copyright mean in a generative AI world?
And then, we’re going to look toward the future and talk about how embedding robotics and sensory devices with multi-modal AI systems is going to create autonomous workers: the robots of science fiction, which are being developed as I write this.
Also, no discussion about the future would be complete without talking about AGI (Artificial General Intelligence) and the possibility of creating a Superintelligence. What are the threats and benefits to humanity? How will AI companies, advocacy groups, and political groups make future systems safe, or “aligned” with the goals of humanity?
The future of AI is the collision of many technologies and discoveries. It’s been said that in 2023, there was more advancement in AI than in the previous 20 years.
AI is seeing truly exponential growth.
“In a properly automated and educated world, then, machines may prove to be the true humanizing influence. It may be that machines will do the work that makes life possible and that human beings will do all the other things that make life pleasant and worthwhile.”
Isaac Asimov, Robot Visions
Chapter 1: The Current State of AI in the Workplace
The only thing we know for certain is that work, as we know it now, will change.
Any work involving data analysis, software development, digital design, customer support, writing and editing, marketing, acting, game design, project management, virtual assistance, or borrowing and finance is about to change for good.
This is by no means an exhaustive list. There are countless other tasks and duties that could end up being automated, tasks that make up the majority of most people’s workdays.
This alone doesn’t mean we will lose our jobs. In many cases, it could improve them, relieving us of the repetitive, tedious, or arduous tasks holding us back from doing our best work.
However, some people cannot see the forest for the trees. By concentrating only on AI’s current state, they lack the experience or technical knowledge to grasp its full potential, risks, and implications.
Someone you know is going to tell you, “AI will never take my job!”
They refuse to believe it can happen, even if they work in one of the aforementioned industries. Besides, they are good at what they do, and a machine can’t replace them, dammit!
They may be right.
But I would wager that they should take a long, deep dive into what’s happening with AI before they strut their stuff so confidently.
Remember this: the current AI you are using is the worst AI you will ever use.
It only gets “better” from here. But “better” may not be the right word. I would say that “more capable” is a more accurate description.
Our brains are simply not designed to extrapolate the future in an exponential way. Because of this, it is easy for folks to misunderstand or simply be wilfully ignorant of what is to come.
To get some perspective on exponential growth in technology, we can look at video game consoles, something I know well, having worked briefly in the video game industry. We will compare the Atari 2600 (1977) to the recent Sony PlayStation, better known as the PS5 (2020).
Now, comparing an Atari 2600 to a PS5 is like comparing a bicycle to a rocket.
The PS5 performs roughly 235,294 times more computations than the Atari 2600 every second.
The Atari wasn’t even capable of rendering a single polygon (the little 3D triangles used in modern video games and in modern CGI for movies and television).
A PS5 can process roughly 10.28 billion polygons… per second.
If you were to count, one per second, every polygon created in a single second by a modern video processor, it would take you approximately 326 years.
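If you want to check that arithmetic yourself, here is the back-of-the-envelope version in a few lines of Python (assuming the rough 10.28 billion polygons-per-second figure and a counting pace of one polygon per second):

```python
# How long would it take to count one second of PS5 polygon output?
polygons_per_second = 10.28e9          # approximate PS5 throughput
seconds_per_year = 60 * 60 * 24 * 365  # ignoring leap days

years_to_count = polygons_per_second / seconds_per_year
print(f"About {years_to_count:.0f} years")  # about 326 years
```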
Just think: training an AI like ChatGPT-4 takes thousands of graphics processors, each more powerful than the PS5’s, running 24 hours a day for months at a time.
Comparing the first GPT model, GPT-1 (2018), to a current AI like GPT-4, Bard, or Claude-2 is also like comparing a bicycle to a rocket ship. A more accurate comparison would be the mental capacity of a toddler to that of a young teenager. And it took a lot less time to make that leap than it did to go from an Atari to a PS5; it only took a few years.
The speed of computation is unimaginable.
Wrapping our primate brains around exponential technology is a difficult thing. We’re not built to think that way.
An excellent demonstration of exponential growth is called “The Rice and the Chessboard.”
Imagine a simple chessboard (64 squares). You place a single grain of rice on the first square and double it on the second square: two grains of rice.
Double it again for the third square, placing four grains, and continue this process, doubling the number of grains for each subsequent square.
By the time you reach the end of the first row (8th square), you have 128 grains on that square. It might not seem like a lot at this point. But it would take a long time to count 128 grains of rice.
As with our Atari 2600 evolving into a PS5, the numbers become staggering.
By the 32nd square (only halfway through the board), you’ll be placing more than 2 billion grains of rice on that square alone, with over 4 billion on the board in total. By the time you reached the final square, the whole board would require more than 18 quintillion grains.
That is so much rice that the amount on your last chessboard square would be as big as a mountain. Not a mountain of rice, but a mountain of sand, with each grain of sand equalling all the rice that exists in the entire world.
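If you’d like to verify those numbers yourself, here is a small Python sketch of the chessboard doubling (the figures are exact; only which squares get printed is chosen for illustration):

```python
# Rice and the chessboard: one grain on square 1, doubling on every square.
total = 0
for square in range(1, 65):
    grains = 2 ** (square - 1)
    total += grains
    if square in (8, 32, 64):
        print(f"Square {square}: {grains:,} grains on this square "
              f"({total:,} on the board so far)")
```

The final square alone holds about 9.2 quintillion grains, and the whole board about 18.4 quintillion.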
The point I am making is that the “next AI” (the next GPT, or the newest version of an image, video, voice, or music-creating system) will far surpass the capabilities of the last one.
In our case, the “next” one is likely less than a year away. And the one after that less than two years away.
ChatGPT-4 is twice as capable as ChatGPT-3.
ChatGPT-5 will be four times as capable.
ChatGPT-6 will be 16 times as capable… and so on.
“This AI” – referring to all the current systems – is pretty good, but it isn’t great. The next one will be great. And the one after that will be amazing, and the next absolutely astounding, as long as we don’t hit some kind of unforeseen problem with scaling the technology.
The power, capability, and speed of these models will continue to grow. But is it for the better?
Author’s note: These are theoretical estimates of increased capability, but in testing, the models seem to be 5-10x as powerful as their predecessors.
A recent benchmark using more expansive testing found that ChatGPT-4 was 5x more capable than ChatGPT-3.5 and 10x more capable than ChatGPT-3. The models seem to scale by an order of magnitude, but we don’t know if this will stay constant.
If scaling by an order of magnitude stays consistent, then instead of GPT-6 being 16 times more capable than GPT-3, it will be one thousand times more capable.
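To see how much the scaling assumption matters, here is a quick Python sketch comparing the two hypothetical growth curves above; the multipliers are illustrative guesses from the estimates discussed, not measured benchmarks:

```python
# Capability relative to GPT-3 under two hypothetical scaling curves:
# squaring each generation (2x, 4x, 16x) vs. an order of magnitude per
# generation (10x, 100x, 1,000x).
squaring, magnitude = 2, 10
for generation in (4, 5, 6):
    print(f"GPT-{generation}: {squaring}x (squaring) "
          f"vs {magnitude:,}x (order of magnitude)")
    squaring **= 2
    magnitude *= 10
```

GPT-6 comes out 16 times more capable under the first curve, but one thousand times more capable under the second.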
The scaling of this technology is why people are concerned about the future of these systems. How much more capable do they need to be to do what you do?
That’s what we’re going to talk about.
Why Your Friend Bob Doesn’t Think AI Will Take His Job
Let’s talk about the nay-sayers and non-believers who refuse to believe that AI will change anything.
They are dead wrong.
I have seen the future, so to speak, and the capabilities of AI are unquestionably going to change life as we know it. I feel very privileged to have been given access to unreleased versions of AI software, and I am involved in a number of alpha and beta tests of upcoming products.
And let me tell you, I have seen some things that made me curse under my breath in disbelief. They are what I like to refer to as “indistinguishable from magic.”
A reference to the Arthur C. Clarke quote, “Any sufficiently advanced technology is indistinguishable from magic.”
But the nay-sayers are everywhere, trying to protect what they have by yelling from the rooftops that AI is evil magic, but also that it has no functional purpose.
But it can’t be both, Bob. Is it useless or is it going to take over the world? Make up your mind, Bob!
One of these nay-sayers attempted to publicly undermine my AI musings on a social media post. They commented, “If AI is so smart, why don’t you just have AI plagiarize your entire book for you.” Yuck, yuck. Good one, smart guy!
Setting the plagiarism misconception aside, you are probably curious to know whether this book was written by AI.
The answer is no. However, I used several AI programs in the process. I like to call it, “partnering with AI,” as I wrote about in my previous book, PEERtainment.
Yes, I used an image generator to help with the cover art, and ChatGPT-4 to help with the layout, the chapter titles, rewording complex sentences, and to help me explain difficult concepts with more clarity.
An AI was not used to write the book itself.
This is for a number of reasons:
- AI writing systems are good, but not THAT good, yet. I don’t feel that they are up to the standards of writing required for an educated and attractive reader, such as yourself.
- Ironically, AI isn’t up to date on AI advancements. The current Large Language Models (LLMs) you may be familiar with (ChatGPT, Claude, Bard, LaMDA, et al.) are only trained on information up to a specific date.
- Right now, in the USA for example, works created by generative AI cannot be copyrighted because the work was not created by a person. That will be important later as a reason why AI may not take your job.
The second point, that AI systems aren’t up to date on news and such, is important.
An AI system is similar to a history book in this way. It was trained on information up to a certain date, and once the model is published, it doesn’t know anything “newer” than what it was trained on.
(Author’s note: An AI can be given additional information in real time but there are a number of security and safety reasons they aren’t up to date on current events and research. We will talk more about this later in the book.)
That said, a system can be given additional context, or even be trained with additional data or private company data, but we will talk more about that later.
Now if you really want to have a coherent argument with your nay-sayer friend, Bob, you need some technical details.
We’re going to get a little bit technical here, but this understanding is vital to the rest of the book. It may just be your “Ah-ha!” moment.
Virtually every company on the planet was caught off guard by the release of ChatGPT, including many huge players in the Artificial Intelligence industry.
The technology behind the chat-interface for a “Generative Pre-trained Transformer” (hence ChatGPT) is revolutionary.
Never before had people been able to “chat” with an AI like this. Nor did anyone really think the general public, outside of data scientists and researchers, would ever want to.
- Generative because it generates language.
- Pre-trained because it was taught with hundreds of millions or even billions of pages of text and data.
- Transformer is the type of neural network used to build the AI model. I won’t get into the software engineering side of it, but let’s just say it’s complicated.
The year 2023 marked a significant turning point in AI advancements, with more progress than the combined developments of the prior two decades.
This surge is now fueled by global investments amounting to hundreds of billions of dollars.
Additionally, the convergence of certain pivotal technologies has led to a rapid emergence of AI companies. Among those technologies are advances in vector processing, previously used in graphics cards for video games and 3D modeling but very well suited to the structure of neural networks.
Why Do People Think a Chatbot Could Be Dangerous?
There is a rare interview on the Dwarkesh Podcast with Dario Amodei, CEO of Anthropic, which I highly recommend listening to. It may be a bit too technical for people not familiar with software development, but there are still a lot of valuable insights in there.
What I inferred from this interview and from other experts is that the initial problems with LLMs seemed insurmountable to most AI researchers. There didn’t seem to be a good commercial use for a system that couldn’t answer 2+2 correctly, like GPT-1.
The systems would get basic math problems wrong and would hallucinate fiction when asked for facts. Without solutions to these problems, most companies working with LLMs abandoned the GPT model, or at least put it on the back burner, in favor of other types of AI. The research was interesting, but not the kind that would lead to anything important or profitable, so most let it slide. No one figured it could possibly turn into anything dangerous.
Dario Amodei, having worked on GPT models up to version 3, said that he realized early on that the more you scale the parameters and the data, the more the AI model will scale its capabilities. It was an educated guess, and he and some folks at OpenAI agreed on it.
He was VP of Research at OpenAI before creating Anthropic, the maker of Claude-2 and essentially a competitor of OpenAI. After working on the first few GPT models, he saw their power but wanted to take the company in a different direction. Where Sam Altman’s OpenAI is focused on creating a Superintelligence, or AGI (Artificial General Intelligence), Anthropic is trying to create a safety-first AGI model by solving the problem of ensuring AI/AGI is in line with human values and ethics, however vague that may be at this time.
A Simplified Look at How AI Works
Parameters are essentially the adjustable values, or “weights,” that an AI system tunes during training and uses to make predictions. Generally, the more parameters, the better the output.
Once you have the neural network set up, you need to train it. Instead of telling it exactly what to do, as you would with traditional software, a neural net is given examples along with information about those examples. Then it is tested on its ability to identify data it has not seen before.
As promised by the title of this section, here is a simplified example of training a neural net to create an AI.
You supply the system with 10,000 photos of cats and label them all as cats. You feed the system another 1,000 photos of cats mixed with animals and objects that look somewhat similar to cats. You then test the AI to see whether it can identify which photos in the test data are cats and which are not.
When it correctly identifies a photo as a cat, the neural network strengthens the connections that led to the correct decision. When it gets one wrong, you correct it, and it modifies its weights accordingly.
And then you do this with thousands, then hundreds of thousands, then millions of examples. Several hundred or even several thousand rounds of training and testing are done, tweaks are made to the system, more data is fed in, and so on.
The more initial accurate data the AI gets, the better it will be. The more total data the system can be given, the better “trained” the system is.
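To make that loop concrete, here is a minimal toy sketch in Python. To be clear, this is nothing like a real image model; it is a single artificial “neuron” learning to separate two invented features (say, ear pointiness and whisker length) into cat versus not-cat, but the guess-measure-adjust cycle is the same idea:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented training data: two made-up features per "photo"
# (say, ear pointiness and whisker length). Cats cluster high.
cats = rng.normal(loc=1.0, scale=0.3, size=(200, 2))
not_cats = rng.normal(loc=-1.0, scale=0.3, size=(200, 2))
X = np.vstack([cats, not_cats])
y = np.array([1] * 200 + [0] * 200)  # 1 = cat, 0 = not cat

# Parameters: the weights and bias are what training adjusts.
weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

def predict(features):
    """Squash a weighted sum into a 0-to-1 'cat probability'."""
    return 1 / (1 + np.exp(-(features @ weights + bias)))

# Training loop: guess, measure the error, nudge the parameters.
for step in range(100):
    error = predict(X) - y  # how wrong each guess was
    weights -= learning_rate * (X.T @ error) / len(y)
    bias -= learning_rate * error.mean()

# Test on an unseen example that "looks like" a cat.
print(f"Cat probability: {predict(np.array([0.9, 1.1])):.2f}")
```

A real vision model does this with millions of images and billions of parameters, but the principle is the same: wrong answers nudge the weights until the right answers come out.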
Got that so far?
- The parameters are the bucket and the data is the water.
- The bigger the bucket, the more data you can give to the AI.
- The more data you give to the AI, the better the output and the more different things it can accomplish.
At this point, most data scientists do not see a limit to the theoretical ability of these systems to scale. The biggest challenge coming up is the limit of hardware and electricity to power them, but that point is a long way away.
The ability to scale their capabilities is the reason that industry experts think they could become dangerous.
Dario joked about what it would take to protect a powerful AI system of the future; the setup would be “a data center next to a fusion reactor beside a bunker.”
The scaling of the system follows something we don’t understand yet. There is no process yet to “X-ray” an AI model and see what it is doing or to reverse engineer why it is able to create what it creates. You may hear AI industry folks talk about mechanistic interpretability, which is the fancy name for this reverse engineering process.
99% of the people you hear talking about LLMs in the news media and on social media don’t have it right. An LLM is not a plagiarism machine reshuffling words around; technically, these systems run on tokens, not even thinking in full words to begin with.
Author’s note: A token is not even a word most of the time, but part of a word. It could be a symbol or a number, or even a single syllable. Roughly translated, a token is the equivalent of ¾ of a word.
A GPT is a prediction system designed to mimic how our brains work. By predicting the next token (roughly, what word comes next) according to weighted probabilities, it gets really good at delivering text in a coherent way.
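Here is a toy illustration of that idea. The vocabulary and the probabilities below are invented for the example; a real LLM computes a distribution like this over roughly 100,000 possible tokens, using billions of parameters:

```python
import random

# Invented probabilities for the token that follows "The cat sat on the".
next_token_probs = {
    " mat": 0.55,
    " sofa": 0.20,
    " roof": 0.15,
    " keyboard": 0.10,
}

prompt = "The cat sat on the"
choice = random.choices(
    list(next_token_probs),
    weights=list(next_token_probs.values()),
)[0]
print(prompt + choice)  # most often: "The cat sat on the mat"
```

Generate one token, append it to the prompt, and repeat; that loop, run at scale, is what produces paragraphs of coherent text.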
But as more capabilities emerge, the idea of alignment is coming up more and more.
Alignment means, “Are the goals of the AI aligned with the goals of humanity (or at least, the goals of the creator/owner)?”
Because we can’t see how the model decided on the particular series of tokens that becomes its output, we can’t reverse engineer the system to explain why it created what it did.
We can’t see how an LLM helped you script that movie about the wealthy New York attorney and the small-town Canadian lumberjack, in the same way that we can’t figure out how your neighbor came up with the idea to paint their house orange with a gold roof.
Similar to the complexity of the human brain with its billions of neurons, AI systems like GPT operate with billions of parameters. These parameters help the model process vast amounts of data, including fragments of words, symbols, or numbers known as tokens.
Although we can study and probe these models, understanding the intricate workings of their billions of parameters, especially as they perform rapid and complex calculations, remains a daunting challenge.
And part of the potential for misuse of AI is that it’s just as happy to help you create a biological weapon as it is to cheerfully write a poem about cats riding unicorns. This is why companies put “guardrails” in place, a process we will cover later in the book.
Data engineers don’t exactly know why training a neural net on millions of tokens allows it to understand basic sentence structure, while hundreds of millions allow it to understand how to do arithmetic, and billions or trillions “should” allow it to understand the physics of the world in which we live.
They don’t know why it can go from writing good sentences in a small model, to writing coherent instructions of complex processes in a larger AI model. There are a lot of theories, but to my knowledge, no one has cracked the code yet.
Currently, they understand only that the bigger you make the system and the more data you provide, the more it will learn and the more functional it will become.
If you ask Dario Amodei, CEO of Anthropic, one of the leading AI foundation-model companies, how long before the new systems are at “human level” capability, he has this to say:
“In terms of someone looking at the model and even if you talk to it for an hour or so, it's basically like a generally well educated human, that could be not very far away at all. I think that could happen in two or three years.”
How Is AI Being Used In Industry Right Now?
There is a meme of a cartoon dog sitting and drinking coffee at a table in a café that is on fire. He is saying, “This is fine.”
This is how I would describe the scene at most corporations when it comes to AI systems.
- The board is asking what the company’s AI plan is.
- The customers are asking if the company is using AI.
- Your staff could be using AI without your knowledge.
- Vendors could be producing work with AI without your knowledge.
- Staff could be creating things with AI such as images, videos, or products without an understanding of how copyright or intellectual property laws affect them.
- The CEO wants to know what AI is and how it can increase productivity or reduce costs.
- Legal wants to know whether the company can even legally use AI.
- I.T. is worried about data security, improved phishing attacks, and exposing company assets to third parties.
The big problem is that no one knows what, if anything, they are allowed to use, and it’s basically a free-for-all right now.
The next major problem is that no one in the organization has a good understanding of different types of AI. Few, if any, understand how they work or what the risks and benefits are. Rumor and conjecture are the only information being spread about AI, and this creates false narratives.
It is difficult to find staff with any AI training or experience, so how do you get actionable advice to create company policy and frameworks?
If those problems weren’t enough, nearly everyone in these organizations is worried about layoffs because of current economic changes or future natural disasters, and now they are also worried that AI could take their job.
Schools are back in session, and I have only heard of two school systems so far in all of Canada and the USA that have a comprehensive policy on generative AI. Schools are trying to make policy around AI based on ChatGPT-3.5, when GPT-5 will likely be released before the end of the school year.
This is another common issue. As we mentioned before about individuals, the same goes for organizations: they are making decisions based on the current AI, not the future AI. And as I demonstrated earlier, the capability of the next versions of AI will be exponentially improved compared to current models.
In the McKinsey 2023 State of AI survey, 35% of companies said that they expect to decrease their workforce in the next 3 years. 38% of companies said they plan to retrain more than 20% of their workforce.
I think that we can safely say that AI has turned things upside down in the workplace and will continue to do so.
We (collectively) will need to get up-to-speed on the capabilities of AI, as individuals and as organizations.
We need to remind ourselves that things are changing, even if we can’t immediately see those changes around us. And once the changes are visible in our industry or workplace, those changes will seem sudden and sweeping.
How about an example?
A company in India released a support chatbot to 1% of customers as a test in March 2023. (The podcast I listened to about it did not mention their name.)
They found that wait times dropped from an average of about 5 minutes to a few seconds. Calls took less time. Customers received the support and information they needed faster. Survey responses were more favorable than those for the company’s human staff.
How quickly will a company act when it sees a service improvement combined with a cost savings? By the end of April, less than 45 days later, they had laid off 90% of their phone support staff, keeping only the most expert and most productive.
The ones who could answer the oddball questions, the things the AI was not able to fix, kept their jobs, along with the ones who were able to use AI to be more productive than the rest.
Once an AI system is set up and running, it is a trivial IT task to scale the access to that system.
But in larger organizations, or in industries that require more creativity and thought or physically complex actions, or that are simply more traditional, the pace of change will be slower.
I watched a video the other day of a commercial apple orchard with a fleet of drones that can determine which apples are ripe, pick them, and drop them into automated machines carrying the apple boxes. It reduced the orchard’s dependence on foreign farm workers and reduced waste while improving quality.
This means that even if your work involves physical labor, it still may be affected by AI combined with other automation or technologies.
Even if you work in an industry or organization that is slow to change, this massive societal change will likely put stress on the existing systems, leadership, and personnel in a way they are both unaccustomed to and unprepared for.
Key Takeaways From Chapter 1
The Evolution and Scale of AI: AI models, like GPT, have rapidly evolved in capabilities and scale, yet the intricate workings of their vast parameters remain elusive, with much about their learning processes still unknown.
AI's Societal Implications and Workplace Transformation: AI's integration in the corporate world varies, leading to both transformative successes and societal challenges, as industries grapple with its potential and consequences.
Concerns and Considerations Surrounding AI: Despite AI's potential, its decision-making remains mostly obscured, raising pressing concerns about alignment with human values and the potential for misuse.
Want to read the rest?
Amazon USA: https://www.amazon.com/dp/B0CG7KMVK1
Amazon Canada: https://www.amazon.ca/dp/B0CG7KMVK1
-Matt