ChatGPT marked its one-year anniversary at the end of November 2023. Its release year, 2022, was a landmark year, with Stable Diffusion arriving for images and ChatGPT for text.

The world as we know it has since changed dramatically.

So, what have we learned in the past year or so from the whiplash rollercoaster ride we now call generative AI?

(Editor's note: This article is adapted from Christopher Penn's Almost Timely Newsletter. You can find the original version here. It discusses three major artificial intelligence (AI) trends and offers three related sets of takeaways and advice to knowledge workers, businesses, and even policymakers.)

1. AI came to the masses

The first and most important thing that generative AI really changed is that nontechnical, noncoding people got an onramp to AI.

We've had AI for decades, and we've had very sophisticated, capable, and powerful AI for the past 20 years. However, that power has largely been locked away behind high technical barriers: You had to know how to code in Python, R, Scala, or Julia, among others, to make the most of it.

Today, you code in plain language. Every time you give an instruction to Bing, Bard, Claude, or ChatGPT, you are coding. You are writing code to create what you hope is a reliable, reproducible result, just as a programmer writing Python does.

The implications of that change are absurdly large, almost too big to imagine, and we're only at the very beginning of that change.

Clay Shirky once said that a tool becomes societally interesting once it becomes technologically boring, but AI is defying that trend. It's still technologically interesting, but its simplicity and ease of use make it societally interesting as well.

And those societal changes are only beginning to be felt.

Recently, I was on a call with a colleague who said their company's management laid off 80% of their content marketing team, citing AI as the replacement for the human workers. Now, I suspect that is an edge case for the moment: Unless that team's content was so bad that AI was an improvement, I find it difficult to believe the management knew what AI was and was not capable of.

2. Most people don't know what AI can and can't do

That raises the second major thing we've learned in the last year: The general public doesn't really have a concept of what AI is and is not capable of.

The transformer architecture that powers today's language models is little more than a token-guessing machine: It takes in a series of arbitrary pieces of data called tokens (in language models, a token typically corresponds to a fragment of a word, roughly four characters long), and then it attempts to predict the next set of tokens in any given sequence. That's all these models are: They are not sentient; they are not self-aware; they have no agency; and they are incapable of even basic things, such as math (just ask any of them to write a 250-word blog post and you'll almost never get exactly 250 words).
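To make the token-guessing idea concrete, here's a toy sketch: a bigram frequency table that predicts the most likely next word from a tiny corpus. This is a deliberate simplification, not how a transformer works internally; real models learn attention patterns over trillions of tokens, but the underlying task, predicting the next token, is the same.

```python
from collections import Counter, defaultdict

# Toy "next-token" predictor: a bigram frequency table built from a tiny
# corpus. Real language models use transformer networks trained on vast
# amounts of text, but the core job is identical: given what came before,
# guess what comes next.
corpus = "the cat sat on the mat the cat ate the food".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the token most frequently observed after `token`, or None."""
    if token not in following:
        return None
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Real models replace the frequency table with a neural network and the greedy pick with sampling, but "given these tokens, which token comes next?" is still the whole job.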

The general public, however, seems under the impression that these AI tools are all-knowing, all-powerful magic wands that will usher in a world like either Star Trek or Skynet, and the various AI companies have done little to rein in those expectations. In fact, a substantial number of people have gone on at length about the existential threat AI poses.

AI doesn't pose world-ending threats in its current form. A word-guessing machine isn't going to do much else besides guess words.

Now, can you take that and put it into an architecture with other components to create dangerous systems? Sure, in the same way that you can take a pressure cooker and do things with it to turn it into an explosive device. But the pressure cooker, by itself, isn't going to be the cause of mass destruction.

To be clear: there are major threats that AI poses—but not because the machines are suddenly sentient. Two of the major, serious, and very-near-future threats that very few people want to talk about are...

  1. Structural unemployment
  2. Income inequality

Imminent AI Risk: Structural Unemployment

AI is capable of automating significant parts of jobs, especially entry-level jobs in which tasks are highly repetitive. Any kind of automation thrives in a highly repetitive context, and today's language models do really well with repetitive language tasks. We've previously not been able to automate those tasks because there's variability in the language, even if there isn't variability in the task. With language models' ability to adapt to that variability, those tasks are now up for automation, everything from call center jobs all the way up to the CEO delivering remarks at a board meeting. (Sit in on any earnings call: The execs largely spout platitudes and read financial results, both tasks machines could do easily.)

As a result, we will, planetwide, need to deal with this risk of structural unemployment. Yes, a lot of jobs will be created, but many more jobs will be curtailed because that's the nature of automation. The US economy, for example, used to be mostly agriculture, and today less than 1% of the population works in agriculture. What the new jobs look like, we don't know, but they won't look anything like the old jobs—and there will be a long, painful period of transition as we get to that.

Imminent AI Risk: Substantially Worsened Income Inequality

Here's why this risk is imminent—and it's pretty straightforward. When you have a company staffed with human workers, you have to take money from your revenues and pay wages with it. Those human workers then go out into the broader economy and spend it on things like housing, food, entertainment, etc.

When you have a company staffed more and more with machines and a few human workers to attend to those machines, your company still earns revenues, but less of it gets disbursed as wages. More of it goes to your bottom line, which is part of the reason why every executive is scrambling to understand AI; the promise of dramatically increased profit margins is too good to pass up. But those profit margins come at a cost: fewer people earning wages.

What happens then is a hyper-concentration of wealth. Company owners keep more money—which is great if you're an owner or a shareholder, and not great if you are unemployed. That sets up an environment where hyper-concentrated wealth exists; and, for most of human history, that tends to end in bloodshed. People who are hungry and poor eventually blame those in power for their woes, and the results aren't pretty.
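The wage-versus-profit mechanics above can be shown with a back-of-the-envelope sketch; every number in it is invented for illustration, not drawn from any real company.

```python
# Illustrative only: made-up numbers showing how automation shifts revenue
# away from wages (which get spent back into the economy) and toward profit
# (which concentrates with owners and shareholders).
def profit_and_wages(revenue, headcount, avg_wage, other_costs):
    """Return (total wages paid, profit) for a simple one-line P&L."""
    wages = headcount * avg_wage
    profit = revenue - wages - other_costs
    return wages, profit

# Before automation: 100 workers at $60k on $10M revenue.
before_wages, before_profit = profit_and_wages(10_000_000, 100, 60_000, 2_000_000)

# After automation: 20 workers tending the machines, plus an assumed
# extra $500k/year in compute costs.
after_wages, after_profit = profit_and_wages(10_000_000, 20, 60_000, 2_500_000)

print(before_wages, before_profit)  # 6000000 in wages, 2000000 profit
print(after_wages, after_profit)    # 1200000 in wages, 6300000 profit
```

Under these made-up numbers, automation more than triples profit while cutting the wages flowing back into the economy by 80%, which is the hyper-concentration dynamic described above.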

An Antidote to Those AI Risks

The antidote to these two problems is universal basic income funded with what many call a robot tax—essentially, an additional set of corporate taxes.

Where that potential solution will play out will depend very much on individual nations and their cultures.

Societies that tend to be collectivist, such as Korea, Japan, China, and other East Asian nations, will probably get there quickly, as will democratic socialist economies, such as the Scandinavian nations.

Cultures that are hyper-individualistic, such as the USA, may never get there, especially with corporations' lobbying strength to keep business taxes low.

3. AI models are evolving super quickly

The third thing we've learned in this last year is how absurdly fast the AI space moves.

Back in March of 2022, there were only a handful of large language models: GPT-3.5 from OpenAI, Google's BERT and T5, XLNet, and a few others. Fast-forward a year and a half, and we now have tens of thousands of language models.

Look at all that's happened for just the biggest players since the release of GPT-3.5:

  1. March 15, 2022: GPT-3.5 released
  2. April 4, 2022: PaLM 1 released
  3. November 30, 2022: ChatGPT released
  4. January 17, 2023: Claude 1 released
  5. February 1, 2023: ChatGPT Plus released
  6. February 27, 2023: LLaMa 1 released
  7. March 14, 2023: GPT-3.5-Turbo, GPT-4 released
  8. May 10, 2023: PaLM 2 released
  9. July 12, 2023: Claude 2 released
  10. July 18, 2023: LLaMa 2 released
  11. October 16, 2023: GPT-4-V, GPT-4-Turbo released
  12. November 21, 2023: Claude 2.1 released

When you look at that timeline, it becomes clear that the power of these models and the speed at which they are evolving are breathtaking.

The fact that you have major iterations of models (e.g., LLaMa and the OpenAI GPT) within six months of the previous version—with a doubling of capabilities each time—is unheard of.

We are hurtling into the future at warp speed. In a recent talk, Andrej Karpathy, one of OpenAI's top technologists, said there was so far no indication that we're running into any kind of architectural limits on what language models can do, other than raw compute limits.

The gains we get from models continue to scale well with the resources we put into them—so expect that blistering pace to continue or even accelerate.

Three Practical Career and Business Takeaways From AI's Breakout Year (Or So)

That's quite a tour of the past year-and-change. What lessons should we take from it?

1. Brush up on AI, because it'll be a part of your job

AI adoption is increasing at a crazy rate thanks to the promises it offers and its ability to fulfill them in ways that previous generations of AI have not.

The bottom line is this: AI use will be an expected skill set for every knowledge worker in the very near future. Today, skill with AI is a differentiator. In the near future, it will be table stakes.

That hearkens back to what's been a refrain in my keynotes for years: AI won't take your job; a person skilled with AI will take the jobs (plural) of people who are not. One skilled worker with AI can do the tasks of 2, 3, 5, or even 10 people.

You owe it to yourself to get skilled up quickly.

2. Stick to the foundational AI models

The pace of change isn't slowing down. That means you need to stick close to foundational models—GPT-4-V, Claude 2.1, LLaMa 2, etc.—which have strong capabilities and which are adapting and changing quickly.

Unless you have no other viable alternative, avoid using vendors that build their companies on top of someone else's AI model. Why? Because as you can see from the list earlier, that rate of change is roughly 6-9 months between major updates. Any vendor that builds on a specific model runs the risk of being obsolete in half a year.

So, in general, try to use foundational models for as many tasks as you can.

3. Think about the implications of AI—ethical and moral—and take action

Everyone who has any role in the deployment of AI needs to be thinking about the ethical and even moral implications of the technology.

Profit cannot be the only factor we optimize our companies for, or we're going to create a lot of misery in the world that will, without question, end in bloodshed. That's been the tale of history for millennia—make people miserable enough, and eventually they rise up against those in power.

How can we do our part to avoid creating that misery?

One of the first lessons you learn when you start a business is to do things that don't scale. Do things that surprise and delight customers, do things that make plenty of human sense but not necessarily business sense. But, as your business grows, you do less and less of all that because you're stretched for time and resources.

Well, if AI frees up a whole bunch of people and increases your profits, guess what you can do? That's right: Keep the humans around and have them do more of those things that don't scale.

Here's a practical example. Today, humans who work in call centers have strict metrics they must operate by. My friend Jay worked in one for years, and she said she was held to a strict five-minute call time. She had to get the customer off the phone in under five minutes, or she'd be penalized. The net effect of that approach? Customers get transferred or just hung up on because the metric that employees are measured on is time, not outcome—and almost no one ever stays on the line to complete the survey.

Now, suppose AI tackles 85% of the call volume. It handles all the easy stuff, leaving only the difficult stuff for the humans. You cut your human staff some, but then you remove the time limits for the humans and instead measure them solely on survey outcomes. The result? Customers will actually make it to the end of the call to complete the survey, and if an employee is empowered to actually take the time to help solve their problems... then your customer satisfaction scores will likely skyrocket.

Such an approach would be contingent on your accepting that you won't maximize your profits—because doing so would require you to get rid of almost all your human employees. If you kept the majority of them instead, you'd have only somewhat lower costs; however, re-tasking those humans to solve the really thorny problems would let you scale your business even bigger. The easy stuff would be solved by AI, and the harder stuff solved by the majority of humans you kept around for that purpose.
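Here's the back-of-the-envelope math behind that scenario; the call volumes and handle times below are invented for illustration.

```python
# Hypothetical call-center math for the scenario above; every number is
# an assumption made for illustration.
daily_calls = 1000
ai_deflection = 0.85                      # AI resolves the easy 85% of calls
human_calls = daily_calls * (1 - ai_deflection)

old_avg_call_minutes = 5                  # strict time cap on every call
new_avg_call_minutes = 15                 # cap removed; agents work the hard problems

old_agent_minutes = daily_calls * old_avg_call_minutes   # humans handle everything
new_agent_minutes = human_calls * new_avg_call_minutes   # humans handle only the hard 15%

print(old_agent_minutes, new_agent_minutes)
```

Under these assumed numbers, you'd need less than half the total agent time while tripling the attention each hard problem gets: lower (but not minimal) costs in exchange for much better outcomes on the calls that actually need a human.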

Will companies behave this way?

Some will. Some won't. However, in a world where AI is the de facto standard for handling customer interactions because of its low cost, your ability to differentiate yourself with that uniquely human touch may become a competitive advantage... so give that some thought.

* * *

Happy first birthday, ChatGPT—and let's see what the world of generative AI has in store for us in the year to come.

More Resources on ChatGPT and Generative AI

Top 3 AI Tools for High-Quality Content Creation

Marketing at the Speed of Thought: AI Use Cases for Four Content Types

How to Harness the Power of ChatGPT to Scale Your Social Media Marketing

ChatGPT Is Everywhere. Here's How to Keep Your PR Job.



ABOUT THE AUTHOR


Christopher S. Penn is a co-founder and the chief data scientist of TrustInsights.ai, a marketing and management consulting firm. He is a renowned keynote speaker and best-selling author who specializes in analytics, digital marketing, and machine learning.

LinkedIn: Christopher Penn