There's been a lot of chat about what AI can do—from image generation to automated social media copy, and then there's the inescapable mantra of "AI is going to take our jobs!" (Spoiler: it won't.) But not many people have bothered to think about what it can't do, or what problems could arise from the technology.


In Episode 533 of Marketing Smarts featuring Christopher Penn, he and host George B. Thomas kick things off by discussing just that.

"The thing that I am most concerned about with all of the AI models...is that these models are trained on the corpus of human content," Christopher says. "That's how they do the things that they do. That also means that they have inherited all of our biases, all of our prejudices, all of these other things."

That includes, for example, the tendency of OpenAI's models to associate European American names with more positive sentiment than African American names.

"Let's say you have a marketing automation system, and you have your first name and last name of your prospect, and you're going to generate some marketing copy semi-automated to send out some stuff," he continues. "What is that model going to do if the prospect's name is Latisha? What is that model going to do if the prospect's name is George? What's the model going to do if the prospect's name is Hiroki?"

It follows, therefore, that a human should always be checking the AI's output for content that is problematic or just plain wrong. Humans are good to have around for things like that.

The bottom line: AI is a tool, not a replacement—and tools are imperfect.

"AI is not going to take your job," Christopher insists. "If you're a B2B marketer, AI is not going to be a B2B marketer. But a B2B marketer who uses AI is going to take the job of a B2B marketer who does not. That is the end of the game."

For the entire discussion, including tips on how to use AI to make your job easier, listen to Episode 533 from the link above, or download the mp3 and listen at your convenience. Of course, you can also subscribe to the Marketing Smarts podcast in iTunes or via RSS and never miss an episode.


"Marketing Smarts" theme music composed by Juanito Pascual of Signature Tones.

Full Transcript: A B2B Marketing Deep Dive on AI Foundations, the Future, and More

George B. Thomas: AI is all the rage. I got a chance to sit down with Christopher Penn. Let me just say this. Usually, with these podcast episodes, I have some starting questions that I like to go through and make sure we talk about hurdles, getting started, and all the things that you need to know as a B2B marketer around the topic that we're covering. This interview, not the case. By the first answer to the first question, I knew that the starter questions had to go out the window, so we just went on a journey.

It is a magnificent B2B marketing deep dive journey on most of the things, if not all the things, that you should be thinking about and trying around AI and marketing. Be warned, buckle up, get your notepad and your pen ready, your iPad, chalk and a wall, it doesn't matter, get ready to take some notes, get ready to start playing around with some AI for your B2B marketing.

It's time for the good stuff. Here's the thing. You either saw this title and got super duper excited, as the nerdy marketer you are, or you saw this title and you said, "Oh no, here we go," because sometimes when you think about AI, machine-learning, data science, your brain feels like it's going to explode a little bit. We're going to try to fix that. I'm not alone today, as always, I'm bringing around smart people for the journey that we're making on this Marketing Smarts Podcast episode.

Christopher Penn, how are you doing today?

Christopher Penn: I am doing very well. As of the time we're recording this, I can say happy New Year. When you listen to this, I still wish you happy New Year, whatever year it is.

George: There you go. It could be 2030, but we're still wishing you a happy New Year. I want to dive into this, and I want to start where I always like to start. I think especially for this topic there are a lot of places where one could lose some sleep. You personally, what keeps you up at night when you think about B2B marketers, AI, machine-learning, any of these topics? What are the things where you're like, oh my gosh, I can't sleep, or maybe you cry yourself to sleep, either way?

Christopher: I don't often cry myself to sleep, but I am in my 40s now, so back pain is a thing.

The thing that I am most concerned about with all of the AI models, in particular the generative models that are out there, things like ChatGPT or Stable Diffusion, is that these models are trained on the corpus of human content. That's how they do the things that they do. That also means that they have inherited all of our biases, all of our prejudices, all of these other things, to the point where there are issues with these models, real documented problems. Everybody, marketers in particular, fails to ask the question, "What could go wrong? What could be wrong in this model?"

I'll give you an example. In ChatGPT, which is the model everyone has been talking about recently, listen to this limitation. This is disclosed on the OpenAI website: "We found evidence of bias in our model running X benchmarks. For example, we found that our models more strongly associate European American names with positive sentiment when compared to African American names, and negative stereotypes with black women." Think about that. Everybody and their cousin is using this thing to create content, and this is on the warning label for this model. It's not hidden, they're not pretending it's not there, it's on the warning label. Yet nobody is talking about it, nobody is looking at it, nobody is thinking about what could go wrong if they're using this to generate certain types of content.

Let's say you have a marketing automation system, and you have your first name and last name of your prospect, and you're going to generate some marketing copy semi-automated to send out some stuff. What is that model going to do if the prospect's name is Latisha? What is that model going to do if the prospect's name is George? What's the model going to do if the prospect's name is Hiroki? There are biases.

We were running a test the other day in DALL·E 2. We said, let's take a Mexican man with X criteria, a white man, a black man, and it created these images. There were some very obvious biases in what the machine spit back that were not okay.

For me, the thing that makes me concerned the most about these models, particularly as marketers are kind of blindly using them, is they don't know what can go wrong with them and what might be ending up in their content that they're not even thinking about, because they don't ask themselves, "Am I creating content that could be problematic for a percentage of the audience?"

George: It's interesting. I am quickly going to go off the beaten path here. Part of my brain is battling with what you just said and trying to diagnose: is Chris saying we shouldn't be using it, or is Chris saying we should use it, but we should be careful as marketers, quit moving at breakneck speed just trying to get a job done, and actually take the time to use it strategically? I'm going to start there. Was this whole beginning question a "marketers, knock it off," or "here are some ginger steps you should take"?

Christopher: It is number two. It is perfectly okay to use these tools, as long as you, a human, are reviewing their output and going, "I'm looking for problems." Ideally, within your organization, you have people dedicated to exactly that.

Everyone and their cousin has been ramping up DEI initiatives: diversity, equity, and inclusion. Aside from workshops and all the usual activities, games and role playing, your DEI committee should be on your content marketing committee to look at the content you're producing and say, "This is problematic." Your DEI committee should be involved with your AI team, with your machine-learning team, with your data science team, to say, "Look at that. This doesn't look like it's working right."

I'll give you an example from the martech show I was at a couple of years ago. I saw this one vendor that said, "We will find you the perfect customers, the ideal customers. You just give us your data, and we'll put it on a map, visualize it, and there's your customers." This is a B2C example. They put up a map of Boston, they put up Dunkin' Donuts, and showed the city of Boston and said, "These red dots are your ideal customers, go get them. These black dots are not." If you don't know the layout of Boston, the southern parts of the city are historically Black and historically less wealthy parts of the city. There were no ideal customers there. They were all in Cambridge, in the financial district, etcetera.

For our international listeners, if you're not familiar, Dunkin' Donuts is an American brand of coffee that I would mostly describe as milky weak coffee. The thing that is true about Dunkin' Donuts in the city of Boston and surrounding areas is that the only people who don't drink Dunkin' Donuts are dead. Everybody drinks Dunkin' because it's cheap, it's everywhere, and it's good enough. For this company to have created this map saying there were no ideal customers in the black parts of the city is a load of what our Spanish friends would call excremento de toro, this is just completely untrue. Everybody drinks Dunkin' Donuts.

So, this software, which was built with statistical data, census data, stuff like that, machine-learning based, reproduced a phenomenon called redlining, a term first coined in the 1930s in the real estate and insurance industries, where people would take maps of the city and draw red lines around the parts where they didn't want to do any business. Again, historically Black or minority, historically poorer parts of the city. No one stopped to say, "That looks weird, that doesn't look right." Someone on a functioning DEI committee would look at it and say, "You just reinvented redlining. This is really bad. Maybe we should turn this software off."

Another classic example: back in 2018, Amazon created a predictive algorithm to screen LinkedIn profiles for ideal candidates to reduce the delay in hiring engineers. They turned it on, and it stopped recommending women immediately. Just stopped. Why? Because they trained it on all-male developers, and as a result, it learned that characteristic. Of course, Amazon got a big black eye for this because it was really obvious immediately. Nobody stopped to ask, "What could go wrong?"

To answer your question, no, we shouldn't stop using these systems, but we absolutely need human beings throughout saying, "What could go wrong?" You need to leverage that investment you've made in DEI to have those folks in particular saying, "What could go wrong? It looks like that went wrong."

George: Super interesting. My brain is going in two different directions with what we're unpacking here. One side of this is we're talking about the content that we might execute out to the people that we're trying to actually serve, that we're trying to be human to, that we're trying to fix a problem for or help reach their aspirational goal, using AI, and what has changed in the last three, six, nine, twelve months, and where you think it's going to go. The second part of my brain I think is more important, so that's probably the question that I'm going to ask you next. Talk us through the path, the speed, the rate at which this is happening and where you think it's going.

Here's my question before that. Are there things that we should put into place? I understand having those people look at it and say something is wrong, but even before you start to generate it, before you pick the tool that you're going to use to do it, because we could start to list out things like Jasper, GoCharlie, all the different ones that are probably tying back, in my assumption, to one main system to actually do what they're doing, talk me through this roadmap of how do I make smart decisions as a marketer to use the right tool, have the right checklists in place so that I don't find myself being the Amazon of my manufacturing company, car rental company, restaurant. Where does your brain go with that?

Christopher: There's a series of pretty well-defined processes. When you're thinking about using AI, machine-learning, and data science, you're really talking, in a lot of cases, about software development, whether you're the one developing it or whether you're contracting with a vendor. That immediately goes to requirements gathering. What is the software supposed to do? Who is going to use it?

We have a framework at Trust Insights we call the five Ps: purpose, people, process, platform, performance. Purpose; what is the intended purpose of this thing that you're trying to use? People; who is going to use it? Also, who is going to be affected by the results of it? Understanding both sides of the coin, not just your staff, but also your customers.

Platform; what's the technology behind it and how was it made? Again, a lot of these companies are creating AI models out there and they're not super open about which models they're using. They're like, "We use this proprietary stuff." No you don't. You have one of the five open source models that Hugging Face deployed and you're fine-tuning it behind the scenes. Which is fine, just say that and be honest about it. I did actually have a chance to talk to the chief AI scientist at GoCharlie, and she and I had a very in-the-weeds chat about what's going on behind the scenes. I can't speak to the other companies, but I've talked to one person and gotten the technical details, so I can say that at that product, they know what they're doing.

Then processes. What are all the processes in place that you're going to use to create, to test, to deploy, and to QA this piece of software? Even if you're using a vendor, you still have to test, deploy, and QA, because you need to make sure that it's doing what is expected. That's where, for example, your DEI committee should be part of that QA process, to look at whether you're doing things you shouldn't be.

A really simple example: Are you doing stuff that is cultural appropriation? If you're putting up social media content on your Instagram account using imagery from the Mashantucket Pequot Tribe and zero people in your company are Mashantucket Pequot, even if the AI suggested it, you probably shouldn't use it. If you are Mashantucket Pequot, then that's a totally different story.

Finally, performance. Is what you're doing, is the software, is the program achieving its goals? This goes back to the purpose. One of the things you have to ask up front is, are things like reducing or mitigating bias part of your purpose and part of the things that you'll be measured on? If the answer is no, you might want to think about putting that in, because that should be part of modern AI. That's how I would tackle that aspect. Then you want to tackle where this stuff is going.

George: That's the thing. There's so much for us to think about. I'll tell you, I think there's two, maybe more, but I'm going to break it into two audiences right now listening to this podcast. I am by no means saying I am a Christopher Penn, but there's people like you and me who are like new thing, let's try it, let's test it. By the way, you're like try it, test it, break it. I'm the shiny new toy, what can I do with it guy coming along for the ride as an early adopter. Then there's the folks listening to this like no, not for me. This is changing the way that we do business. This is changing the way that we can accomplish things. I agree with this first part of our conversation that is you have to put the foundation, the processes, the people to make sure that you're not jacking it up.

Let's talk for a second to those folks who are like no. I'm going to go back to this question of, over the last three, six, nine, twelve months, have you ever seen anything gain as much speed as the conversation around AI and AI-generated content, anything like it before? And where do you think it's going? Kind of paint the picture of how we've gotten to where we're actually having this conversation today. Which, by the way, is predicated off of seeing Christopher at B2B Forum talking about GoCharlie; being mind-blown in an instant, I knew we had to do an interview. Talk to the audience about how we got here so fast and where the heck you think we're going in the future.

Christopher: We haven't gotten here so fast. We've been talking about AI for 70 years. Many of the algorithms and things that are in use today were developed in the 1950s. What's different today is the compute power that we all have. We carry these literal supercomputers in our pockets that allow for a lot of these capabilities.

The conversation has changed, in the last four years especially, because of an architecture called Transformers. Not the awesome 1980s toys, but these AI algorithms. Without getting into the technical bloody guts of it all, essentially these Transformer-based models, which incorporate things like large language models, like ChatGPT for example, allow us to do what's called generative AI.

There are three different basic classes of AI, three use cases.

There's regression, which is I have a whole bunch of data, find me something with an outcome, find me things that look like the outcome. This is the principle behind things like recommendation engines. When you fire up Netflix and it says, "You might also enjoy," these eight shows that are exactly like these other eight shows. When you're on TikTok and you look at dogs in ballerina outfits, for some reason your entire TikTok feed is all dogs in ballerina outfits, that's a recommendation engine, that's regression.

The second major category of AI is classification. This is where you see a lot of companies doing stuff with voice-of-customer things. Bringing in billions of social media updates, bringing in phone calls, interviews, call-center transcripts, and just classifying what's in the inbox. Of the last 20,000 calls we've gotten to our call center, what are the five main topics that people are complaining about, and having machines be able to digest that.

Those two things, regression and classification, have been part of AI and have been deployed in production for years now. When you look at your marketing automation system and your CRM, and you see things like automated lead scoring, that's what is going on there. When you use a piece of software like Demandbase that's making recommendations about content somebody should see, that's regression algorithms. It's very straightforward. I had a chance to chat with one vendor's chief data scientist, and again we got into the bloody guts of which algorithm they should use, and it boils down to regression stuff, and the software is very good.

What has changed in the last five years is generative AI. This is when you say, I want AI to start transforming inputs that I give it into outputs that maybe have not been seen before. In the last six months, people started to catch on and understand what generative AI is capable of. This started with things like DALL·E 2, which is an image generator, and then Stable Diffusion, an open source model. Suddenly, you saw an explosion of people making computer-generated images of dogs on skateboards in outer space and fun stuff like that. Generative AI.

Then in the last three months, the rollout of ChatGPT, which is a chat-based interface to a language model called GPT-3, whose current version has been on the market for about two and a half years. People who are in the know have been using it very successfully for the last two and a half years. A lot of these big companies that are generating content in an automated fashion are using that. We wrote some software to connect to it, and we do predictive blog post stuff with it.

The chat version is something that the completely nontechnical user can get behind and say, "I know how to chat. I don't know what temperature settings or top-p scores of softmax layers are, but I know how to chat." Now with DALL·E 2 and Stable Diffusion, people go, "I can make pictures of my dog's breed in knight's armor on a horse." Now they can say, I can have this thing write me blog posts, or social media updates, or rephrase the lyrics of "Gangsta's Paradise" to be about B2B marketing. You can do things with these large language models. That's what has changed.

What's coming in the next few months is enhancements to these models. The models are getting larger. OpenAI has said GPT-4 is on their roadmap for 2023; it will be about eight times the size of GPT-3, which means more natural conversation, more realistic outputs, and machine-generated text that is harder to detect. All that is coming.

The use cases for these things are going to dramatically multiply. We just did a whole livestream on some of the different use cases. I'm gathering a whole collection of them for the fourth edition of my book because what you can do now is incredible.

I'll give you a real simple example that's a huge time saver for me. I record our conference calls, as many people do, with the disclosure that the call is being recorded. Then I have one AI, Otter.ai, transcribe it. It's full of ums and uhs and all that filler talk. Then I feed that to OpenAI and say, summarize this into meeting notes and action items, and it gives me two paragraphs, and we're done. I don't need a VA, I don't need anything else, I just have meeting notes and action items. The action items go right into my to-do list, and on every client call, I don't miss a thing. That is one of the simplest use cases. It saves time, it saves money, happier clients.
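Chris's two-step pipeline (speech-to-text, then an LLM summarization request) can be sketched in a few lines. Everything here is illustrative: the filler-word pattern, function names, and prompt wording are assumptions, not his actual setup, and the API call that would send the finished prompt to a language model is omitted.

```python
import re

# Sketch of the call-notes pipeline: a speech-to-text tool such as Otter.ai
# produces a raw transcript; we strip the obvious "ums and uhs" and wrap
# what's left in a summarization request for a large language model.
FILLERS = re.compile(r"\b(?:um+|uh+)[,.]?\s*", re.IGNORECASE)

def clean_transcript(raw: str) -> str:
    """Remove common speech fillers and collapse leftover whitespace."""
    cleaned = FILLERS.sub("", raw)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

def build_summary_prompt(transcript: str) -> str:
    """Wrap a transcript in the 'meeting notes and action items' request."""
    return (
        "Summarize the following call transcript into two parts:\n"
        "1. Meeting notes (one short paragraph)\n"
        "2. Action items (a bulleted list)\n\n"
        f"Transcript:\n{transcript}"
    )

raw = "Um, so, uh, we agreed the report ships Friday, um, okay."
prompt = build_summary_prompt(clean_transcript(raw))
# `prompt` would then go to whatever LLM API you use.
```

The point is that the human-authored part is tiny: a stable prompt plus light cleanup, with the model doing the distillation.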

George: First of all, that was a little bit of a pause there because my mind was just blown. Like wait, what? The idea of not skipping a beat. Talk about a couple of things here. One, what I hear is it's been around, it's nothing really new. Just the way that we as humans can leverage it has been simplified. Now that it's simplified, what I hear in the future is it's going to be maximized, there's going to be more that we can do.

I'm glad that you gave that use case because of the next question that was in my mind. When you think about the B2B marketers that are listening to this episode right now, and you have your foundation straight, you have your people paying attention, you have a process, you know you're not going to have egg on your face, what are the two, three, four things that, as a marketer in 2023 and beyond, by all that is holy, you should quit wasting time and money on and just have AI do?

Christopher: Any content that is repetitive and of moderate value is stuff that you can hand off to an AI, at least for the first draft. I'll give you a real simple example. Again, in the large language models. I have a prompt that is prewritten. In fact, let me pull it up here just so I can read it out loud to you. It's pretty straightforward.

It says, speaking to the machine, "You are an expert social media manager, you are skilled at crafting social media posts that garner high engagement on services like Twitter, TikTok, Instagram, and LinkedIn. In your capacity as a social media creator, you will create promotional tweets enticing audience members on Twitter to download our new e-book. Here are the details of the e-book. Here's the URL. Here's what the e-book is about. Here's an abridged table of contents. Write 10 tweets using the above details, promoting the e-book and encouraging people to download it. Use the details provided for content and benefits as reasons why people should download it. Follow the technical specifications carefully."

I put this into the large language model and it spits out 10 tweets and a hashtag that has the kind of language that gets engagement, that avoids things I tell it to avoid. I copy and paste this, then take it over to AgoraPulse, drop the CSV file in, and my promotional tweets for the week are done.

I then say to the language model, "Give me 10 Instagram ideas. Here's the format, suggested photo, accompanying caption." It spits out a photo of the e-book cover, caption, "Have you tried downloading this?" So on and so forth. Again, this is all language models.

The way that I think people should be thinking about this: if you had a new intern on staff, just got them from the temp agency or whatever, what instructions would you give them to do a fairly simple marketing task? Write those out. That is the prompt that goes to the machine. Then the machine does it, and you QA the results and say it's ready to go.

I was doing an experiment with some fiction writing last night. There's this one writing group that I'm part of where they have a monthly contest, and this month the prompt was "dream." There were three restrictions and three bonuses if you met them. I wrote all that out as a prompt and said, "We're going to write this story in four parts, 750 words each. I want you to write the outline for the story first, title each part, and then write each of the parts." In about 15 minutes or so, I had a 3,000-word story, and I submitted it.

It was coherent. Was it great? No, it wasn't great. It was stuff that you'd see in a lot of very similar stories. But I didn't have to spend three hours writing it; I got it done in 15 minutes. Think about that for your blog. If you have a blog post that you know you need a good first draft for, you say, "I want you to outline this. I want you to write me a social media strategy for this. I want you to write me a TikTok strategy."

I did one the other day, because Katie and I are often talking about we're B2B marketers, what do we do with TikTok. I said, "Here's info about our company on all these things. Build me a TikTok strategy, give me 10 TikTok video ideas appropriate for a B2B marketer." It came up with these 10 ideas. Four of these are actually good ideas. Six of them not so much. Guess what? We're going to start trying these things out.

So, for things where you have questions like, "What should I do with this thing," these are all good starting points. I think for B2B marketers that are sitting there, you're stressed, you have 82 things on your to-do list and another 20 are going to come in tomorrow, it's a way to speed things up for the ideation phase and for the refinement phase.

One other thing that I love to do, and I'm almost hesitant to tell you this… I go to a lot of conferences and things, and because of the pandemic and stuff, when I can I just drive if it's within driving distance. I have these fantastic P100 masks that are biowarfare masks and they work great, but if I can spend seven hours driving and be in the comfort of my own car, or three hours on a plane packed in like a sardine, I'm going to take the seven hours. I have a little audio recorder, I just plug it in, and I dictate freeform while I'm in the car just thinking out loud, or I listen to a podcast and yell out loud into the recorder.

Then I take the transcript and put it through one of the large language models and say, "Rewrite this with correct grammar, punctuation, spelling, syntax, and formatting." These language models are okay at creating, but they are fantastic at transforming, at rewriting, at distilling. I can take a one-hour conversation I've had with myself and turn it into 10 to 15 pages of clear coherent content that is me. It's still me, it's not the machine. It's my words, but refined, coherent, logical.

Suddenly, I've solved my content marketing problem because I'm using the time that I have available to me. When you are going to the grocery store, you have 10 minutes in the car. Fire up your audio recorder. Feed it to a machine and clean it up, and boom, you have more content than you ever knew what to do with.

George: Oh my gosh. I'm just sitting here as a nerdy guy with 32,000 questions and I'm trying to figure out how to keep it on the rails for what's going to give the listener the most value. All of these are great ways. Again, it's about saving time, it's about being able to be creative. I love that in there you dropped the words of the iterative process, the transformational process. Most of us think about how do I say something real simple and extract a lot of value, versus how do I add a lot of value and have it quickly dressed up to be usable.

As you were talking in that last piece, I was like I really want to ask Chris about AI content as a starting point versus a finished product. But then also when you were giving the prompt, I was like he's teaching a baby to walk. Then I was like actually, what he's doing from an agency standpoint, where I come from historically, is he's feeding it the creative brief. He's giving the creative brief to the AI machine and saying now do this thing for me.

Christopher: Exactly.

George: It was so amazing.

Christopher: With these prompts and with these machines, people put in one or two sentences. No. It is a full creative brief. You can put a full page of text in there with all of the details. Here's the thing about all of these models, and this is an important lesson, this is the fundamental logic behind them all. I heard this first on a machine-learning podcast: a word is known by the company it keeps.

If I were to put George B. Thomas in and collect all the corpus of information about George B. Thomas in proximity to that word, all the text that is publicly available, what words would be most in proximity? HubSpot, Marketing Smarts, marketing, social media, so on and so forth. The machine can know what the meaning of George B. Thomas is based on the words in proximity to that. If I put in MarketingProfs, what are the words that are going to come up? Ann Handley, B2B Forum, so on and so forth.

That's how these tools work, they work based on proximity of words and the understanding of language. No one has written down the rules of grammar or anything like that. It is all known by the statistical distribution of words around other words at a very large scale. We're talking billions and billions of mathematical computations. That's why these things are so huge.
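That distributional idea, "a word is known by the company it keeps," can be illustrated with a toy co-occurrence counter. This is a sketch of the principle only: real models compute these statistics over billions of tokens and learn dense representations rather than raw counts, and the tiny corpus below is made up for the example.

```python
from collections import Counter

def company_of(word: str, sentences: list[str], window: int = 2) -> Counter:
    """Count the words that appear within `window` positions of `word`."""
    counts: Counter = Counter()
    for sentence in sentences:
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == word:
                # Neighbors on either side of the target word, excluding itself.
                lo, hi = max(0, i - window), i + window + 1
                counts.update(tokens[lo:i] + tokens[i + 1:hi])
    return counts

corpus = [
    "the marketingprofs b2b forum is in boston",
    "ann handley leads marketingprofs content",
    "marketingprofs b2b forum speakers share marketing smarts",
]
print(company_of("marketingprofs", corpus).most_common(3))
```

Even on three sentences, "b2b" and "forum" dominate the company that "marketingprofs" keeps, which is exactly the proximity signal Chris describes.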

What that also means is that when you are constructing the prompts to direct these machines, if you're using the right words and a lot of them, you get better results. If I put in 'dog on a skateboard' into an image generator, we're going to get a dog on a skateboard, but it's going to be the machine's choice. If I put in '12-year-old Pitbull Shar-pei mix with black and white fur and a red collar on a Tony Hawk 2012 skateboard in a park on a sunny day at 9:00 AM in Colorado,' I am going to get a much better result because there are more words to work with and more things to judge proximity based on.

When we're talking about these prompts, these creative briefs, we've all had that creative director who is like, "What the heck? I don't know what to do with this creative brief. You need more details." That is what you are doing with these machines, you're building an extensive creative brief.

When I'm building a thing to do tweets, I will say the URL is TrustInsights.ai/ga4, the URL should be used in every tweet, use Google Analytics, analytics data, data science, use one of these four hashtags at least, do not use more than two per tweet, the total number of characters should be 280 or less including the URL. I give these specifications in these prompts, and I get what I want because I am specific.
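Specs like those are mechanical enough to check automatically before the CSV goes to the scheduler. Below is a minimal QA sketch; the hashtag set and the exact constant names are illustrative assumptions based on his description, not his actual checklist.

```python
# Illustrative QA gate for generated tweets, based on the specs Chris
# describes: the campaign URL in every tweet, at least one approved
# hashtag, no more than two hashtags, and 280 characters or fewer.
REQUIRED_URL = "TrustInsights.ai/ga4"  # assumption: the URL from his prompt
ALLOWED_HASHTAGS = {"#GoogleAnalytics", "#GA4", "#analytics", "#datascience"}

def violations(tweet: str) -> list[str]:
    """Return a list of spec violations; an empty list means the tweet passes."""
    problems = []
    if REQUIRED_URL not in tweet:
        problems.append("missing URL")
    hashtags = [w for w in tweet.split() if w.startswith("#")]
    if not any(h in ALLOWED_HASHTAGS for h in hashtags):
        problems.append("no approved hashtag")
    if len(hashtags) > 2:
        problems.append("more than two hashtags")
    if len(tweet) > 280:
        problems.append("over 280 characters")
    return problems
```

Any tweet with a nonempty violation list goes back to the model for another pass, which keeps the human review focused on tone and accuracy rather than counting characters.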

Andy Crestodina said at B2B Forum specificity correlates with conversion. I love that phrase. Specificity correlates with success with AI.

George: It's interesting because it's not a new principle. That's exactly where my brain was going as far as when we talk about marketing, or communication, or content marketing, one of the things I'm always saying is specificity wins the day. Know the people you're talking to, how they want you to talk to them, and go further. What you're saying is just that thing.

The other piece that popped into my brain is that I know there are marketers out there going, "Come on, that's going to take more time. I just want a dog on a skateboard." My brain immediately went to: right now you're spending three hours writing a blog article of maybe 1,000 words, when you could spend 30 minutes typing out a creative brief of what you want it to spit out. We can go back to the whole refinement conversation we've had. It's just this magical mix that you can get into.

Here is what's funny. This whole conversation has nothing to do with the starter questions that we had in a document that we were going to use for this conversation, because this is the world we live in. There are going to be things that always change. As a marketer, you're trying to have a plan, and you have goals. In those plans and goals, you have content. You hear about this conversation that we've had today, which is AI and machine learning, and you're trying to figure out where it all fits together. Hopefully, today's conversation has helped you.

Christopher, you have dropped a ton of value, but you, more than anybody I know, are in the weeds with this stuff, and you've had a journey, which means you have some wisdom around this topic. What are some final words of wisdom that you would leave with the audience as we send them back to their regularly scheduled day?

Christopher: The thing to think about with all this stuff is that right now people are experimenting and playing with it, which is awesome and what we want. But you also have to think about how you put it into production. All these tools have APIs, application programming interfaces, that software can talk to.

I'm going to give you a simple example. I have an SEO keyword list. I can take the search volume for that and use predictive analytics, machine learning, to forecast when in the next 52 weeks each term is going to be searched for the most. Pretty straightforward. This is old math, this is not new stuff. If I have a keyword list of 800 keywords, which I do for my company, I forecast all 800 to figure out which are going to be the top five keywords for each week. That used to be how we would figure out our content strategy.
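The transcript doesn't name the forecasting model, so here is a toy sketch of that workflow under a stated assumption: a seasonal-naive forecast (next year's week N equals last year's week N) standing in for whatever predictive model the team actually uses. The function names and data shapes are mine:

```python
# Toy version of the workflow: forecast weekly search volume per keyword,
# then rank keywords within each forecast week and keep the top five.
def seasonal_naive_forecast(history: list[int], horizon: int = 52) -> list[int]:
    """Repeat the most recent 52-week pattern forward for `horizon` weeks."""
    season = history[-52:]
    return [season[week % len(season)] for week in range(horizon)]

def top_keywords_by_week(volumes: dict[str, list[int]], k: int = 5) -> list[list[str]]:
    """For each forecast week, rank keywords by predicted volume and keep the top k."""
    forecasts = {kw: seasonal_naive_forecast(h) for kw, h in volumes.items()}
    horizon = len(next(iter(forecasts.values())))
    return [
        sorted(forecasts, key=lambda kw: forecasts[kw][week], reverse=True)[:k]
        for week in range(horizon)
    ]
```

With 800 keywords and 52 weeks of history this is still a trivially small computation, which is Christopher's point: the forecasting half is old, well-understood math.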

Then ChatGPT and the GPT models came along. We played with it, and then we looked at the little code button and said now we can put this into production. Now what we do is we take the top five keywords every week and feed that to the AI and say, "Write me five blog outlines for the week." Now I have the top content for that week in prewritten first drafts that I can then hand off to a writer to clean up.
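Wiring that weekly keyword list into a text-generation API might look like the sketch below. The prompt wording is adapted from what Christopher describes; the model name and client usage follow the OpenAI Python library as of this writing, but treat those details as assumptions to verify against the current API documentation:

```python
# Sketch: turn the week's top forecast keywords into blog-outline drafts.
# `keywords` would come from the forecasting step described above.
def build_outline_prompt(keywords: list[str]) -> str:
    """Turn the week's top keywords into a single instruction for the model."""
    topics = ", ".join(keywords)
    return (
        "Write me five blog outlines for the week, one for each of these "
        f"search topics: {topics}. Each outline should have a headline "
        "and five section headings."
    )

def draft_outlines(keywords: list[str]) -> str:
    # Requires `pip install openai` and an OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_outline_prompt(keywords)}],
    )
    return response.choices[0].message.content
```

The output is a first draft, not a finished post; per the earlier discussion about bias and errors, a human writer still reviews and cleans up everything before it ships.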

I've gone from writing a cool prompt and making the thing do something to putting it into production, where now it scales. Instead of one blog post or 20 tweets, it's 200 tweets or 2,000 tweets. That is what is going to set apart the winners from the losers in this.

AI is not going to take your job. If you're a B2B marketer, AI is not going to be a B2B marketer. But a B2B marketer who uses AI is going to take the job of a B2B marketer who does not. That is the end of the game. If you are a marketer who is not using these AI tools, you are in danger, your career is in danger, because other people who are using these tools are operating better, faster, maybe even cheaper, and can get more done than you can just by the nature of these tools.

Those are my parting words of wisdom. A marketer who uses AI will beat out a marketer who does not use AI. It is like the first time a basketball player put on sneakers; suddenly, the game changed.

George: Marketing Smarts listeners, did you take lots of notes? I have to ask, what is your one thing, your number one execution opportunity after this podcast episode? Make sure you reach out and let us know in my inbox or on Twitter using the hashtag #MPB2B.

I also have to ask are you a free member of the MarketingProfs community yet? If not, head over to Mprofs.com/mptoday. You won't regret the additional B2B marketing education that you'll be adding to your life.

We'd like it if you could leave us a rating or review on your favorite podcast app, but we'd love it if you would share this episode with a coworker or friend. Until we meet in the next episode of the Marketing Smarts Podcast where we talk with Mark Schaefer about the why, what, and how of community and brand building for B2B marketing victory, I hope you do just a couple of things. One, reach out and let us know what conversation you'd like to listen in on next. Two, focus on getting 1% better at your craft each and every day. Finally, remember to be a happy, helpful, humble B2B marketing human. We'll see you in the next episode of the Marketing Smarts Podcast.
