Everyone has an opinion about ChatGPT and AI. Engineers and entrepreneurs see it as a new frontier: a bold new world in which to invent products, services, and solutions. Social scientists and journalists are worried, with prominent New York Times columnist Ezra Klein calling it an “information warfare machine.” What hath God wrought?
Let me just say up front: I see enormous possibilities here. And as with all new technologies, we cannot fully predict the impact quite yet. There will be problems and failures, but the ultimate story is “hooray.”
What is ChatGPT?
To put it quite simply, this technology (and there are many others like it) is what is often called a “language machine”: it uses statistics, reinforcement learning, and supervised learning to model how words, phrases, and sentences are used together. While it has no real “intelligence” (it doesn’t know what a word “means,” but it knows how it is used), it can very effectively answer questions, write articles, summarize information, and more.
Engines like ChatGPT are “trained” (programmed and reinforced) to mimic writing styles, avoid certain types of conversations, and learn from your questions. In other words, the more advanced models can refine answers as you ask more questions, and then store what they learn for other users.
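To make that “knows how a word is used, not what it means” idea concrete, here is a deliberately toy Python sketch (my own illustration, not OpenAI’s code). It never stores a definition of any word; it only counts which words follow which, and that alone is enough to “predict” text:

```python
# Toy illustration of the statistical core of a "language machine": count
# how words are used next to each other, then predict the most likely next
# word. Real models like GPT-3.5 learn billions of parameters instead of
# raw counts, but the "usage, not meaning" principle is the same.
from collections import Counter, defaultdict

corpus = (
    "recruiting is hard . good recruiting takes practice . "
    "training is hard . good training takes planning ."
).split()

# Build a "bigram" table: for each word, count what tends to come next.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word seen most often after `word` (ties go to the first seen)."""
    return following[word].most_common(1)[0][0]

print(predict_next("good"))   # -> "recruiting"
print(predict_next("takes"))  # -> "practice"
```

The program has no idea what “recruiting” means, yet it produces plausible continuations, which is exactly the point above.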
While this is not a new idea (we’ve had chatbots for a decade, including Siri, Alexa, Olivia, and more), the level of performance in GPT-3.5 (the latest version) is astounding. I’ve asked it questions like “what are the best practices for recruiting?” or “how do you build a corporate training program?” and it answered pretty well. Yes, the answers were quite elementary and somewhat incorrect, but with training, they will clearly get better.
And it has lots of other capabilities. It can answer historical questions (who was president of the U.S. in 1956?), it can write code (Microsoft CEO Satya Nadella believes 80% of code will be automatically generated), and it can write news articles, information summaries, and more.
One of the vendors I talked with last week is using a derivative of GPT-3 to create automatic quizzes from courses and serve as a “virtual teaching assistant.” And that brings me to the potential use cases.
(P.S. In some ways the chatbot itself may be a commodity: There are at least 20 startups with highly funded AI teams building derivative or competing products.)
How can ChatGPT and similar technologies be used?
Before I get into the market, let me talk about why I believe this will be so enormous. These systems are “trained and educated” by the corpus (database) of information they index. The GPT-3 system has been trained on the internet and some highly validated data sets, so it can answer a question about almost anything. That also makes it kind of “stupid,” because “the internet” is a jumble of marketing, self-promotion, news, and opinion. Honestly, I think we all have enough problems figuring out what is real (try searching for health information on your latest affliction; it’s frightening what you find).
The Google competitor to GPT-3 (rumored to be DeepMind’s Sparrow) was built with “ethical rules” from the start. According to my sources, it includes rules like “do not give financial advice,” “do not discuss race or discriminate,” and “do not give medical advice.” I don’t know yet whether GPT-3 has this level of “ethics,” but you can bet that OpenAI (the company building it) and Microsoft (one of its biggest partners) are working on it.
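None of these vendors publish their rules, but the basic architecture is easy to picture. Below is a schematic, entirely hypothetical Python sketch of a rules layer sitting in front of the model, screening requests before any answer is generated. (Real systems like Sparrow enforce rules largely through training and human feedback, not keyword lists; this just shows the gating idea.)

```python
# Hypothetical sketch of an "ethical rules" gate in front of a language
# model: refuse prohibited topics before calling the model at all. The
# topics and trigger phrases below are invented for illustration.
PROHIBITED_TOPICS = {
    "financial advice": ["should i invest", "which stock", "buy or sell"],
    "medical advice": ["diagnose", "what medication", "is this symptom"],
}

def check_rules(user_message: str):
    """Return the name of a violated rule, or None if the message is allowed."""
    text = user_message.lower()
    for topic, phrases in PROHIBITED_TOPICS.items():
        if any(phrase in text for phrase in phrases):
            return topic
    return None

def respond(user_message: str) -> str:
    violated = check_rules(user_message)
    if violated:
        return f"I can't give {violated}. Please consult a qualified professional."
    return generate_answer(user_message)  # hand off to the underlying model

def generate_answer(user_message: str) -> str:
    return "(model-generated answer)"  # stand-in for the actual model call

print(respond("Which stock should I buy?"))  # -> refusal message
```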
So what I’m implying is that while “conversation and language” are important, some very erudite people (I won’t mention names) are actually kind of jerks. And that means chatbots like ChatGPT need refined, deep content to build industrial-strength intelligence. It’s OK if the chatbot works “pretty well” when you’re using it to get past writer’s block. But if you really want it to work reliably, you want it sourcing valid, deep, and expansive domain data.
An example would be Elon Musk’s over-hyped self-driving software. I, for one, don’t want to drive, or even be on the road, with a bunch of cars that are 99% safe. Even 99.9% safe isn’t enough. Ditto here: if the corpus of information is flawed and the algorithms aren’t constantly checking for reliability, this thing could be a “disinformation machine.” And one of the most senior AI engineers I know told me it’s very likely that ChatGPT will be biased, simply because of the data it tends to consume.
Imagine, for example, if the Russians used GPT-3 to build a chatbot about “United States Government Policy” and pointed it at every conspiracy-theory website ever written. It seems to me this wouldn’t be very hard, and if they put an American flag on it, many people would use it. So the source of information matters.
AI engineers know this well, yet many believe that “more data is better.” OpenAI CEO Sam Altman believes these systems will “learn” their way past invalid data as the dataset gets bigger. While I understand that idea, I tend to believe the opposite. I believe the most valuable business uses of OpenAI’s technology will come from pointing these systems at refined, smaller, validated, deep databases we trust. (Microsoft, as a major investor, has its own Ethical Framework for AI, which we have to assume will be enforced through the partnership.)
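What does “pointing” a model at a trusted database actually look like? One common pattern, often called retrieval-augmented generation, retrieves the most relevant passage from your validated corpus and places it in the prompt, so the model answers from that source rather than from “the internet.” Here is a minimal sketch; the word-overlap scoring is a toy stand-in for real semantic search, and call_model is a hypothetical placeholder for any GPT-style API:

```python
# Minimal sketch of grounding a model in a small, trusted corpus instead of
# the open internet: retrieve the best-matching trusted passage and instruct
# the model to answer from it alone.
TRUSTED_PASSAGES = [
    "Structured interviews with consistent scoring improve hiring accuracy.",
    "Onboarding programs should pair new hires with an experienced buddy.",
    "Compliance training must be refreshed annually and tracked per employee.",
]

def retrieve(question: str) -> str:
    """Return the trusted passage sharing the most words with the question (toy scoring)."""
    q_words = set(question.lower().split())
    return max(TRUSTED_PASSAGES, key=lambda p: len(q_words & set(p.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = (
        "Answer using ONLY the trusted source below.\n"
        f"Source: {context}\n"
        f"Question: {question}\n"
    )
    return call_model(prompt)

def call_model(prompt: str) -> str:
    return "(model answer grounded in the trusted source)"  # placeholder

print(retrieve("How do structured interviews improve hiring accuracy?"))
```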
Of the demos I’ve seen over the years, the most impressive solutions focus on a single domain. Olivia, the AI chatbot developed by Paradox, is smart enough to screen, interview, and hire a McDonald’s employee with amazing effectiveness. Another vendor built a chatbot for bank compliance that operates as a “chief compliance officer,” and it works very well.
Imagine, as I discuss in the podcast, if we built an AI that pointed to all our HR research and professional development. It would be a “virtual Josh Bersin” and might even be smarter than I am. (We are starting to prototype this now.)
I saw a demo of a system last week that took existing courseware in software engineering and data science and automatically created quizzes, a virtual teaching assistant, course outlines, and even learning objectives. This kind of work typically takes a lot of cognitive effort by instructional designers and subject matter experts. If we “point” the AI toward our content, we suddenly release it to the world at scale. And we, as experts or designers, can train it behind the scenes.
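As a rough sketch of how a tool like that might work under the hood: split the courseware into chunks, then prompt a GPT-style model to write quiz questions about each chunk. The prompt wording and the call_model placeholder below are my illustrative assumptions, not the vendor’s actual implementation:

```python
# Hypothetical sketch of auto-generating quizzes from existing courseware:
# chunk the text and ask a GPT-style model for questions about each chunk.
QUIZ_PROMPT = (
    "You are a teaching assistant. Read the course excerpt below and write "
    "{n} multiple-choice questions, each with four options and the correct "
    "answer marked.\n\nExcerpt:\n{excerpt}\n"
)

def make_quiz(course_text: str, questions_per_chunk: int = 3, chunk_size: int = 1500):
    """Split the course into fixed-size chunks and request quiz questions for each."""
    chunks = [course_text[i:i + chunk_size] for i in range(0, len(course_text), chunk_size)]
    return [call_model(QUIZ_PROMPT.format(n=questions_per_chunk, excerpt=chunk)) for chunk in chunks]

def call_model(prompt: str) -> str:
    return "(model-generated quiz questions)"  # placeholder for a real API call

print(make_quiz("Supervised learning trains a model on labeled examples.")[0])
```

The same pattern extends to the course outlines and learning objectives mentioned above: only the prompt changes.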
Imagine the hundreds of applications in business: recruiting, onboarding, sales training, manufacturing training, compliance training, leadership development, even personal and professional coaching. If you focus the AI on a trusted domain of content (most companies have oodles of this), it can solve the “expertise delivery” problem at scale.
Where will this market go?
As with any new technology, the pioneers often end up with arrows in their backs. So while ChatGPT seems miraculous, we have to predict that innovators will advance, extend and refine this quickly. I would be willing to bet that most VC firms are now writing blank checks to startups in this area, so there’s plenty of competition to come.
My gut feeling is that OpenAI and Microsoft will end up competing with many other players (Google, Oracle, Salesforce, ServiceNow, Workday, etc.), so every major vendor will “bulk up” on AI and machine-learning expertise. If Microsoft builds OpenAI APIs into Azure, then thousands of innovators will build domain-specific offerings, new products, and creative solutions on that platform. It’s still too early to tell, but my guess is that industry-specific and domain-specific solutions will win out.
Imagine the number of “opportunity spaces” out there to consider. Leadership development, fitness coaching, psychological counseling, technical training, customer service, the list goes on and on. And that’s why, as early as this market remains, I still believe the opportunity is “enormous.” (I recently tried to get help with PayPal through its chatbot and was so frustrated I decided to shut down my account.)
I liken this tech to “mobile computing.” In the early days, we saw it as an “add-on” to our corporate systems. Then it grew, expanded, and matured. Today, most digital systems are designed mobile-first; companies build entire tech stacks around mobile, and we study behavior, markets, and consumers through their phones. The same thing will happen here. Imagine when you can see all the questions your customers ask about your products. The opportunity is just staggering.
And as I discuss in the podcast, a lot of jobs will change. I just did an analysis of the jobs immediately impacted by ChatGPT (editors, reporters, analysts, customer service agents, QA engineers, etc.) and found that of the roughly 10.3 million jobs open today, about 8% (roughly 800,000) will be immediately affected. These jobs won’t go away, but they’ll be upgraded and enhanced by these systems over time. (And new jobs, like “chatbot trainer,” are already being created.)
There’s much more to discuss on this topic, so I invite you to join us as a Josh Bersin Academy or Corporate Member to explore it further. And if you have your own experience, or you’re building something cool, we definitely want to see it.
Onward and upward: Let’s think about this as one of the brightest stars in our future, and try to prevent it from getting out of control.
*
Learn more from keynoter Josh Bersin about the impact of ChatGPT and other emerging technologies at the free, online HR Technology Conference Virtual, Feb. 28-March 2. Click here for more information.