There are several basic principles you should understand about our new digital co-workers. The first is that most current “artificial intelligence” is really nothing more than a set of sophisticated statistics.
Without delay, here is another.
The output of an “intelligent machine” is an opinion, not a fact.
This may be the most important thing you learn about intelligent tools. Whether it’s sophisticated matching, chatbot interactions, machine learning, natural-language processing, sentiment analysis or data models, the machine can only offer an opinion. Just because it comes from a computer doesn’t mean it’s either real or true.
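To make this concrete, here is a deliberately toy sketch (all names and word lists are hypothetical, not any vendor's actual method) of how a sentiment analyzer produces a scored opinion rather than a fact:

```python
# Toy illustration: a "sentiment analyzer" that returns a label and a
# confidence score -- an opinion derived from word statistics, not a
# fact about what the writer actually felt.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"bad", "hate", "terrible"}

def sentiment_opinion(text: str) -> dict:
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        # The model recognizes nothing here; its "opinion" defaults
        # to neutral with no confidence at all.
        return {"label": "neutral", "confidence": 0.0}
    label = "positive" if pos >= neg else "negative"
    return {"label": label, "confidence": max(pos, neg) / total}

# A mixed sentence is scored, not understood: the tie-break rule,
# not the writer's meaning, decides the answer.
print(sentiment_opinion("I love this terrible product"))
```

Notice that the output carries a confidence number precisely because the machine is offering a judgment, not reporting a truth.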
In the same way that humans have unconscious bias, machines have uncoded bias. They only know about things that are measured, quantified and given to them. Like people, they are bad at accounting for the things they can’t see and don’t know. Lacking any imagination whatsoever, their worldview is limited to the data in their possession.
What’s worse, machines can only know how things were in the past. Their opinions are limited to associations; this is like that. They can’t experiment, propose alternate scenarios or intervene to improve. When the world is not like it was yesterday, their work stumbles. They would be “happiest” if it were always yesterday.
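A minimal sketch of that limitation (hypothetical data and function names, purely illustrative): a “model” whose entire worldview is a frequency count of yesterday’s observations.

```python
# Toy illustration: a model that "predicts" only by counting what it
# saw in the past. When the world changes, its opinion does not.
from collections import Counter

def train_and_predict(history):
    # The model's entire worldview: frequencies from past observations.
    counts = Counter(history)
    # It always predicts yesterday's favorite -- association, not reasoning.
    return counts.most_common(1)[0][0]

yesterday = ["umbrella", "umbrella", "umbrella", "sunscreen"]
print(train_and_predict(yesterday))  # "umbrella", even if today is sunny
```

However sophisticated the real versions are, they share this shape: the past is the only evidence they have.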
Currently (and for the foreseeable future), our new digital “interns” are the worst sort of employee imaginable. They are literal-minded, opinionated, require extensive training, only stop when you tell them to, have no conscience and require retraining from scratch when something is not right. And still, we need to use them now while the technologists work to take us to the next level, because that next level depends on more data from us.
This means that we need to learn to argue with these machines and train our human employees to understand this new type of software that gives suggestions and opinions instead of facts. Like video gamers looking for the next hack, employees will need to monitor, understand, question and exploit the vulnerabilities of their tools and account for them. Digital employees are central to our future, but managing them is very different from managing people or older software.
With humans, a manager can afford to be imprecise or distracted. Trust can be expansive. With machines, every delegation, follow-up or training must be flawless. The effectiveness of an intelligent tool is entirely dependent on its manager. Humans can overcome bad management; machines cannot.
Ready for one more?
The company with the biggest database usually wins.
Data are the new infrastructure, the foundation of intelligent tools and new forms of business. Unlike old-fashioned enterprise computing where workflow was king, intelligent tools thrive on the boundless opportunity to discover patterns. For companies, this means that the need to clean up their data is urgent. For vendors, the credibility of their claims rests entirely on the volume and quality of data they use.
I know of one start-up that spent millions teaching machines to generate “fake data” to test algorithms, data models and sentiment analysis. They knew that, without the ability to really substantiate their claims, they would be placing the risk on their customers’ shoulders. If only this were the norm.
Many small start-ups need data so badly that they are willing to discount deeply to get them. It’s a high-risk, high-return gambit for companies that choose this path. It requires real confidence in the vendor and their funding, coupled with the internal capacity to weather big mistakes.
On the other hand, legacy companies, with massive troves of data, spend time scientifically validating their theoretical models before handing them to customers. For this reason, big companies have a lopsided advantage over small start-ups.
You may recently have heard someone refer to data as “the new oil.” It means that our world today is as dependent on data as it has been on petroleum, or more so. Start-ups building intelligent tools must have the equivalent of mining and exploration divisions to function.
So when exploring your options, always ask your potential suppliers these questions about their offerings:
- How does the machine form its opinion?
- What do we do if it’s wrong?
- Where do your data come from?
- How much data do you have? (Or, what is the sample size?)
Then, understand that we are at the beginning of building and using intelligent tools; there is much work ahead, and we will have to think about our machines differently from now on.