Leading an organization in today’s climate can be difficult, especially if you are trying to move it forward by embedding a modern AI solution. It may seem easy at first: you can read virtually everywhere online about the development of Data Science, the miracles of machine learning, or the advances of AI. It makes sense that you want to improve your competitive advantage by bringing these new capabilities into the organization and reaping benefits immediately. Unfortunately, you will not be met with celebratory confetti, powder cannons, or Tag Team’s song “Whoomp! (There It Is)”. Simply put, “It ain’t easy”. In this write-up we will explore the three most common obstacles that leaders of organizations of any size will likely encounter and be forced to overcome in order to move the organization forward with a modern AI solution.
Lesson #1: Leaders should help their organizations see they are trying to build planes instead of birds.
Most organizations have an expert or group of experts they have been leaning on for some time and believe to be critically important to the company. Many leaders are interested in top-line growth, market share, and scaling their enterprise, but they are stuck. At times leaders feel they are forced to choose between “experts” and “AI solutions”. This moment hits hard whether you work in a startup or run a multi-billion-dollar organization; the problem is ubiquitous.
If an executive is brave enough to take on machine learning, he or she will find that many individuals in the organization begin to squirm. Call it fear, distrust, or an uneasiness with too much technology too fast. Here is how you can tell you have struck a nerve:
- I have gained my knowledge over 30 years; how can this solution be trained in a single day?
- Why are we using more and different data inputs to make these predictions?
- If this solution is so good, why did it miss predictions in several cases?
So how should organizations respond when an AI solution produces results equal or superior to those of the status-quo solution, but arrives at that outcome differently?
Do you know when humans finally took off with flight? It was when the Wright brothers stopped imitating birds and started down a different path, toward wind tunnels and aerodynamics. Flying was about getting into the air, maintaining flight, and landing safely; it was never about fooling birds into thinking a plane was a bird. The fact that an AI solution doesn’t do the same things as your company’s expert could be the absolute best thing that happens to your organization.
Lesson #2: Leaders should help organizations determine what an apples-to-apples comparison looks like.
At the start of the company’s transformation from “old world” to “new world” thinking, there is typically an organizational process or business area that the leadership team will want to focus on because they see it as low-hanging fruit for the organization.
It is important to realize that once leadership decides to start a Data Science initiative, they will likely experience a series of efforts to undo it, which can feel like an attack on the success of the project. These efforts are in most cases unintentional, but in some cases intentional; they happen on virtually every Data Science project, no matter who is at the helm.
Here is how a leader can tell whether his or her initiative might be at risk, based on the comments and questions they might hear:
- Is your machine learning effort reproducible?
- Your solution missed obvious cases – how do you explain that?
- Can you explain your model to me?
These questions are really ones of comparison between the “experts” and the “AI solutions”. Many of the organization’s brightest people will ask these questions, and in many cases, they seem like good questions to ask, rooted in helping the organization. Let me show you why those who are asking these questions may actually be hurting their organization instead of helping it.
Concerning reproducibility, an enterprise-grade machine learning solution is always reproducible. I would make the claim that this is just good science, although many scientists seem to struggle with reproducibility themselves. In many cases, when the expert is asked to meet the same criteria, he or she couches what he or she does as a certain “je ne sais quoi” – that unquantifiable thing, or simply “art”.
Concerning missed cases when predicting, machine learning solutions are not perfect, although by design they strive to be. Unfortunately, this question acts like a David Copperfield magic trick: it has the leadership team focus on my left hand while the coin is really in my right. A well-designed, real-world solution should not be measured only on how close it comes to 100% accuracy, but on whether it bests the organization’s status quo. Let me explain:
- If an organization has no current baseline for a Data Science initiative, it should use random chance as the starting point to beat (that is, 50% for a binary decision).
So, if the AI solution comes in at 75% accuracy, it does not help the organization for experts to say the AI solution missed 25%, when in fact it beat random guessing (which is what the organization is doing in this example) by 25 percentage points.
- If an organization has experts who use a series of business rules (if this, then that) to achieve 75% accuracy, that figure should be the baseline to improve upon.
So, if the AI solution comes in at 90% accuracy, besting the current solution by 15 percentage points, it does not help the organization for experts to say, “Let’s just keep what we have in place, since the AI solution misses 10%.” The short sketch below makes this baseline arithmetic concrete.
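Here is that evaluation as a minimal Python sketch, assuming a simple accuracy metric and using the illustrative numbers above (the function name is hypothetical): the model is reported by its lift over the agreed baseline rather than by its distance from 100%.

```python
def lift_over_baseline(model_accuracy: float, baseline_accuracy: float) -> float:
    """Improvement of the model over the agreed baseline, in percentage points."""
    return round((model_accuracy - baseline_accuracy) * 100, 1)

# Case 1: no existing process, so the baseline is random guessing (50% on a binary decision).
print(lift_over_baseline(model_accuracy=0.75, baseline_accuracy=0.50))  # 25.0 points better than random

# Case 2: experts applying business rules already reach 75%, so that becomes the baseline.
print(lift_over_baseline(model_accuracy=0.90, baseline_accuracy=0.75))  # 15.0 points better than the experts
```

Framing the result this way keeps the conversation anchored on the status quo rather than on an idealized 100%.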
Concerning “explain your model”, there is a range of solutions when it comes to machine learning. Some models are simple in their approach and others more complex. The question can be answered on a variety of levels – everything from what data was used to how a convolutional neural network is designed. While all these things can be explained, they may not be immediately understood by an organization’s expert.
When these same questions are turned back on the organization’s expert about the current process, the leadership team might hear responses like these:
- I did what I did out of my own experience.
- I will not be able to reproduce my results in every case, because it is an “in the moment” thing.
- Nobody is perfect; of course I am going to miss cases. I am only human.
So how should organizations carry out an apples-to-apples comparison between a current and a future solution? Easy – either agree that the two are not the same, or be consistent in how you evaluate both. And if you want to judge the success of a model, compare it to what exists in your organization today, not to what it ideally should be.
Lesson #3: Leaders should stretch their organizations to go beyond their own intellect.
If the leadership of the organization has made it this far with their initiative, they may next feel a bit like Alice in Wonderland. Expect the AI solution you intended to be so easy to implement to go through many bizarre changes inside your organization. Just as Wonderland has mysterious potions and cakes all along the way that must be consumed to grow and shrink, an organization may have to do the same in hopes of being the right size for the proverbial hall of doors.
Imagine this scenario: you are a CEO who oversees 48 convenience stores distributed across the county, and you want to build a machine learning model to help you price gasoline optimally based on your inventory level and what the market will bear. To contrast the future state with the current state, you already have 18 experts with a combined 180 years of industry experience doing your pricing for you. Let’s further say you were able to build a model that reduces the group of experts’ unrealized profit by 66%. Ready to deploy it and start making money? Not so fast. Here is the next blow the executives will receive: one, if not all, of your experts will want you to explain what you built and why they should use it.
Even the simplest AI solutions can’t fit in the head of a single individual, but an organization’s expert will ask for that anyway. Let’s face it: 15 years of daily gasoline price data for the United States, micro- and macro-economic data, weather data, and socioeconomic data, plus all the linear algebra, probability and statistics, multivariable calculus, and optimization strategies deployed in the AI solution, may be hard to comprehend. That is a lot for anyone to consume, and it is harder still for experts who have been working with the same two dozen assumptions for the last 10 years or operating “straight from the gut”.
So how should organizations handle an industry expert’s request to know all the ins and outs of an AI solution? Well, leadership needs to know what is at stake when they answer this question. If you answer it purely technically, you may not win adoption for the solution. If you instead work toward having your experts fully understand it, or change your machine learning approach to fit what they are willing to comprehend – such as modifiable and intuitive data inputs and no fancy math, methods, or techniques – then the organization has allowed the experts to govern how much success it can participate in and what it will actually be able to deploy.
Practically, it looks like this (with a quick check of the arithmetic after the list):
- Current State: experts miss $0.06 per gallon
- Future State: Model_1 misses $0.04 per gallon (a 33% improvement over the current state)
- Future State: Model_2 misses $0.02 per gallon (a 66% improvement over the current state)
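Here is that arithmetic as a small Python sketch, using only the illustrative values above (the function name is hypothetical): each model’s improvement is the percentage reduction in missed profit relative to the expert baseline.

```python
EXPERT_MISS = 0.06  # dollars of profit per gallon the current expert process leaves unrealized

def improvement_over_experts(model_miss: float, expert_miss: float = EXPERT_MISS) -> float:
    """Percentage reduction in missed profit per gallon, relative to the expert baseline."""
    return round((expert_miss - model_miss) / expert_miss * 100)

print(improvement_over_experts(0.04))  # Model_1: 33% improvement over the experts
print(improvement_over_experts(0.02))  # Model_2: 67% improvement (the roughly 66% cited above)
```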
In most cases, an organization’s leadership will end up taking Model_1, because it is a simpler model that can be explained to the organization’s group of experts in a reasonably palatable way. This highlights a phenomenon in the applied side of an AI solution: real-world problems tend to be multi-objective. In this case, the multi-part objective being optimized is 1) model performance and 2) what the organization’s experts will accept as truth.
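One way to picture this trade-off is as a constrained selection: among the candidate models, deploy the most performant one the experts will stand behind. A minimal sketch, assuming hypothetical names and the illustrative numbers above:

```python
# Each candidate: (name, profit missed in $ per gallon, will the experts accept it?)
candidates = [
    ("Model_1", 0.04, True),   # simpler, explainable to the expert group in a palatable way
    ("Model_2", 0.02, False),  # more performant, but not accepted as truth by the experts
]

# Keep only the models the experts will accept, then take the one that misses the least profit.
accepted = [(name, miss) for name, miss, ok in candidates if ok]
chosen = min(accepted, key=lambda c: c[1])
print(chosen[0])  # Model_1 -- the best model the organization can actually deploy
```

This treats expert acceptance as a hard constraint; Lesson #3’s point is that leadership can work to relax that constraint rather than simply accept it.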
Go Forth and Conquer
A well-designed AI solution tries to ease the burden the organization will likely experience when embarking on this sort of transformational journey. Having reliable strategies that both move the organization forward and bring others along proves to be the difference between machine learning success and failure in the enterprise. Business leaders should see that using this technology demands more agile, change-friendly organizations. It most certainly requires more leadership from more people, not just top management, and more strategic sophistication. At the most basic level, an organization must have a much greater capacity to execute bold strategic initiatives rapidly while minimizing the size and number of bumps in the road that slow it down.