Adam Drake

Developing your AI BS detector (v2)

Introduction

In April 2018 I gave a lecture at an event in Toronto titled Developing Your AI BS Detector. That event was designed to be a discussion around “Rational AI in the Enterprise”, and since then I’ve found that others have been interested in hosting a version of the lecture as well.

Recently, I updated the slides a bit and changed the focus of the lecture somewhat to make it clearer and more accessible. I had a great opportunity to give version 2 of this lecture to a group organized by Army Futures Command.

As I usually do for my lectures, below are some of the slide headings or titles along with an overview of the material I spoke about for that particular slide.

You can also download the slides if you find that helpful.

Overview

The purpose of the lecture is really to help people understand what kinds of things might be red flags for AI-related projects, and to sharpen their skills at recognizing when AI might be getting thrown around as a manipulation tactic.

Definitions

  • AI is a field of study, not a thing.
  • AI is computers minimizing a cost function.

Or as my own modification of Tesler’s Theorem (AI is whatever hasn’t been done yet):

  • AI is a set of tools that do not exist YET for solving problems that have not been solved YET.

Once one of those conditions is violated, people no longer regard the thing as AI.
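The “computers minimizing a cost function” definition is concrete enough to sketch in code. Below is a minimal, hypothetical illustration (the cost function and learning rate are made up for the example): plain gradient descent driving a toy cost function toward its minimum, which is the mechanical core of most things sold as AI.

```go
package main

import "fmt"

// cost is a toy cost function with its minimum at x = 3.
func cost(x float64) float64 { return (x - 3) * (x - 3) }

// minimize runs plain gradient descent on cost, starting at x0,
// taking the given number of steps at the given learning rate.
func minimize(x0, rate float64, steps int) float64 {
	x := x0
	for i := 0; i < steps; i++ {
		x -= rate * 2 * (x - 3) // derivative of cost is 2(x-3)
	}
	return x
}

func main() {
	x := minimize(0, 0.1, 1000)
	fmt.Printf("minimum at x = %.4f, cost = %.6f\n", x, cost(x))
}
```

Nothing here “thinks”: the computer just repeatedly nudges a number in the direction that makes a cost smaller.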

Before getting too far into a lecture it’s important to get all the definitions settled. AI can be considered more of a field of study than a thing that is somehow acquired.

If a technology already exists then usually people do not consider that technology to be AI. Consider things like Siri or some of the various Google products. Many of those were absolutely considered to be AI only a few years before they became standard products and household names.

As technology is developed, or even as the tools to create the technology are developed, we usually cease thinking of the tools and the thing itself as futuristic, and in the case of AI that means we no longer consider the thing to be AI once it has been built.

Tesler’s Theorem states the concept succinctly: “AI is whatever hasn’t been done yet.”

AI = task automation

A good way to think about AI as it relates to the current product and solution offerings is to think in terms of task automation. In other words, if I had this AI thing, what tasks would I use it for? What current activities are humans engaging in that, if they had sufficiently advanced technology, the human would no longer do? This useless work can be called toil and is precisely the kind of activity that AI will be able to largely eliminate.

Some people become very nervous or defensive when considering the task automation aspects of AI, and this is unavoidable. However, that should not prevent honest people from having honest conversations about the topic of task automation. The goal, after all, is to automate away portions of work that do not require human effort or intervention, thus freeing the person in question to engage in more important and valuable pursuits. Those concerned about having their job automated away often do not see themselves as able to contribute in other ways, or believe their sole value lies in their process knowledge. As a leader, it is your role to help them understand that while they are valuable in their current role, they could add much more value if the repetitive tasks of that role no longer required their attention.

RED FLAGS

There are a few major red flags when talking about AI. The top ones I see can be loosely grouped into two categories: anthropomorphizing and selling.

  • Anthropomorphizing

The question of whether Machines Can Think is about as relevant as the question of whether Submarines Can Swim. – Dijkstra, EWD898, 1984

    The quote above from Dijkstra perfectly illustrates the risk in humans anthropomorphizing technology. By making it seem like the machine is doing something that a human does, we make the machine something to which you can relate, something more exciting, and those who play this horrible trick often use the resulting failures of intuition to swindle non-savvy consumers.

    It is much easier for a sales and marketing person for some AI vendor to sell magical technology if they can leave out any real explanation for how the technology works and instead focus on exploiting people’s intuitive understanding of concepts like thinking machines and so on. I once sat in on a pitch from a vendor who was trying to convince people that he had created machine learning systems that exhibited creativity due to him inducing stress and anxiety in the mind of the machine. All he really did was introduce some random perturbations in the weight vector of his machine learning algorithm. It was academically criminal, and thankfully I was able to ask probing questions in the sales pitch that revealed the vendor for the fraud that he was.

    If someone is anthropomorphizing the technology, run away. They are probably lying to you.

  • Selling

    Consider an Anomaly Detection AI that someone is trying to get you to purchase in order to solve your particular problem. They tell you that it is a 100% proprietary solution that nobody else has (i.e., it is not built on open-source). They say that the system has been proven in production, developed in the real world and tunable for your application, and it is extremely scalable. So scalable in fact that it can handle 2.3 billion calculations per node, per day, and can be parallelized across many nodes for even further scalability.

    That might sound like a reasonable and interesting sales pitch if you are receiving it from a vendor, or if you are a VC and are getting such a pitch from a potential portfolio company, but the code below satisfies all the criteria listed in the text above.

    // isFraud flags a transaction using a handful of hand-tuned thresholds:
    // foreign card, large amount, unusually many transactions in one day.
    func isFraud(ccIsForeign bool, amount float64, transactionsToday int) bool {
        return ccIsForeign && amount > 1000 && transactionsToday > 5
    }

    That code is reasonably similar to a fraud detection system I found in a company I once advised. You could make an argument that it is an expert/rules-based system, and therefore could be considered a type of AI. The system also worked rather well, and despite spending millions of dollars on a data science team to try to address the fraud problem by more sophisticated means, they did not find any improvement.

    If someone is trying to sell you some kind of advanced AI solution to a problem, it’s a good idea to confirm that what they are selling is actually advanced technology. Even more important, however, is the fact that said technology should actually improve upon the solution currently in place. Paying money for an expensive and complex solution that performs no better than the current solution is foolish and unnecessary.

  • Cost or proprietary technology

    FACTS: State-of-the-art AI/ML frameworks are open source and cost zero dollars, and accessing and preparing data is probably 95% (or more) of the work. There is no licensing cost for the technical tools required to do state-of-the-art machine learning.

    Therefore, if someone is claiming that they have some kind of magical proprietary technology then they are misleading you for two main reasons. First, their technology is almost certainly the same open source stuff everyone else is using. Since the open source systems are the best you can get in most cases, any proprietary solution offers only a marginal improvement over the current freely-available standard if it offers any improvement at all.

    Second, and more importantly, even if they have a fantastic machine learning solution they will still need access to the underlying systems and data, which must be connected and cleaned, in order to feed the algorithms themselves. Herein lies the problem! Most work in machine learning and data science, from a practical perspective, involves all the janitorial work of acquiring data from multiple sources, cleaning it up, normalizing it, tying together disparate systems and data sets, and so on. The company selling you the machine learning system is going to make the bulk of their money off of you by charging you to do all of this cleanup and integration work. In other words, the vendor is going to make their money by being the technological equivalent of a maid.

    Don’t be swindled by a vendor. If your systems and data are not set up to appropriately use such a system internally anyway, you will not be able to make use of whatever machine learning or AI stuff they are trying to sell you. Before reaching for AI or machine learning, you will have to get your technological house in order, and there is no need to pay a vendor a disgusting amount of money to attempt to do that for you, especially since they will likely fail for lack of the domain knowledge your own staff already has.

  • Patents

    In the realm of technological advancement, patents really do not make much sense. First of all, you cannot patent an algorithm, so that rules out truly patenting most machine learning or AI approaches. Second, the top machine learning and AI research is, as mentioned above, free and open source. Research papers and competition write-ups are published to tell the rest of the research community how the state of the art is evolving. This information is almost always freely available to anyone who cares to look.

    Furthermore, when a system is patented, it becomes public. In my experience, companies who have interesting approaches to machine learning and AI problems never patent such things for the precise reason that then all of their competitors would also be aware of the approach. Therefore, the most interesting machine learning and AI applications are usually kept private as trade secrets.

    In other words, if something in this field is patented then it is some twist on research that is already available to the public. Therefore, the only reason to pay the cost of a patent is to capitalize on the misunderstanding of a patent’s value among the buyers or users of the technology.

    Lastly, a little-known fact is that at the larger companies, researchers are paid bonuses for patents they get granted to the company. The researcher gets money and something else to put on their resume, the company gets more marketing material to add to their dubious claims of how technologically advanced they are, and the potential customer gets swindled by overpaying for technology that is probably already open source.

    If someone is invoking some kind of patent list to support why their solution is valuable, run away.

Separating wheat from chaff

The questions below are part of the set of questions I ask when trying to evaluate if a project team, startup, or other organization is actually doing something interesting in the field of AI and machine learning, or if they are just trying to capitalize on the current hype surrounding the field.

What specific problem are you solving?

The goal of this question is to understand if they are solving an actual problem, or if they see AI as a solution in search of a problem. Furthermore, it is an opportunity for the person to demonstrate understanding of the actual organizational issue they are trying to address.

What is the most naive solution? Did you try that?

If the previous question demonstrates an understanding of the business problem, and not just someone trying to hawk their AI wares, then the next logical question is whether they tried the simplest or most naive solution to the problem. Often in business cases, the simple solution is perfectly adequate, cheap, and allows time and money to be spent on other difficult problems that do not lend themselves to simple solutions. Verify that the simple solution has been explored and found to be unacceptable before proceeding with more elaborate solutions.

What about the next-most-naive solution?

As above, verify that the simpler solutions have been tried. You might not be able to do time series forecasting with some linear regression approach, but you might be able to get it done just fine with some basic smoothing methods.
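A basic smoothing method of the kind mentioned above can be sketched in a few lines. This is simple exponential smoothing, one of the standard naive forecasting baselines; the series and smoothing factor are made up for the example.

```go
package main

import "fmt"

// smooth computes simple exponential smoothing over a series and returns
// the final smoothed value, which serves as a one-step-ahead forecast.
// alpha in (0,1] controls how quickly older observations are forgotten.
func smooth(series []float64, alpha float64) float64 {
	s := series[0]
	for _, x := range series[1:] {
		s = alpha*x + (1-alpha)*s
	}
	return s
}

func main() {
	demand := []float64{100, 102, 101, 105, 104} // toy daily demand
	fmt.Printf("forecast: %.2f\n", smooth(demand, 0.5))
}
```

If a baseline like this forecasts adequately, a deep learning pipeline for the same series is expense without benefit.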

Why is what you’re doing considered AI?

This is a fun question because it serves to sanity check the understanding of the person selling the AI solution. It refocuses the conversation on the technology, after having talked about the actual business problem for some time, and sets things up for the next question.

Are you using a framework?

If so, why is your solution special? If not, explain why you aren’t wasting time.

This question, and the follow-up questions above, are something of a trick. There is almost no way to answer without exposing one of two facts: either most state-of-the-art AI is free and open source, and the solution is therefore built with largely the same few frameworks everyone else uses; or the person or organization in question has eschewed the state of the art to build something proprietary on their own, which probably came at great expense and offers marginal benefit, if any.

Heilmeier Catechism

As it turns out, these questions were in some sense formulated by George H. Heilmeier, who was the DARPA Director from 1975 to 1977. The Heilmeier Catechism is as follows.

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
  • How is it done today, and what are the limits of the current practice?
  • What is new in your approach and why do you think it will be successful?
  • Who cares? If you are successful, what difference will it make?
  • What are the risks?
  • How much will it cost?
  • How long will it take?
  • What are the mid-term and final “exams” to check for success?

Conclusion

There is a lot of hype surrounding the topic of AI at the moment, and unfortunately there are many vendors, project/program managers, and others who are more than happy to attempt to use the current hype as a way to bolster their resumes or bank accounts.

It is critical when evaluating possible AI solutions that pragmatism remains paramount and the focus on the actual problem is not lost. When that effort succeeds, it often turns out that no AI solution is really needed at all. Rather, a simple solution using known methods, one that lets the organization solve the problem adequately and move on to the next one, is almost always the most beneficial.

When someone is trying to sell you AI, be suspicious.