Adam Drake

Artificial Intelligence and The Heilmeier Catechism

As part of my advising work, I encounter a plethora of companies. Most of these companies want to begin, improve, or continue initiatives surrounding Artificial Intelligence (AI), Machine Learning, Data Science, Data Analytics, or other such labels applied to a project wherein data is used to accomplish some business objectives.

The difficulty is that these labels often obfuscate the true nature of the project in order to make it sound more exciting, and to entice funding from those with the power to grant it. This is a travesty of the highest degree. The practice intentionally obscures the point of the project in the first place, which is not to engage in AI research, but to build something valuable for the customer and therefore the business.

Therefore, I and other pragmatic data professionals spend a lot of our time doing what is commonly referred to as expectation management. In other words, I spend a lot of time trying to bring people back to reality and undoing the detrimental effects of other people’s lies.

This is part and parcel of doing technology work in a rapidly advancing field. In fact, cleaning up after this constant assault of hype from shysters was a task well handled by George Heilmeier, who ran DARPA from 1975 to 1977.

Heilmeier employed a catechism very similar to the one I use, as outlined in my talk Developing Your AI BS Detector. In that talk, I define a set of questions to ask in order to extract the true business value of a proposed AI project or investment. I could just as well have given a lecture on the Heilmeier Catechism. Let’s go over what that is, exactly.

The Heilmeier Catechism

When Heilmeier was running DARPA, he had to have a way to evaluate research programs that seemed promising and decide which ones should get funding. In order to do this, he developed a series of questions designed to provide critical information. These guiding questions are known as the Heilmeier Catechism.

DARPA operates on the principle that generating big rewards requires taking big risks. But how does the Agency determine what risks are worth taking?

George H. Heilmeier, a former DARPA director (1975-1977), crafted a set of questions known as the “Heilmeier Catechism” to help Agency officials think through and evaluate proposed research programs.

  1. What are you trying to do? Articulate your objectives using absolutely no jargon.
  2. How is it done today, and what are the limits of current practice?
  3. What is new in your approach and why do you think it will be successful?
  4. Who cares? If you are successful, what difference will it make?
  5. What are the risks?
  6. How much will it cost?
  7. How long will it take?
  8. What are the mid-term and final “exams” to check for success?

Taken together, this set of questions should be very effective at separating the wheat from the chaff when allocating time, money, and people to a project or portfolio of projects. I take some issue with questions 6 and 7 since, by definition, if you’re building something new that has not been built before, it is impossible to know with precision how long it will take or how much it will cost. However, research organizations (and any high-output organization, for that matter) have long employed Agile-like work practices, so in the modern context I’m confident that points 6 and 7 can be evaluated on an ongoing basis rather than answered once up front.

I think the first 4 questions are critical when evaluating a project that is pitched as having an AI focus. The reason, as mentioned in my BS Detector article, is that the AI part is usually the least important part of the project. It’s often not even a critical component. Usually, it’s thrown on top purely as an attempt to differentiate from competitors in the funding or business development process. This is harmful because the inflated expectations resulting from the differentiation must eventually be addressed.

In fact, employing only the first question of the Heilmeier Catechism would likely suffice. If you need AI or other Machine Learning jargon to describe what you’re doing, or if what you’re doing doesn’t sound interesting without it, then perhaps you should reconsider what you’re doing in the first place.

The main goal of any effort, or to use the German phrasing, the Schwerpunkt, is to build something valuable for customers who are willing to pay for it. Customers don’t care if the thing they’re paying for is AI or not. They just want their problem solved.

In the same vein, if you are considering projects for funding or for the allocation of people and resources generally, it may be helpful to employ some or all of the Heilmeier Catechism. You may be able to determine who is actually building something amazing, and who is simply putting the proverbial lipstick on a pig.