
Good Article on Why AI Projects Fail


Photo: high-angle photo of a robot (Alex Knight, Pexels.com)

Today I ran across an article that I found very good because it focuses on lessons learned, which can benefit everyone interested in these topics. It contains a good mix of problems discussed at a non-technical level.

Below is the link to the article, along with my commentary on the top three items listed.

https://www.cio.com/article/3429177/6-reasons-why-ai-projects-fail.html

Item #1: 

The article starts by discussing how the “problem” being evaluated was misstated using technical terms. It led me to believe that at least some of these efforts are conducted “in a vacuum.” That was a surprise given the cost and strategic importance of getting these early-adopter AI projects right.

In Sales and Marketing you start with the question, “What problem are we trying to solve?” and evolve that to, “How would customers or prospects describe this problem in their own words?” Without that understanding, you can neither vet the solution initially nor quickly qualify the need for it when speaking with those customers or prospects. That leaves a lot of room for error when transitioning from strategy to execution.

Increased collaboration with the business side would likely have helped. This was touched on at the end of the article under “Cultural challenges,” but its importance seemed to be downplayed. Lessons learned are valuable, especially when you are able to learn from the mistakes of others. To me, this should have been called out early as a major lesson learned.

Item #2: 

This second area had to do with the perspective of the data used to train the ML algorithm, whether that was the angle of the subject in photographs (overhead from a drone vs. horizontal from the shoreline) or the type of customer data evaluated (such as data from a single source).

That was interesting because it appears that assumptions may have played a part in overlooking other aspects of the problem, or that the teams may have been overly confident about obtaining correct results from the data available. In the examples cited, the teams did figure those problems out and took corrective action. A follow-on article describing how they made their root-cause determination in each case would be very interesting.

As an aside, from my perspective, this is why Explainable AI is so important. There are times when you just don’t know what you don’t know (the unknown unknowns). Being able to understand why and on what the AI is basing its decisions should help teams provide better-quality curated data up front, and should also help identify drift in the wrong direction while there is still time to make corrections without impacting deadlines or deliverables.
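To make that more concrete, below is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn and synthetic data purely for illustration (none of this comes from the projects described in the article). The idea is simple: shuffle one input at a time and see how much the model’s score drops, which shows which inputs the model is actually relying on.

    # Minimal sketch: permutation importance on synthetic data.
    # Assumes scikit-learn; the dataset and feature names are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops;
    # large drops indicate features the model actually depends on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i, drop in enumerate(result.importances_mean):
        print(f"feature_{i}: mean score drop {drop:.3f}")

Tracking numbers like these over time is one simple way to notice early when a model starts leaning on inputs it should not be relying on.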

Item #3: 

This didn’t surprise me, but it should be a cause for concern as advances are made at faster rates and potentially less validation is performed as organizations race to be first to market with some AI-based competitive advantage. The last paragraph under ‘Training data bias’ stated that, based on a PwC survey, “only 25 percent of respondents said they would prioritize the ethical implications of an AI solution before implementing it.”

Bonus Item:

The discussion about the value of unstructured data was very interesting, especially when you consider:

  1. The potential for NLU (natural language understanding) products in conjunction with ML and AI.
  2. The importance of semantic data analysis relative to any ML effort (a small illustrative sketch follows this list).
  3. The incredible value that products like MarkLogic’s database or Franz’s AllegroGraph provide over standard Analytics Database products.
    • I personally believe that the biggest exception to this assertion will come from GPU databases (like OmniSci) that easily handle streaming data, can accomplish extreme computational feats well beyond those of traditional CPU-based products, and have geospatial capabilities that provide an additional dimension of insight into the problem being solved.
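As a small illustration of the semantic data analysis point above, here is a sketch of how facts stored as triples can be queried by relationship rather than by a fixed table schema. It uses Python’s rdflib with an entirely hypothetical namespace and data, not any of the products mentioned above:

    # Minimal sketch: storing facts as triples and querying them with SPARQL.
    # Assumes rdflib; the namespace and data are hypothetical examples.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.acme, RDF.type, EX.Customer))
    g.add((EX.acme, EX.purchased, EX.widget))
    g.add((EX.widget, EX.category, Literal("industrial")))

    # Which customers purchased something in the "industrial" category?
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?customer WHERE {
            ?customer ex:purchased ?item .
            ?item ex:category "industrial" .
        }
    """)
    for row in results:
        print(row.customer)

The point is not the specific tooling but that relationships are first-class data, which is what makes this kind of store a natural companion to unstructured text and NLU output in an ML effort.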

 

Update: This is a link to a related article that discusses trends in areas of implementation, important considerations, and the potential ROI of AI projects: https://www.fastcompany.com/90387050/reduce-the-hype-and-find-a-plan-how-to-adopt-an-ai-strategy

This is definitely an exciting space that will experience significant growth over the next 3-5 years. The more information, experiences, and lessons learned that are shared, the better it will be for everyone.

Failing Productively


As an entrepreneur you will typically get advice like, “Fail fast and fail often.” I always found this somewhat amusing, similar to the saying, “It takes money to make money” (a lot of bad investments are made using that philosophy). Living this yourself is an amazing experience, especially when things turn out well. But as I have written about before, you learn as much from the bad experiences as you do from the good ones.

Innovating is tough. You need people who are always thinking of different and better ways of doing things, or who question why something has to be done or made a certain way. It takes confidence to ask questions that many would view as stupid (“Why would you do that? It’s always been done this way.”). But when you have the right mix of people and culture, amazing things can and do happen, and it feels great.

Innovating also takes a willingness to lose time and money, with the hope of winning something big enough later to make it all worthwhile. This is where a lot of companies fall short because they lack the patience, budget, or appetite to fail. I personally believe that this is the reason why innovation often flows from small companies and small teams. For them, the prospect of doing something really cool or making a big impact is motivation enough to give something a try.

It takes a lot of discipline to follow a plan when a project appears to be failing, but it takes even more discipline to kill a project that has demonstrated real potential but isn’t meeting expectations. That was one of my first, and probably most important, lessons learned in this area. Let me explain…

In 2000 we looked at franchising our “Consulting System” – processes, procedures, tools, metrics, etc. that had been developed and proven in my business. We believed that this approach could help average consultants deliver above average work products. The idea seemed to have real potential.

It took a lot of work to find an attorney who would even consider this idea. Most believed it would be impossible to proceduralize a somewhat ambiguous task like solving a business or technical problem. We finally found an attorney who, after a 2-hour no-cost interview, agreed to work with us. When I later asked him about his approach, he replied, “I did not want to waste my time or our money on a fool’s errand.”

We estimated it would take 12 months and cost approximately $100,000 to fully develop our consulting system. We met with potential prospects to validate the idea (it would have been illegal to pre-sell the system) and then got to work. Twelve months turned into 18, and the original $100K budget grew by nearly 50%. All indications were positive and we felt very good about the success and business potential of this effort.

Then the terrorist attacks of September 11, 2001, occurred and businesses everywhere saw a decline. In early 2002 we reevaluated the project and felt that it could be completed within the next 6-8 months and would cost another $50K+.

After a long and emotional debate we decided to kill the project – not because we felt it would not work, but rather because there was less of a target market and now the payback period (time to value) would double or triple. This was one of the most difficult business decisions that I ever made.

A big lesson learned from this experience was that our approach needed to be more analytical.

  • From that point forward we created a budget for “time off” (we bought our own time, as opposed to waiting for bench time) and for other project-related items.
  • We developed a simple system for collecting and tracking ideas and feedback. When an idea felt right we would take the next steps and create a plan with a defined budget, milestones, and timeline. If the project failed to meet any of the defined objectives, it would be killed, no questions asked.
  • We documented what we did, why we decided to do it, our goals, and the expected outcomes. Regardless of success or failure, we would hold postmortem reviews to learn and document as much as possible from every effort.

We still had failures, but with each one we took less time and spent less money. More importantly, we learned how to do this better, and that helped us realize several successes. It provided both the structure and the freedom to create some amazing things. And since failing was an acceptable outcome, it was never feared.

This approach was much more than just “failing fast and failing often”; it was intelligent failure, and it served us well for nearly a decade.