Today I ran across an article that was very good because it focused on lessons learned, which potentially helps everyone interested in these topics. It contained a good mix of problems discussed at a non-technical level.
Below is the link to the article, along with my commentary on the top three items listed.
The article starts by discussing how the “problem” being evaluated was misstated using technical terms. It led me to believe that at least some of these efforts are conducted “in a vacuum.” That was a surprise given the cost and strategic importance of getting these early-adopter AI projects right.
In Sales and Marketing you start with the question, “What problem are we trying to solve?” and evolve it into, “How would customers or prospects describe this problem in their own words?” Without that understanding, you can neither vet the solution initially nor quickly qualify the need for it when speaking with those customers or prospects. That leaves a lot of room for error when transitioning from strategy to execution.
Increased collaboration with the business side would likely have helped. This was touched on at the end of the article under “Cultural challenges,” but its importance seemed to be downplayed. Lessons learned are valuable, especially when you are able to learn from the mistakes of others. To me, this should have been called out early as a major lesson learned.
This second area had to do with the perspective of the data, whether that was the angle of the subject in photographs (overhead from a drone vs. horizontal from the shoreline) or the type of customer data (such as data drawn from a single source) used to train the ML algorithm.
That was interesting because it appears that assumptions may have played a part in overlooking other aspects of the problem, or that the teams may have been overly confident about obtaining correct results from the data available. In the examples cited, the teams did figure those problems out and took corrective action. A follow-on article describing the process each team used to reach its root-cause determination would be very interesting.
As an aside, from my perspective, this is why Explainable AI is so important. There are times when you just don’t know what you don’t know (the unknown unknowns). Being able to understand why, and on what, the AI is basing its decisions should help in providing better-quality curated data up front, as well as in identifying drift in the wrong direction while it is still early enough to make corrections without impacting deadlines or deliverables.
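To make the drift-detection point concrete, here is a minimal sketch (my own illustration, not anything from the article) of one common way teams watch for data drift: comparing the distribution of a feature in live data against the baseline the model was trained on, using the Population Stability Index. The thresholds in the comments are conventional rules of thumb, not hard limits.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    A common rule of thumb (an assumption, not a standard): PSI < 0.1 is
    stable, 0.1-0.25 suggests moderate drift, > 0.25 suggests significant
    drift worth investigating.
    """
    # Bin edges come from the baseline so both samples are compared on
    # the same scale.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins at a tiny probability to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # data the model was trained on
stable = rng.normal(0.0, 1.0, 5_000)     # new data, same distribution
shifted = rng.normal(1.0, 1.0, 5_000)    # new data that has drifted

print(psi(baseline, stable))   # small value: no meaningful drift
print(psi(baseline, shifted))  # large value: drift worth investigating
```

Run on a schedule against production inputs, a check like this can flag a feed going sideways well before model quality visibly degrades.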
This didn’t surprise me, but it should be a cause for concern: advances are being made at faster rates, and potentially less validation is performed as organizations race to be first to market with an AI-based competitive advantage. The last paragraph under ‘Training data bias’ cites a PwC survey in which “only 25 percent of respondents said they would prioritize the ethical implications of an AI solution before implementing it.”
The discussion about the value of unstructured data was very interesting, especially when you consider:
- The potential for NLU (natural language understanding) products in conjunction with ML and AI. (North Side Inc. in Canada, one of the pioneers in this space, has a great NLU-pipeline diagram.)
- The importance of semantic data analysis relative to any ML effort.
- The incredible value that products like MarkLogic’s database or Franz’s AllegroGraph provide over standard analytics database products.
- I personally believe that the biggest exception to this assertion will come from GPU databases (like OmniSci) that easily handle streaming data, can accomplish extreme computational feats well beyond those of traditional CPU-based products, and have geospatial capabilities that provide an additional dimension of insight into the problem being solved.
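To illustrate the semantic-data point above, here is a toy sketch in plain Python (deliberately not the MarkLogic or AllegroGraph API, and the entity names are made up) of the subject-predicate-object triple model that such products index and query at scale:

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple,
# the basic shape of data in semantic/graph databases.
triples = {
    ("acme_corp", "is_a", "customer"),
    ("acme_corp", "purchased", "product_x"),
    ("product_x", "category", "analytics"),
    ("beta_llc", "is_a", "customer"),
    ("beta_llc", "purchased", "product_y"),
    ("product_y", "category", "analytics"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Which customers bought something in the "analytics" category?
analytics_products = {s for s, _, _ in query(p="category", o="analytics")}
buyers = {s for s, p, o in query(p="purchased") if o in analytics_products}
print(sorted(buyers))  # both customers, found via a two-hop graph traversal
```

The value of real semantic databases is doing exactly this kind of relationship traversal, plus inference over ontologies, across billions of facts; the relational equivalent would require joins that grow painful as the questions get more connected.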
Update: This is a link to a related article that discusses trends in areas of implementation, important considerations, and the potential ROI of AI projects: https://www.fastcompany.com/90387050/reduce-the-hype-and-find-a-plan-how-to-adopt-an-ai-strategy
This is definitely an exciting space that will experience significant growth over the next 3-5 years. The more information, experiences, and lessons learned that are shared, the better it will be for everyone.
As an entrepreneur you will typically get advice like, “Fail fast and fail often.” I always found this somewhat amusing, similar to the saying, “It takes money to make money” (a lot of bad investments are made using that philosophy). Living this yourself is an amazing experience – especially when things turn out well. But as I have mentioned before, you learn as much from the good experiences as you do from the bad ones.
Innovating is tough. You need people who are always thinking of different and better ways of doing things, or who question why something has to be done or made a certain way. It takes confidence to ask questions that many would view as stupid (“Why would you do that? It’s always been done this way.”). When you have the right mix of people and culture, amazing things can happen, and it feels great.
Innovating also takes a willingness to lose time and money, with the hope of winning something big enough later to make it all worthwhile. This is where a lot of companies fall short because they lack the patience, budget, or appetite for failure. I believe this is why innovation often flows from small companies and small teams; for them, the prospect of doing something really cool is motivation enough to give something a try.
It takes a lot of discipline to follow a plan when a project appears to be failing, but it takes even more discipline to kill a project that has demonstrated real potential but isn’t meeting expectations. That was one of my first, and most important, lessons learned in this area. Let me explain…
In 2000 we looked at franchising our “consulting system”: the processes, procedures, tools, metrics, etc. that were proven in our business. We believed this could help average consultants deliver above-average work products. It took a lot of work finding an attorney who would even consider it; most believed it would be impossible to proceduralize a somewhat ambiguous task like solving a business or technical problem. We found one who, after a two-hour interview, agreed to work with us (as he said, he “didn’t want to waste his time or our money on a fool’s errand”).
We estimated it would take 12 months and cost $100,000 or less to fully develop. We met with prospects to validate the idea (it would have been illegal to pre-sell the system) and then got to work. Twelve months turned into 18, and the $100K budget grew by almost 50%. But all indications were positive, and we felt very good about the effort.
Then the terrorist attacks of Sept. 11th occurred, and businesses everywhere saw a decline. In early 2002 we reevaluated the project and felt it could be completed within the next 6-8 months at a cost of another $50K. After a long and emotional debate, we decided to kill the project, not because we felt it would not work, but because the target market had shrunk and the payback period would now double or triple. This was one of the most difficult business decisions I have ever made.
A big lesson learned was that our approach had to be more analytical. From that point forward we created a budget for “time off” (we bought our own time, as opposed to waiting for bench time) and for other project-related items. We had a simple system for collecting and tracking ideas and feedback. And when an idea felt right, we would create a plan with a defined budget, milestones, and timeline. If the project failed to meet any of the defined objectives, it would be killed. We documented what we did and why we did it, and held postmortem reviews to learn as much as possible from every effort.
We still had failures, but with each one we took less time and spent less money. More importantly, we learned how to do this better, and that helped us realize several successes. It provided both the structure and the freedom to create some amazing things. And, since failing was an acceptable outcome it was never feared.
This was much more than just “failing fast and failing often”; it was intelligent failure, and it worked for us.