
Using Themes for Enhanced Problem Solving


Thematic Analysis is a powerful qualitative approach used by many consultants. It involves identifying patterns and themes to better understand how and why something happened, which provides the context for other quantitative analyses. It can also be utilized when developing strategies and tactics due to its “cause and effect” nature.

Typical analysis tends to be event-based: something unexpected happened, and some type of triggering or compelling event is sought either to stop something from happening or to make something happen. With enough of the right data, you may be able to identify patterns, which can help predict what will happen next based on past events. This data-based understanding may be simplistic or incomplete, but often it is sufficient.


But people are creatures of habit. If you can identify and understand those habits and place them within the context of a specific environment that includes interactions with others, you may be able to identify patterns within the patterns. Those themes can be much better indicators of what may or may not happen than the data itself. They become better predictors of things to come and can help identify more effective strategies and tactics to achieve your goals.

This approach requires that a person view an event (desired or historical) from various perspectives to help understand:

  1. Things that are accidental but predictable because of human nature.
  2. Things that are predictable based on other events and interactions.
  3. Things that are the logical consequence of a series of events and outcomes.

Aside from the practical implications of this approach, I find it fascinating relative to AI and Predictive Analysis.

For example, you can monitor data and activities proactively by understanding the recurring themes and triggers. That is actionable intelligence that can be automated and incorporated into a larger system. Machine Learning and Deep Learning can analyze tremendous volumes of data from various sources in real time.
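
As a simple illustration, here is a minimal sketch (in Python, with hypothetical event types; not a production pipeline) of how a recurring theme, in this case a pattern of related events within a time window, could be turned into an automated trigger:

    from collections import deque

    # Hypothetical theme: several failed logins followed by a privilege change
    # within a short window is treated as a recurring "account takeover" theme.
    WINDOW_SECONDS = 60
    FAILED_LOGIN_THRESHOLD = 3

    recent_failures = deque()

    def on_event(event: dict) -> None:
        """Feed each streaming event through a simple theme detector."""
        now = event["ts"]
        # Drop failures that have aged out of the window.
        while recent_failures and now - recent_failures[0] > WINDOW_SECONDS:
            recent_failures.popleft()

        if event["type"] == "failed_login":
            recent_failures.append(now)
        elif event["type"] == "privilege_change" and len(recent_failures) >= FAILED_LOGIN_THRESHOLD:
            # The pattern within the pattern fired: act proactively.
            print(f"ALERT at t={now}: privilege change after "
                  f"{len(recent_failures)} failed logins within {WINDOW_SECONDS}s")

    # Simulated stream of events
    for e in [{"type": "failed_login", "ts": 1}, {"type": "failed_login", "ts": 5},
              {"type": "failed_login", "ts": 9}, {"type": "privilege_change", "ts": 12}]:
        on_event(e)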

Combine that with Semantic Analysis, which is challenging due to the complexity of taxonomies and ontologies, and the system gains a more accurate understanding of what is happening, enabling better predictions. Add in spatial and temporal data such as IoT sensor readings, metadata from photographs, etc., and you should be able to view something as though you were very high up, with the ability to “see” what is on the path ahead. It is obviously not that simple, but it is exciting.

From a practical perspective, keeping these thoughts in mind will help you see details others have missed. That makes for better analysis, better strategies, and better execution.

Who wouldn’t want that?

Blockchain, Data Governance, and Smart Contracts in a Post-COVID-19 World


The last few months have been very disruptive to nearly everyone across the globe. There are business challenges galore, such as managing large remote workforces (many of whom are new to working remotely) and managing risk while attempting to conduct “business as usual.” Unfortunately, most businesses’ systems, processes, and internal controls were not designed for this “new normal.”

While there have been many predictions around Blockchain for the past few years, it is still not widely adopted. We are beginning to see an uptick in the adoption of blockchain-based Supply Chain Management Systems, for reasons that include traceability of items, especially food and drugs. However, large-scale adoption has remained elusive to date.


I believe we will soon begin to see large shifts in mindset, investment, and effort toward modern digital technology, driven by Data Governance and Risk Management. These technologies will become easier to use via new platforms and integration tools, which will speed adoption by SMBs and other non-enterprise organizations. That, in turn, will create a greater need for DevOps, Monitoring, and Automation solutions as a way to maintain control of a more agile environment.

Here are a few predictions:

  1. New wearable technology supporting Medical IoT will be developed to help provide an early warning system for disease and future pandemics. That will fuel a number of innovations in various industries, including Biotech and Pharma.
    • Blockchain can provide data privacy, ownership, and provenance to ensure the data’s veracity (see the hash-chaining sketch after this list).
    • New legislation will be created to protect medical providers and other users of that data from being liable for missing information or trends that could have saved lives or avoided some other negative outcome.
    • In the meantime, Hospitals, Insurance Providers, and others will do everything possible to mitigate the risk of using Medical IoT data, which could include Smart Contracts to ensure compliance (which assumes that a benefit is provided to the data providers).
    • Platforms may be created to offer individuals control over their own data, how it is used and by whom, ownership of that data, and payment for the use of that data. This is something I wrote about in 2013.
  2. Data Governance will be taken more seriously by every business. Today, companies talk about Data Privacy, Data Security, or Data Consistency, but few have a strategic, end-to-end, systematic approach to managing and protecting their data and their company.
    • Comprehensive Data Governance will become a driving and gating force as organizations modernize and grow. Even before the pandemic, there were growing needs due to new data privacy laws and concerns around areas such as the data used for Machine Learning.
    • In a business environment where more systems are distributed, there is an increased risk of data breaches and Cybercrime. That must be addressed as a foundational component of any new system or platform.
    • One or two Data Integration Companies will emerge as undisputed industry leaders due to their capabilities around MDM, Data Provenance and Traceability, and Data Access (an area typically managed by application systems).
    • New standardized APIs akin to HL7 FHIR will be created to support a variety of industries as well as interoperability between systems and industries. Frictionless integration of key systems will become even more important than it is today.
  3. Anything that can be maintained and managed in a secure and flexible distributed digital environment will be implemented to allow companies to quickly pivot and adapt to new challenges and opportunities on a global scale.
    • Smart Contracts and Digital Currency Payment Processing Systems will likely be core components of those systems.
    • This will also foster the growth of next-generation Business Ecosystems and collaborations that will be more dynamic.
    • Ongoing compliance monitoring, internal and external, will likely become a priority (“trust but verify”).
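
To make the provenance point above concrete, here is a minimal sketch (in Python, with hypothetical record fields; a real blockchain adds distributed consensus on top of this) of the hash-chaining idea that makes a data trail tamper-evident:

    import hashlib
    import json
    import time

    def record_hash(record: dict) -> str:
        """Deterministic SHA-256 hash of a record's canonical JSON form."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    class ProvenanceLog:
        """Append-only, hash-chained log: each entry commits to its predecessor,
        so any later tampering breaks the chain and is detectable."""

        def __init__(self):
            self.entries = []

        def append(self, payload: dict) -> dict:
            prev = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {"payload": payload, "prev_hash": prev, "ts": time.time()}
            entry["hash"] = record_hash(entry)  # hash computed before the key is added
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            prev = "0" * 64
            for e in self.entries:
                body = {k: e[k] for k in ("payload", "prev_hash", "ts")}
                if e["hash"] != record_hash(body) or e["prev_hash"] != prev:
                    return False
                prev = e["hash"]
            return True

    # Hypothetical usage: wearable readings whose origin and integrity can be audited.
    log = ProvenanceLog()
    log.append({"device": "wearable-123", "metric": "heart_rate", "value": 72})
    log.append({"device": "wearable-123", "metric": "heart_rate", "value": 74})
    assert log.verify()

Smart Contracts extend the same idea by attaching executable rules to records like these, which is one way ongoing compliance monitoring (“trust but verify”) could be automated.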

All in all, this is exciting from a business and technology perspective. Most companies must review and adjust their strategies and tactics to embrace these concepts and adapt to the coming New Normal.

The steps we take today will shape what we see and do in the coming decade, so it is important to get this right quickly, knowing that whatever is implemented today will evolve and improve over time.

IoT and Vendor Lock-in


I was researching an idea last weekend and stumbled across something unexpected. My view on IoT has been that it provides a framework to support a rich ecosystem of hardware and software products. That flexibility and extensibility foster innovation, which in turn drives greater use and adoption of the best products. It was quite a surprise to discover that IoT was being used to do just the opposite.

My initial finding was a YouTube video about “Tractor Hacking” to allow farmers to make their own repairs. That seemed like an odd video to appear in my search results, but it began to make sense about midway through. The video discusses farmers not having access to the software, replacement components not working because they are not registered to that tractor’s serial number, and the only alternative being costly transportation of the equipment to a Dealership to have an expensive component installed.


I initially thought there had to be more to the story, as I found it hard to believe that a major vendor in any industry would intentionally do something like this. That led me to an article from nearly two years earlier that contained the following:

“IoT to completely transform their business model” and

“John Deere was looking for ways to change their business model and extend their products and service offering, allowing for a more constant flow of revenue from a single customer. The IoT allows them to do just that.”

That article closed with the assertion:

“Moreover, only allowing John Deere products access to the ecosystem creates a buyer lock-in for the farmers. Once they own John Deere equipment and make use of their services, it will be very expensive to switch to another supplier, thus strengthening John Deere’s strategic position.”

While any technology (especially platforms) has the potential for vendor lock-in, the majority of vendors offer some form of openness, such as:

  • Supporting open standards, APIs, and processes that enable portability and third-party product access.
  • Providing simple ways to export your data in at least one of several commonly used non-proprietary formats.

Some buyers may deliberately implement systems that support non-standard technology and extensions because they believe the long-term benefits of a tightly coupled system outweigh the risks of being locked into a vendor’s proprietary stack. But there are almost always several competitive options available, so it is a fully informed decision.

Less technology-savvy buyers may never even consider asking questions like this when purchasing. Even technologically savvy people may not consider IoT a key component of some everyday items, failing to recognize the implications of a closed system for their purchase. It will be interesting to see if this deliberate business strategy changes due to competitive pressure, social pressure, or legislation over the coming years.

In the meantime, the principle of caveat emptor may be truer than ever in this age of connected everything and the Internet of Things.

The Unsung Hero of Big Data


Earlier this week, I read a blog post regarding the recent Gartner Hype Cycle for Advanced Analytics and Data Science, 2015. The Gartner chart reminded me of the epigram, “Plus ça change, plus c’est la même chose” (“the more things change, the more they stay the same”).

To some extent, that is true, as you could consider today’s Big Data a derivative of yesterday’s VLDBs (very large databases) and Data Warehouses. One of the biggest changes, IMO, is the shift away from Star Schemas and practices implemented for performance reasons, such as aggregating data sets, using derived and encoded values, and using surrogate and foreign keys to establish linkage. Going forward, it may not be possible to have that much rigidity and still be as responsive as needed from a competitive perspective.

There are many dimensions to big data: a huge sample of data (volume), which becomes your universal set and supports deep analysis as well as temporal and spatial analysis; a variety of data (structured and unstructured) that often does not lend itself to SQL-based analytics; and data often streaming in (velocity) from multiple sources, an area that will become even more important in the era of the Internet of Things. These are the “Three V’s” people have talked about for the past five years.

Like many people, my interest in Object Database technology initially waned in the late 1990s. That is, until about four years ago when a project at work led me back in this direction. As I dug into the various products, I learned they were alive and doing well in several niche areas. That finding led to a better understanding of the real value of object databases.

Some products try to be “All Vs to all people,” but generally, what works best is a complementary, integrated set of tools working together as services within a single platform. It makes a lot of sense. So, back to object databases.

One of the things I like most about my job is the business development aspect. One of the product families I’m responsible for is Versant, which includes the Versant Object Database (VOD: high performance, high throughput, high concurrency) and Fast Objects (great for embedded applications). I’ve met and worked with brilliant people who have created amazing products based on this technology. Creative people like these are fun to work with, and helping them grow their business is mutually beneficial. Everyone wins.

An area where VOD excels is near real-time processing of streaming data. It is so adept at this task because of the way objects are mapped in the database: they are laid out in a way that essentially mirrors reality. So optionality is not a problem: no disjoint queries or missed data, no complex query gyrations to get the correct data set, etc. Things like sparse indexing are no problem with VOD. This means that pattern matching is quick and easy, as is more traditional rule and look-up validation. Polymorphism allows objects, functions, and even data to take multiple forms.
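
To illustrate the concept (a minimal Python sketch of the object-model idea, not VOD’s actual API), polymorphic event objects let optional attributes simply be absent and let one pass over heterogeneous data replace the null handling and join gymnastics of a relational schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Event:
        source: str
        timestamp: float

    @dataclass
    class NetworkEvent(Event):
        src_ip: str = "0.0.0.0"
        dst_ip: str = "0.0.0.0"
        # Optional attribute: simply absent for most instances (sparse),
        # with no NULL columns or outer joins required.
        threat_score: Optional[float] = None

    @dataclass
    class SensorEvent(Event):
        reading: float = 0.0
        unit: str = ""

    events = [
        NetworkEvent("fw-1", 1.0, "10.0.0.5", "10.0.0.9", threat_score=0.87),
        SensorEvent("gps-7", 2.0, reading=48.1, unit="deg"),
        NetworkEvent("fw-1", 3.0, "10.0.0.5", "10.0.0.12"),
    ]

    # One polymorphic pass over heterogeneous objects: each object "knows"
    # its own shape, so matching suspicious network events is a simple filter.
    suspicious = [e for e in events
                  if isinstance(e, NetworkEvent)
                  and e.threat_score is not None
                  and e.threat_score > 0.8]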


VOD does more by allowing data to be more, which is ideal for environments where change is the norm: Cyber Security, Fraud Detection, Threat Detection, Logistics, and Heuristic Load Optimization. In each case, performance, accuracy, and adaptability are the keys to success.

The ubiquity of devices generating data today, combined with the desire for people and companies to leverage that data for commercial and non-commercial benefit, is very different than what we saw 10+ years ago. Products like VOD are working their way up that Slope of Enlightenment because there is a need to connect the dots better and faster – especially as the volume and variety of those dots increase. It is not a “one size fits all” solution, but it is often the perfect tool for this type of work.

These are indeed exciting times!

Ideas are sometimes Slippery and Hard to Grasp


I started this blog with the goal of becoming an “idea exchange,” as well as a way to pass along lessons learned to help others. Typical guidance for a blog is to focus on one thing and do it well to develop a following. That is especially important if you want to monetize the blog, but that is not and has not been my goal.

One of the things that has surprised me is how different the comments and likes are for each post. Feedback from the last post was even more diverse and surprising than usual. It ranged from comments about “Siri vs Google” to feedback about Sci-Fi books and movies to Artificial Intelligence.

I asked a few friends for feedback and received something very insightful (thanks, Jim). He stated that he found the blog interesting but wasn’t sure of the objective. He went on to identify several possible goals for the last post. Strangely enough (or maybe not), his comments mirrored the type of feedback that I received. That pointed out an area for improvement, and I appreciated that as well as the wisdom of focusing on one thing. Who knows, maybe in the future…

This also reminded me of a white paper written 12-13 years ago by someone I used to work with. It was about how Bluetooth would be the “next big thing.” He had read an IEEE paper or something and saw potential for this new technology. His paper provided the example of your toaster and coffee maker communicating so that your breakfast would be ready when you walk into the kitchen in the morning.

At that time, I had a couple of thoughts. Who cared about something that only had a 20-30-foot range when WiFi had become popular and had a much greater range? In addition, a couple of years earlier, I had toured the Microsoft “House of the Future,” in which everything was automated and key components communicated. But everything in the house was hardwired or used WiFi – not Bluetooth. It was easy to dismiss his assertion because it seemed to lack pragmatism. The value of the idea was difficult to quantify, given the use case provided.


Looking back now, I view that white paper as insightful. Had it been truly visionary, he would have come out with the first Bluetooth speakers, car interface, or even phone earpiece and gotten rich. Instead, it failed to present practical use cases that were easy to understand yet different enough from what was available at the time to demonstrate the real value of the idea. His expression of the idea was not tangible enough and, therefore, too slippery to be easily grasped and valued.

I believe that good ideas sometimes originate where you least expect them. Those ideas are often incremental – seemingly simple and sometimes borderline obvious, often building on another idea or concept. An idea does not need to be unique to be important or valuable, but it needs to be presented in a way that makes it easy to understand the benefits, differentiation, and value. That is just good communication.

One of the things I miss most from when my consulting company was active was the interaction between a couple of key people (Jason and Peter) and me. Those guys were very good at taking an idea and helping build it out. This worked well because we had some overlapping expertise and experiences, as well as skills and perspectives that were more complementary. That diversity increased the depth and breadth of our efforts to develop and extend those ideas by asking the tough questions early and ensuring we could convince each other of the value.

Our discussions were creative, highly collaborative, and a lot of fun. We improved from them, and the outcome was usually viable from a commercial perspective. As a growing and profitable small business, you must constantly innovate to differentiate yourself. Our discussions were driven as much by necessity as intellectual curiosity, and I believe this was part of the magic.

So, back to the last post. I view various technologies as building blocks. Some are foundational, and others are complementary. To me, the key is not viewing those various technologies as competing with each other. Instead, I look for potential value created by integrating them with each other. That may not always be possible and does not always lead to something better, but occasionally it does, so to me, it is a worthwhile exercise. With regard to voice technology, I believe we will see more, better, and smarter applications of it – especially as real-time and AI systems become more complex due to the use of an increasing number of specialized chips, component systems, geospatial technology, and sensors.

While today’s smartphone interfaces would not pass the Turing Test or proposed alternatives, they are an improvement over more simplistic voice translation tools available just a few years ago. Advancement requires the tools to understand context in order to make inferences. This brings you closer to machine learning, and big data (when done right) significantly increases that potential.

Ultimately, this all leads back to Artificial Intelligence (at least in my mind). It’s a big leap from a simple voice translation tool to AI, but it is not such a stretch when viewed as building blocks.

Now think about creating an interface (API) that allows one smart device to communicate with another, like the collaborative efforts described above with my old team. It’s not simply having a front-end device exchanging keywords or queries with a back-end device. Instead, it is two or more devices and/or systems having a “discussion” about what is being requested, looking at what each component “knows,” making inferences based on location and speed, asking clarifying questions and making suggestions, and then finally taking that multi-dimensional understanding of the problem to determine what is really needed.
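
Here is a toy sketch in Python (all names and fields hypothetical) of what such a “discussion” might look like: each device contributes what it knows to a shared context, gaps trigger clarifying questions, and the combined, multi-dimensional view drives the final suggestion:

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """A device or service that contributes partial knowledge to a shared context."""
        name: str
        knowledge: dict = field(default_factory=dict)

        def contribute(self, context: dict) -> None:
            # Merge what this agent knows into the shared context.
            context.update(self.knowledge)

        def missing(self, context: dict, needed: list) -> list:
            # "Clarifying questions": which required facts are still unknown?
            return [k for k in needed if k not in context]

    # Hypothetical scenario: a phone and a car jointly answer
    # "where should we stop for coffee?"
    phone = Agent("phone", {"preference": "espresso", "calendar_gap_min": 20})
    car = Agent("car", {"location": (47.6, -122.3), "speed_kmh": 80})

    context: dict = {}
    needed = ["preference", "location", "speed_kmh", "calendar_gap_min"]

    for agent in (phone, car):
        agent.contribute(context)

    if not phone.missing(context, needed):
        # With a multi-dimensional view (taste + position + time budget),
        # the system can infer what is really needed, not just what was asked.
        print(f"Suggest an {context['preference']} stop reachable within "
              f"{context['calendar_gap_min']} minutes of {context['location']}.")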

So, possibly not true AI (yet), but a giant leap forward from what we have today. That would help turn the science fiction of the past into science fact in the near future. The better the understanding and inferences by the smart system, the better the results.

I also believe that the unintended consequence of these new smart systems is that they will likely make errors or have biases like a human as they become more human-like in their approach. Hopefully, those smart systems will be able to automatically back-test recommendations to validate and minimize errors. If they are intelligent enough to monitor results and suggest corrective actions when they determine that the recommendation does not have the optimal desired results, they would become even “smarter.” There won’t be an ego creating a distortion filter about the approach or the results. Or maybe there will…

Many of the building blocks required to create these new systems are available today. But it takes vision and insight to see that potential, translate ideas from slippery and abstract to tangible and purposeful, and then start building something cool and useful. As that happens, we will see a paradigm shift in how we interact with computers and how they interact with us. It will become more interactive and intuitive. That will lead us to the systematic integration that I wrote about in a big data / nanotechnology post.

So, what is the real objective of my blog? To get people thinking about things differently, to foster collaboration and partnerships between businesses and educational institutions to push the limits of technology, and to foster discussion about what others believe the future of computing and smart devices will look like. I’m confident that I will see these types of systems in my lifetime, and I believe in the possibility of this occurring within the next decade.

What are your thoughts?