Information of Things

IoT and Vendor Lock-in


I was researching an idea last weekend and stumbled across something unexpected. My view of IoT has been that it provides a framework to support a rich ecosystem of hardware and software products. That flexibility and extensibility foster innovation, which in turn drives greater use and adoption of the best products. It was quite a surprise to discover that IoT was being used to do just the opposite.

My initial finding was a YouTube video about “Tractor Hacking” as a way for farmers to make their own repairs. That seemed like an odd video to appear in my search results, but it made sense about midway through. The video discusses farmers having no access to the diagnostic software, replacement components that will not work because they are not registered to that tractor’s serial number, and the only remaining alternative: expensive transportation of the equipment to a dealership to have an expensive component installed.

[Image: a jail cell, representing vendor lock-in. Copyright (c) gograph.com/VIPDesignUSA]

I initially thought there had to be more to the story, as I found it hard to believe that a major vendor in any industry would intentionally do something like this. That led me to an article from nearly two years earlier that contained the following:

“IoT to completely transform their business model” and

“John Deere was looking for ways to change their business model and extend their products and service offering, allowing for a more constant flow of revenue from a single customer. The IoT allows them to do just that.”

That article closed with the assertion:

“Moreover, only allowing John Deere products access to the ecosystem creates a buyer lock-in for the farmers. Once they own John Deere equipment and make use of their services, it will be very expensive to switch to another supplier, thus strengthening John Deere’s strategic position.”

While any technology, and platforms especially, has the potential for vendor lock-in, the majority of vendors offer some form of openness, such as:

  • Supporting open standards, APIs, and processes that enable portability and third-party product access.
  • Providing simple ways to export your data in at least one of several commonly used non-proprietary formats (see the sketch after this list).
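As a purely illustrative sketch of that second bullet, here is a short Python example, with made-up file names and record fields, that exports the same records as both CSV and JSON so the data remains readable without any vendor's software:

```python
# Hypothetical export of equipment records in two non-proprietary formats.
# Field names and values are invented for illustration.
import csv
import json

records = [
    {"serial": "TR-1001", "component": "fuel-injector", "installed": "2016-04-12"},
    {"serial": "TR-1001", "component": "ecu-firmware", "installed": "2017-09-03"},
]

# CSV: flat, widely supported, opens in any spreadsheet.
with open("equipment.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

# JSON: preserves structure, readable by practically any modern tool.
with open("equipment.json", "w") as f:
    json.dump(records, f, indent=2)
```

Either file can be loaded into a competitor's system with no dependence on the original vendor.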

Some buyers may deliberately implement systems that rely on non-standard technology and extensions because they believe the long-term benefits of a tightly coupled system outweigh the risks of being locked into a vendor’s proprietary stack. But there are almost always several competitive options available, so that is a fully informed decision.

Less technology-savvy buyers may never even consider asking questions like this when purchasing. Even technologically savvy people may not consider IoT a key component of some everyday items, failing to recognize the implications of a closed system for their purchase. It will be interesting to see if this deliberate business strategy changes due to competitive pressure, social pressure, or legislation over the coming years.

In the meantime, the principle of caveat emptor may be truer than ever in this age of connected everything and the Internet of Things.

My perspective on Big Data


Ever since I worked on redesigning a risk management system at an insurance company (1994-1995), I have been impressed by how much better decisions can be made with more data, assuming it is the right data. The question “What is the right data?” has intrigued me for years, as what may seem common sense today could have been unknown 5-10 years ago and could be completely passé 5-10 years from now. Context becomes very important because the relevance of data varies over time.

This is what makes Big Data interesting. There really is no right or wrong answer or definition. Having a framework to define, categorize, and use that data is important. And at some point, being able to refer to the data in context will also be very important. Just think about how challenging it could be to compare scenarios or events from 5 years ago with those of today. It’s likely not an apples-to-apples comparison, but it could certainly be done. The concept of maximizing the value of data is pretty cool stuff.

The way I think of Big Data is similar to a water tributary system. Water enters the system in many ways: rain from the clouds, sprinklers fed by private and public supplies, runoff, overflow, and so on. It also has many interesting dimensions, such as quality/purity (which different uses require in different degrees), velocity, depth, and capacity. Not all water gets into the tributary system (some is absorbed into the groundwater tables, and some evaporates), just as some data loss should be anticipated.

[Image: the world with a water hose wrapped around it]

If you think of streams, ponds, rivers, lakes, reservoirs, deltas, etc., many relevant analogies can be made. And just like the course of a river may change over time, data in our “big data” water tributary system could also change over time.

Another part of my thinking is based on my experience working on a project for a nanotech company about a decade ago (the 2002-2003 timeframe). In their labs, they were testing various products. There were particles, embedded in shingles and paint, that changed reflectivity based on temperature. There were very small batteries that were light, could be recharged quickly tens of thousands of times, and had more capacity than a common 12-volt car battery.

And there was a section where they were doing “biometric testing” for the military. I have since read articles about things like smart fabrics that could monitor a soldier’s health, apply basic first aid, and notify others when a problem is detected. This company felt that by 2020, advanced nanotechnology would be widely used by the military, and by 2025, it would be in wide commercial use. Is that still a possibility? Who knows…

Much of what you read today is about the exponential growth of data. I agree with that, but as stated earlier, and this is important, I believe that the nature and sources of that data will change significantly. For example, nanoparticles in engine oil will provide information about temperature, engine speed, load, and even rapid changes in motion (fast take-offs or stops, quick turns). Nanoparticles in the paint will report weather conditions. Nanoparticles in the seat upholstery will provide information about occupants (number, size, weight). This is something like the “sensor web” in Kevin Delin’s original sense. A lot of “Information of Things” (IoT) data will be generated, but then what?

I believe that time will become an essential aspect of every piece of data and that location (X, Y, and Z coordinates) will be just as important. However, not every sensor collects location (spatial) data. I believe multiple data aggregators will be in everyday use at common points (your car, your house, your watch). Those aggregators will package the available data into something akin to an XML object, allowing flexibility.  From my perspective, this is where things become very interesting relative to commercial use and data privacy.
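As a rough sketch of that idea, here is a minimal Python example of an aggregator stamping each reading with a UTC timestamp and an optional X/Y/Z location, then packaging the batch as an XML object. The sensor names, tag names, and structure are my own assumptions, not any standard:

```python
# Sketch of a car/house/watch data aggregator: every reading gets a
# timestamp; location is attached only when the sensor provides it.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional, Tuple
import xml.etree.ElementTree as ET

@dataclass
class Reading:
    sensor: str                 # e.g., "engine-oil-temp" (hypothetical name)
    value: float
    units: str
    location: Optional[Tuple[float, float, float]] = None  # (x, y, z)

def package(readings: list) -> str:
    """Bundle readings into a single XML object for downstream use."""
    root = ET.Element("sensorBatch", source="car-aggregator")
    for r in readings:
        node = ET.SubElement(root, "reading", sensor=r.sensor, units=r.units)
        node.set("time", datetime.now(timezone.utc).isoformat())
        node.text = str(r.value)
        if r.location is not None:          # spatial data is optional
            x, y, z = r.location
            ET.SubElement(node, "location", x=str(x), y=str(y), z=str(z))
    return ET.tostring(root, encoding="unicode")

print(package([
    Reading("engine-oil-temp", 92.5, "C", location=(41.88, -87.63, 180.0)),
    Reading("seat-occupancy", 2.0, "count"),   # no location for this sensor
]))
```

JSON would serve just as well; the point is a flexible, self-describing envelope that downstream consumers can interpret in context.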

Currently, companies like Google make a lot of money by aggregating data from multiple sources, correlating it with various attributes, and then selling knowledge derived from that data. I believe there will be opportunities for individuals to use “data exchanges” to manage, sell, and directly benefit from their own data. The more interesting their data, the more value it has and the more benefit it provides to the person selling it. This could have a significant economic impact, fostering both the use and expansion of the ecosystems needed to manage the commercial and privacy aspects of this technology, especially as it relates to machine learning.

The next logical step in this vision is “smart everything.” For example, you could buy a shirt that is just a shirt. But you could turn on medical monitoring or refractive heating/cooling for an extra cost. And, if you felt there was a market for extra dimensions of data that could benefit you financially, you could also enable those sensors. Just think of the potential impact that technology would have on commerce in this scenario.
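To make that scenario concrete, here is a small, entirely hypothetical Python sketch of a garment whose sensors ship in the hardware but only report data once the owner enables (pays for) a capability; all names and values are invented:

```python
# Hypothetical "smart shirt": all sensors are present in the hardware,
# but a capability contributes data only after the owner enables it.
class SmartShirt:
    CAPABILITIES = {"medical_monitoring", "refractive_climate", "market_data"}

    def __init__(self):
        self.enabled = set()    # a plain shirt until features are purchased

    def enable(self, capability):
        if capability not in self.CAPABILITIES:
            raise ValueError(f"unknown capability: {capability}")
        self.enabled.add(capability)

    def report(self):
        # Placeholder values; only enabled capabilities appear in the output.
        samples = {
            "medical_monitoring": {"heart_rate": 72},
            "refractive_climate": {"surface_temp_c": 31.0},
            "market_data": {"wear_hours": 5.5},
        }
        return {cap: samples[cap] for cap in self.enabled}

shirt = SmartShirt()                  # just a shirt
shirt.enable("medical_monitoring")    # the paid upgrade
print(shirt.report())                 # {'medical_monitoring': {'heart_rate': 72}}
```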

I believe this will happen within the next decade or so. It won’t be the only use of big data; instead, there will be many valid types and uses of data, some complementary and some completely discrete. It has the potential to become a confusing mess. But people will find ways to ingest, categorize, and correlate data to create value, whether today or in the future.

Knowing how to do something interesting and useful with data will become an increasingly important competitive advantage for people and companies. Who knows what will be viewed as valuable data 5-10 years from now, but it will likely be different from what we view as valuable today.

So, what are your thoughts? Can we predict the future based on the past? Or is it simply enough to create platforms that are powerful, flexible, and extensible enough to change our understanding as our perspective of what is important changes? Either way, it will be fun!