The Unsung Hero of Big Data

Earlier this week, I read a blog post about the recent Gartner Hype Cycle for Advanced Analytics and Data Science, 2015. The Gartner chart reminded me of the epigram “Plus ça change, plus c’est la même chose” – the more things change, the more they stay the same.

To some extent, that is true, as you could consider today’s Big Data a derivative of yesterday’s VLDBs (very large databases) and Data Warehouses. One of the biggest changes, in my opinion, is the shift away from Star Schemas and the practices adopted for performance reasons, such as aggregating data sets, using derived and encoded values, and using surrogate and foreign keys to establish linkage. Going forward, it may not be possible to maintain that much rigidity and still be as responsive as the competitive landscape demands.

There are many dimensions to big data: a huge sample of data (volume), which becomes your universal set and supports deep analysis as well as temporal and spatial analysis; a variety of data (structured and unstructured) that often does not lend itself to SQL-based analytics; and data streaming in (velocity) from multiple sources – an area that will become even more important in the era of the Internet of Things. These are the “Three V’s” people have talked about for the past five years.

Like many people, my interest in Object Database technology initially waned in the late 1990s. That is, until about four years ago when a project at work led me back in this direction. As I dug into the various products, I learned they were alive and doing well in several niche areas. That finding led to a better understanding of the real value of object databases.

Some products try to be “All Vs to all people,” but generally, what works best is a complementary, integrated set of tools working together as services within a single platform. It makes a lot of sense. So, back to object databases.

One of the things I like most about my job is the business development aspect. One of the product families I’m responsible for is Versant, which includes the Versant Object Database (VOD – high performance, high throughput, high concurrency) and Fast Objects (great for embedded applications). I’ve met and worked with brilliant people who have created amazing products based on this technology. Creative people like these are fun to work with, and helping them grow their business is mutually beneficial. Everyone wins.

An area where VOD excels is the near real-time processing of streaming data. The reason it is so adept at this task is the way objects are mapped in the database: they are laid out in a way that essentially mirrors reality. Optionality is therefore not a problem – no disjoint queries or missed data, no complex query gyrations to get the correct data set. Things like sparse indexing are no problem with VOD, which means that pattern matching is quick and easy, as is more traditional rule and look-up validation. Polymorphism allows objects, functions, and even data to take multiple forms.
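
To make the optionality and polymorphism point concrete, here is a minimal sketch in plain Python – not VOD’s actual API, just an illustration of the object-model idea: each subtype carries only the attributes that apply to it, and a single polymorphic rule runs across a mixed stream without outer joins or NULL handling.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    """Base type for anything arriving on the stream."""
    source: str
    timestamp: float

    def is_suspicious(self) -> bool:
        return False  # default: nothing to flag


@dataclass
class LoginEvent(Event):
    user: str
    failed_attempts: int = 0

    def is_suspicious(self) -> bool:
        return self.failed_attempts >= 3


@dataclass
class SensorEvent(Event):
    reading: float
    threshold: Optional[float] = None  # optional attribute: only some sensors define one

    def is_suspicious(self) -> bool:
        return self.threshold is not None and self.reading > self.threshold


# One rule runs over a mixed stream; objects that lack an attribute simply never
# define it, so there are no sparse NULL columns or extra tables to join around.
stream = [
    LoginEvent("web", 1_700_000_000.0, user="alice", failed_attempts=5),
    SensorEvent("pump-7", 1_700_000_001.0, reading=98.6),
    SensorEvent("pump-9", 1_700_000_002.0, reading=120.0, threshold=100.0),
]
print([e for e in stream if e.is_suspicious()])
```

In a relational design, the optional threshold and failed_attempts fields would typically become sparsely populated columns or separate tables to join; in an object model they are simply part of the type that needs them.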

[Image: a globe with a network of connected dots in the space above it]

VOD does more by allowing data to be more, which is ideal for environments where change is the norm: cyber security, fraud detection, threat detection, logistics, and heuristic load optimization. In each case, performance, accuracy, and adaptability are the keys to success.

The ubiquity of devices generating data today, combined with the desire of people and companies to leverage that data for commercial and non-commercial benefit, is very different from what we saw 10+ years ago. Products like VOD are working their way up the Slope of Enlightenment because there is a need to connect the dots better and faster – especially as the volume and variety of those dots increase. It is not a “one size fits all” solution, but it is often the perfect tool for this type of work.

These are indeed exciting times!

Non-Linear Thought Process and a Message for my Children

I have recently been investigating and visiting universities with my eldest daughter, a senior in high school. Last week we visited Stanford University (an amazing experience) and spent a week in Northern California on vacation. After being home for a day and a half, I am in Texas for a week of team meetings and training.

On the first night of a trip, I seldom sleep, so I listened to the song “Don’t Let It Bring You Down” by Annie Lennox, a cover of a Neil Young song. That led to a YouTube search for the original Neil Young version, which led to me listening to “Old Man” – a favorite song of mine for over 30 years. That led to some reflection which ultimately led to this post.

I mention this because it is an example of the nonlinear, or divergent, thought process (generally viewed as a negative trait) that occurs naturally for me. It helps me “connect the dots” faster and more naturally. It is a manner of thinking associated with ADHD (again, something generally viewed as negative). The interesting thing is that to fit in and succeed with ADHD, you tend to develop logical systems for focus and consistency. That has had many benefits for me, such as systemic thinking, creating repeatable processes, and automation.

Photo by César Gaviria on Pexels.com

The combination of linear and non-linear thinking can really fuel creativity. The downside is that it can take quite a while for others to see the potential of your ideas, which can be extremely frustrating. But, you learn to communicate better and deal with the fact that ideas can be difficult to grasp. The upside is that you tend to create relationships with other innovators because they think like you, making you relatable and interesting to them. The world is a strange place.

It is funny how there are several points in your life when you have an epiphany, and things suddenly make complete sense. That causes you to realize how much time and effort could have been saved if you had only been able to figure something out sooner. As a parent, I always try to identify and create learning shortcuts for my children so they reach those points much sooner than I did.

I started this post thinking that I would document as many of those lessons as possible to serve as a future reminder and possibly help others. Instead, I decided to post a few things I view as foundational truisms in life that could help foster that personal growth process. So, here goes…

  1. Always work hard to be the best, but never let yourself believe you are the best. Even if you truly are, it will be short-lived, as there are always people doing everything they can to be the best. Ultimately, that is a good thing. You need to have enough of an ego to test the limits and capabilities of things, but not one that is so big that it alienates or marginalizes those around you.
  2. Learn from everything you do – good and bad. Continuous improvement is so important. By focusing on this, you constantly challenge yourself to try new things and find better (i.e., more effective, more efficient, and more consistent) ways to do things.
  3. Realize that the difference between a brilliant and a stupid idea is often perspective. Years ago, I taught technical courses, and occasionally someone would describe something they did that seemed strange or wrong. But, if you asked questions and tried to understand why they did what they did, you would often identify the brilliance in that approach. It is something that is both exciting and humbling.
  4. Incorporating new approaches or the best practices of others into your own proven methods and processes is part of continuous improvement, but it only works if you can set aside your ego and keep an open mind.
  5. Believe in yourself, even when others don’t share that belief. Remain open to feedback and constructive criticism as a way to learn and improve, but never give up on yourself. There is a huge but sometimes subtle difference between confidence and arrogance, and that line is often drawn at the point where you can accept that you might be wrong or that there might be a better way to do something. Become the person people like working with and not the person they avoid or want to see fail.
  6. Surround yourself with the best people that you can find. Look for people with diverse backgrounds and complementary skills. The best teams I have ever been involved with consisted of high achievers who constantly raised the bar for each other while simultaneously creating a safety net for their teammates. Those teams grew and did amazing things because everyone was very competitive and supportive of each other.
  7. Keep notes or a journal because good ideas are often fleeting and hard to recall. Remember, good ideas can come from anywhere, so keep track of the suggestions of others and make sure that you attribute those ideas to the proper source.
  8. Try to make a difference in the world. Try to leave everything you “touch” (job, relationship, project, whatever) in a better state than before you were there. Helping others improve and leading by example are two simple ways of making a difference.
  9. Accept that failure is a natural obstacle on your path to success. You are not trying hard enough if you never fail. But you are also not trying hard enough if you fail too often. That is very subjective, and honest introspection is your best gauge. Be accountable, accept responsibility, document the lessons learned, and move on.
  10. Dream big, and use that as motivation to learn new things. While funding medical research, I learned about genetics, genomics, and biology. That expanded into interests in nanotechnology, artificial intelligence, machine learning, neural networks, and interfaces such as natural language and non-verbal/emotional ones. Someday I hope to tie these together to help cure a disease (arthritis) and improve the quality of life for millions of people. Will that ever happen? I don’t know, but I do know that if I don’t try, it will never happen because of anything I did.
  11. Focus on the positive, not the negative. Creativity is stifled in environments where fear and blame rule.
  12. Never hesitate to apologize when you are wrong. This is a sign of strength, not weakness.
  13. And above all else, honesty and integrity should be the foundation for everything you do and are.

Hopefully, this will help my children become the best people possible, ideally early on in their lives. I was 30 years old before I had a clue about many of these things. Until that point, I was somewhat selfish and focused on winning. Winning and success are good things, but are better when accomplished the right way.

Ideas are sometimes Slippery and Hard to Grasp

I started this blog with the goal of becoming an “idea exchange,” as well as a way to pass along lessons learned to help others. Typical guidance for a blog is to focus on one thing and do it well to develop a following. That is especially important if you want to monetize the blog, but that is not and has not been my goal.

One of the things that has surprised me is how different the comments and likes are for each post. Feedback from the last post was even more diverse and surprising than usual. It ranged from comments about “Siri vs Google” to feedback about Sci-Fi books and movies to Artificial Intelligence.

I asked a few friends for feedback and received something very insightful (Thanks Jim). He stated that he found the blog interesting but wasn’t sure of the objective. He went on to identify several possible goals for the last post. Strangely enough (or maybe not), his comments mirrored the type of feedback that I received. That pointed out an area for improvement, and I appreciated that as well as the wisdom of focusing on one thing. Who knows, maybe in the future…

This also reminded me of a white paper written 12-13 years ago by someone I used to work with. It was about how Bluetooth would be the “next big thing.” He had read an IEEE paper or something and saw potential for this new technology. His paper provided the example of your toaster and coffee maker communicating so that your breakfast would be ready when you walk into the kitchen in the morning.

At that time, I had a couple of thoughts. Who cared about something with only a 20-30-foot range when WiFi had become popular and offered a much greater range? In addition, a couple of years earlier, I had toured the Microsoft “House of the Future,” in which everything was automated and key components communicated. But everything in the house was hardwired or used WiFi – not Bluetooth. It was easy to dismiss his assertion because it seemed to lack pragmatism. The value of the idea was difficult to quantify, given the use case provided.

Looking back now, I view that white paper as insightful. Had it been truly visionary, he would have come out with the first Bluetooth speakers, car interface, or phone earpiece and gotten rich. Instead, it failed to present practical use cases that were easy to understand yet different enough from what was available at the time to demonstrate the real value of the idea. His expression of the idea was not tangible enough and, therefore, too slippery to be easily grasped and valued.

I believe that good ideas sometimes originate where you least expect them. Those ideas are often incremental – seemingly simple and sometimes borderline obvious, often building on another idea or concept. An idea does not need to be unique to be important or valuable, but it needs to be presented in a way that makes it easy to understand the benefits, differentiation, and value. That is just good communication.

One of the things I miss most from when my consulting company was active was the interaction between a couple of key people (Jason and Peter) and me. Those guys were very good at taking an idea and helping build it out. This worked well because we had some overlapping expertise and experience, as well as skills and perspectives that were complementary. That diversity increased the depth and breadth of our efforts to develop and extend those ideas by asking the tough questions early and ensuring we could convince each other of the value.

Our discussions were creative, highly collaborative, and a lot of fun. We improved from them, and the outcome was usually viable from a commercial perspective. As a growing and profitable small business, you must constantly innovate to differentiate yourself. Our discussions were driven as much by necessity as intellectual curiosity, and I believe this was part of the magic.

So, back to the last post. I view various technologies as building blocks. Some are foundational, and others are complementary. To me, the key is not viewing those various technologies as competing with each other. Instead, I look for potential value created by integrating them with each other. That may not always be possible and does not always lead to something better, but occasionally it does, so to me, it is a worthwhile exercise. With regard to voice technology, I believe we will see more, better, and smarter applications of it – especially as real-time and AI systems become more complex due to the use of an increasing number of specialized chips, component systems, geospatial technology, and sensors.

While today’s smartphone interfaces would not pass the Turing Test or its proposed alternatives, they are an improvement over the simpler voice translation tools available just a few years ago. Advancement requires the tools to understand context in order to make inferences. This brings you closer to machine learning, and big data (when done right) significantly increases that potential.

Ultimately, this all leads back to Artificial Intelligence (at least in my mind). It’s a big leap from a simple voice translation tool to AI, but it is not such a stretch when viewed as building blocks.

Now think about creating an interface (API) that allows one smart device to communicate with another, like the collaborative efforts described above with my old team. It’s not simply having a front-end device exchanging keywords or queries with a back-end device. Instead, it is two or more devices and/or systems having a “discussion” about what is being requested, looking at what each component “knows,” making inferences based on location and speed, asking clarifying questions and making suggestions, and then finally taking that multi-dimensional understanding of the problem to determine what is really needed.
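
As a thought experiment only, here is a heavily simplified Python sketch of that kind of device-to-device “discussion.” The two agents and the message format are hypothetical; a real system would involve much richer negotiation, inference, and shared context.

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    kind: str                 # "request", "clarify", or "answer"
    text: str
    context: dict = field(default_factory=dict)


class BackendAgent:
    """Answers requests, but asks for whatever context it is missing."""

    def handle(self, msg: Message) -> Message:
        if "location" not in msg.context:
            return Message("clarify", "Where is the user right now?")
        return Message(
            "answer",
            f"Three options near {msg.context['location']} matching: {msg.text}",
        )


class FrontendAgent:
    """Holds device-local knowledge (location, speed, history) and fills gaps."""

    def __init__(self) -> None:
        self.local_context = {"location": "downtown", "speed_kmh": 0}

    def ask(self, backend: BackendAgent, request: str) -> Message:
        msg = Message("request", request)
        reply = backend.handle(msg)
        while reply.kind == "clarify":              # the "discussion": keep answering
            msg.context.update(self.local_context)  # clarifying questions locally
            reply = backend.handle(msg)
        return reply


print(FrontendAgent().ask(BackendAgent(), "dinner with a view, not Italian").text)
```

The point is simply that the answer emerges from an exchange – the back end asks, the front end fills in what it knows – rather than from a single keyword hand-off.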

So, possibly not true AI (yet), but a giant leap forward from what we have today. That would help turn the science fiction of the past into science fact in the near future. The better the understanding and inferences by the smart system, the better the results.

I also believe that an unintended consequence of these new smart systems is that, as they become more human-like in their approach, they will likely make errors or have biases like a human. Hopefully, those smart systems will be able to automatically back-test their recommendations to validate them and minimize errors. If they are intelligent enough to monitor results and suggest corrective actions when a recommendation does not produce the desired results, they will become even “smarter.” There won’t be an ego creating a distortion filter about the approach or the results. Or maybe there will…

Many of the building blocks required to create these new systems are available today. But it takes vision and insight to see that potential, translate ideas from slippery and abstract to tangible and purposeful, and then start building something cool and useful. As that happens, we will see a paradigm shift in how we interact with computers and how they interact with us. It will become more interactive and intuitive. That will lead us to the systematic integration that I wrote about in a big data / nanotechnology post.

So, what is the real objective of my blog? To get people thinking about things differently, to foster collaboration and partnerships between businesses and educational institutions to push the limits of technology, and to foster discussion about what others believe the future of computing and smart devices will look like. I’m confident that I will see these types of systems in my lifetime, and I believe in the possibility of this occurring within the next decade.

What are your thoughts?

The Future of Smart Interfaces

Recently, I was helping one of my children research a topic for a school paper. She was doing well, but the results she was getting were overly broad. So, I taught her some “Google-Fu,” explaining how you can structure queries in ways that yield better results. She replied that search engines should be smarter than that. I explained that sometimes the problem is that search engines look at your past searches and customize results in an attempt to appear smarter or to motivate someone to do or believe something.

Unfortunately, those results can be skewed and potentially lead someone in the wrong direction. It was a good reminder that getting the best results from search engines often requires a bit of skill and query planning, as well as occasional third-party validation.
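
For what it is worth, the “Google-Fu” I showed her boils down to a handful of standard search operators. The topic below is made up, but the operators themselves (exact-phrase quotes, exclusion with a minus sign, and site restriction) are real and supported by most major search engines.

```python
# Progressively refined searches for a hypothetical school-paper topic.
queries = [
    'effects of plastic pollution',                     # broad: returns almost everything
    '"plastic pollution" effects on "marine life"',     # quote exact phrases to narrow scope
    '"plastic pollution" "marine life" -recycling',     # exclude an off-topic angle
    '"plastic pollution" "marine life" site:noaa.gov',  # restrict to a single trusted source
]
for q in queries:
    print(q)
```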

Then, the other day, I saw a commercial from Motel 6 (“Gas Station Trouble”) in which a man has trouble getting good results from his smartphone. That reminded me of watching someone speak to their phone and grow increasingly frustrated by the responses. His questions went something like this:

Siri, I want to take my wife to dinner tonight, someplace that is not too far away, and not too late. And she likes to have a view while eating so please look for something with a nice view. Oh, and we don’t want Italian food because we just had that last night.

Just as amazing as the question itself was watching him ask it over and over again in exactly the same way, becoming more frustrated each time. I asked myself, “Are smartphones making us dumber?” Instead of contemplating that question, I began to think about what future smart interfaces would or could be like.

I grew up watching Sci-Fi computer interfaces like “Computer” on Star Trek (1966), “HAL” in 2001: A Space Odyssey (1968), “KITT” from Knight Rider (1982), and “Samantha” from Her (2013). These interfaces had a few things in common:

  1. They responded to verbal commands.
  2. They were interactive – not just providing answers, but also asking qualifying questions and allowing interrupts to drill down or refine the search (e.g., with pictures or questions that resembled verbal Venn diagrams).
  3. They often suggested alternative queries based on intuition. That would have been helpful for the gentleman trying to find a restaurant.

[Image: a digitized man’s face overlaying the globe]

Despite 50 years of science fiction examples, we are still a long way from realizing the goal of a truly intelligent interface. Like many new technologies, intelligent interfaces were envisioned by science fiction writers long before they appeared in the real world.

There seems to be a spectrum of common beliefs about modern interfaces. At one end, some products make visualization easy, facilitating understanding, refinement, and drill-down of data sets; Tableau is an excellent example of this type of easy-to-use interface. At the other end, the emphasis is on back-end systems – robust platforms that digest huge volumes of data and return results for complex queries within seconds; several vendors offer powerful analytics platforms of this kind. In reality, you need both a strong front end and a strong back end to achieve the full potential of either.

But, there is so much more potential…

I predict that within the next 3 – 5 years, we will see business and consumer interface examples (powered by AI and Natural Language Processing, or NLP) that are closer to the verbal interfaces from those familiar Sci-Fi shows (albeit with limited capabilities and no flashing lights).

Within the next 10 years, I believe we will have computer interfaces that intuit our needs and facilitate the generation of correct answers quickly and easily. While this is unlikely to be at the level of “The world’s first intelligent Operating System” envisioned in the movie “Her,” and probably won’t even be able to read lips like “HAL,” it should be much more like HAL and KITT than like Siri (from Apple) or Cortana (from Microsoft).

Siri was groundbreaking consumer technology when it was introduced. Cortana seems to have taken a small leap ahead. While I have not mentioned Google Now, it is somewhat of a latecomer to this consumer smart interface party, and in my opinion, it is behind both Siri and Cortana.

So, what will this future smart interface do? It will need to be very powerful, pairing a natural language interface on the front end with an extremely flexible and robust analytics engine on the back end. The language interface will need to take a standard question (in multiple languages and dialects), just as if you were asking a person; deconstruct it using Natural Language Processing; and build the proper query against the available data. That is important, but it only gets you so far.
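
As a toy illustration of that deconstruction step, here is a hypothetical, rule-based Python sketch that turns the restaurant request from earlier into a structured query. A real NLP front end would rely on parsing, entity recognition, and learned models rather than keyword matching, so treat this purely as a sketch of the idea.

```python
import re


def deconstruct(utterance: str) -> dict:
    """Very rough keyword-based 'parse' of a spoken request into query fields."""
    text = utterance.lower()
    query = {"intent": None, "filters": {}, "exclude": []}

    if "dinner" in text or "restaurant" in text:
        query["intent"] = "find_restaurant"
    if "not too far" in text or "nearby" in text:
        query["filters"]["max_distance_km"] = 10   # assumed default search radius
    if "not too late" in text:
        query["filters"]["open_before"] = "21:00"  # assumed interpretation of "late"
    if "view" in text:
        query["filters"]["has_view"] = True

    # "we don't want Italian food" -> exclude that cuisine
    match = re.search(r"don't want (\w+) food", text)
    if match:
        query["exclude"].append(match.group(1))

    return query


request = ("I want to take my wife to dinner tonight, someplace that is not too far "
           "away, and not too late. She likes to have a view, and we don't want "
           "Italian food because we just had that last night.")
print(deconstruct(request))
```

The back end would then translate that structured query into whatever the underlying data stores require, which is where the flexible analytics engine comes in.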

Data will come from many sources – the kinds we handle today with relational, object, graph, and NoSQL databases. There will be structured and unstructured data that must be joined and filtered quickly and accurately. In addition, context will be more important than ever. Pictures and videos could be scanned for faces, for location (via geotagging), and, in the case of videos, for speech. Relationships will be identified and inferred from a variety of sources, using both data and metadata. Sensors will collect data from almost everything we do and (someday) wear, providing both content and context.

Stylometry will identify outside content likely related to the people involved in the query and provide further context about interests, activities, and even biases. This is how future interfaces will truly understand (not just interpret), intuit (so they can determine what you really want to know), and then present results that may be far more accurate than we are used to today. Because the interface is interactive, it will make it quick and easy to organize and analyze subsets of data.

So, where do I think this technology will originate? I believe it will be adapted from video game technology. Video games have consistently pushed the envelope over the years, helping drive the need for higher-bandwidth I/O in devices and networks, better and faster graphics, and larger and faster storage (which ultimately led to flash memory and even Hadoop). Animation has become very lifelike, and games are becoming more responsive to audio commands. It is not a stretch to believe that this is where the next generation of smart interfaces will come from, rather than from the evolution of today’s smart interfaces.

Someday, it may no longer be possible to “tweak” results through the use or omission of keywords, quotation marks, and flags. It may also no longer be necessary to understand specialized query languages and syntax (SQL, SPARQL, the various NoSQL dialects, etc.). We won’t have to worry as much about incorrect joins, spurious correlations, and biased result sets. Instead, we will be given the answers we need – even if we don’t realize they were what we needed in the first place – most likely driven by AI. At that point, computer systems may appear nearly omniscient.

When this happens, parents will no longer need to teach their children “Google-Fu.” Those are going to be interesting times indeed.

Spurious Correlations Follow-up

In an earlier post, I wrote about spurious correlations. Over the weekend, I ran across a site that focuses on finding and posting amusing spurious correlations. While the posts are intended to be funny, they make some very valid points. So, check it out, let me know what you think, and have some fun!

http://www.tylervigen.com/