A few months ago I purchased Fitbit watches for my children and myself. My goals were twofold. First, I was hoping that they would motivate all of us to be more active. Second, I wanted to foster a sense of competition (including fair play and winning) within my children. Much of their pre-High School experience focused on “participation,” as many schools feel that competition is bad. Unfortunately, competition is everywhere in life, so if you don’t play to win you may not get the opportunity to play at all.
It is fun seeing them push to be the high achiever for the day, and to continually push themselves to do better week-by-week and month-by-month. I believe this creates a wonderful mindset that makes you want to do more, learn more, achieve more, and make an overall greater impact with everything you do. People who do that are also more interesting to spend time with, so that is a bonus.
Recently my 14-year-old son and I went for a long walk at night. It was a cold, windy, and fairly dark night. We live in a fairly rural area, so it is not uncommon to see and hear various wild animals on a 3-4 mile walk. I’m always looking for opportunities to teach my kids things in a way that is fun and memorable, and in a way that they don’t realize they are being taught. Retention of the concepts is very high when I am able to make them relevant to something we are doing.
That night we started talking about the wind. It was steady with occasional gusts, and at times it changed direction slightly. I pointed out the movement of the bushes and taller grass on the side of the road. We discussed direction, and I told him to think about the wind like an invisible arrow, and then explained how those arrows traveled in straight lines, or vectors, until they met some other object. We discussed which object would “win,” and how the force of one object could impact another. My plan was to discuss Newton’s three laws of motion.
My son asked if that is why airplanes sometimes appear to be flying at an angle but are going straight. He seemed to be grasping the concept. He then asked me if drones would be smart enough to make those adjustments, which quickly led to a discussion of the potential future use of “intelligent” AI-based drones by the military. When he was 9 he wanted to be a Navy SEAL, but once he saw how much work that was he decided that he would rather be a Transformer (which I explained was not a real thing). My plan was to use this example to discuss robotics and how you might program a robot to do various tasks, and then move to how it could learn from past tasks and outcomes. I wanted him to logically break down the actions and think about managing complexity. But, no such luck that night.
His mind jumped to “Terminator” and “I, Robot.” I pointed out that Science Fiction does occasionally become Science Fact, which makes this type of discussion even more interesting. I also pointed out that there is a spectrum between the best possible outcome (Utopia) and the worst possible outcome (Dystopia), and asked him what he thought could happen if machines could learn and become smarter on their own.
His response was that things would probably fall somewhere in the middle, but that there would be people at each end trying to pull the technology in their direction. That seemed like a very enlightened estimation. He asked me what I thought and I replied that I agreed with him. I then noted how some really intelligent guys like Stephen Hawking and Elon Musk are worried about the dystopian future and recently published a letter to express their concerns about potential pitfalls of AI (artificial intelligence). This is where the discussion became really interesting…
We discussed why you would want a program or a robot to learn and improve – so that it could continue to become better and more efficient, just like a person. We discussed good and bad, and how difficult it could be to control something that doesn’t have morals or understand social mores (he felt that if a robot was smart enough to learn on its own, it would also learn those things from observations and interactions). That was an interesting perspective.
I told him about my discussions with his older sister, who wants to become a Physician, about how I believe that robotics, nanotechnology, and pharmacology will become the future of medicine. He and I took the logical next step and imagined a generic but intelligent medicine that identified and fixed problems independently, and then sent back the data and lessons learned for others to learn from.
I’m sure that we will have an Internet of Things (IoT) discussion later, but for now I will tie this back to our discussion and Fitbit wearable technology.
After the walk I thought about what had just happened, and was pleased because it seemed to spark some genuine interest in him. I’m always looking for that perfect recipe for innovation, but it is elusive and so far lacks repeatability. It may be possible to list many of the “ingredients” (intelligence, creativity, curiosity, the confidence to try and to accept and learn from failure, and multi-disciplinary experience and expertise) and “measurements” (a mix of complementary skills, a mix of roles, and a special environment: one that strives to learn and improve, rewards both learning and success without penalizing failure, and fosters a competitive culture that understands that in most cases the team is more important than any one individual).
That type of environment is magical when you can create it, but it takes so much more than just having people and a place that seem to match the recipe. A critical “activation” component or two is missing. Things like curiosity, creativity, ingenuity, and a bit of fearlessness.
I tend to visualize things, so while I was thinking about this I pictured a tree with multiple “brains” (my mental image looked somewhat like broccoli) that had visible roots. Those roots were creative ideas that went off in various directions. Trees with more roots that were bigger and went deeper would stand out in a forest of regular trees.
Each major branch (brain/person) would have a certain degree of independence, but ultimately everything on the tree worked as a system. To me, this description makes so much more sense than the idea of a recipe, but it still doesn’t bring me closer to being able to map the DNA of this imaginary tree.
At the end of our long walk it seemed that I probably learned as much as my son did. We made a connection that will likely lead to more walks and more discussions.
And in a strange way, I can thank the purchase of these Fitbit watches for being the motivation for an activity that led to this amazing discussion. From that perspective alone this was money well spent.
I have recently been investigating and visiting universities with my eldest daughter, who is currently a Senior in High School. Last week we visited Stanford University (an amazing experience) and then we spent a week in Northern California on vacation. After being home for a day and a half I am currently in Texas for a week of team meetings and training.
The first night of a trip I seldom sleep, so I was listening to the song “Don’t Let It Bring You Down” by Annie Lennox, which is a cover of a Neil Young song. That led to a YouTube search for the original Neil Young version, which led to me listening to the song “Old Man” – a favorite of mine for over 30 years. That led to some reflection, which ultimately led to this post.
I mention this because it is an example of the tangential thought process (generally viewed as a negative trait) that occurs naturally for me. It is something that helps me “connect the dots” faster and more naturally. It is part of the non-linear thinking associated with ADHD (again, something generally viewed as negative). The interesting thing is that in order to fit in and be successful with ADHD, you tend to develop logical systems for focus and consistency. For me personally, that has had many positive benefits – such as creating repeatable processes and automation.
The combination of linear and non-linear thinking can really fuel creativity. The downside is that it can take quite a while for others to see the potential of your ideas, which can be extremely frustrating. But, you learn to communicate better and deal with the fact that ideas can be difficult to grasp. The upside is that you tend to create relationships with other innovators because they tend to think like you, so you become relatable and interesting to them. The world is a strange place.
It is funny how there are several points in your life when you have an epiphany and things suddenly make complete sense. That causes you to realize how much time and effort could have been saved if you had only been able to figure something out sooner. As a parent I am always trying to identify and create learning shortcuts for my children so that they can reach those points much sooner than I did.
I started this post thinking that I would document as many of those lessons as possible to serve as a future reminder and possibly help others. Instead, I decided to post a few things that I view as foundational truisms in life that could help foster that personal growth process. So, here goes…
- Always work hard to be the best, but never let yourself believe that you are the best. Even if you truly are, it will be short-lived, as there are always people out there doing everything that they can to be the best. Ultimately, that is a good thing. You need to have enough of an ego to test the limits of things, but not one that is so big that it alienates or marginalizes those around you.
- Learn from everything you do – good and bad. Continuous improvement is so important. By focusing on this you constantly challenge yourself to try new things and find better (i.e., more effective, more efficient, and more consistent) ways to do things.
- Realize that the difference between a brilliant and a stupid idea is often perspective. Years ago I taught technical courses, and occasionally someone would describe something they did that just seemed strange or wrong. But, if you took the time to ask questions and try to understand why they did what they did you would often identify the brilliance in that approach. It is something that is both exciting and humbling.
- Incorporating new approaches or the best practices of others into your own proven methods and processes is part of continuous improvement, but it only works if you are able to set aside your ego and keep an open mind.
- Believe in yourself, even when others don’t share that belief. Remain open to feedback and constructive criticism as a way to learn and improve, but never give up on yourself. There is a huge but sometimes subtle difference between confidence and arrogance, and that line is often drawn at the point where you can accept that you might be wrong, or that there might be a better way to do something. Become the person that people like working with, and not the person that they avoid or want to see fail.
- Surround yourself with the best people that you can find. Look for people with diverse backgrounds and complementary skills. The best teams that I have ever been involved with consisted of high achievers who constantly raised the bar for each other while simultaneously creating a safety net for their teammates. The team grew and did amazing things because everyone was both very competitive and very supportive of each other.
- Keep notes or a journal because good ideas are often fleeting and hard to recall. And remember, good ideas can come from anywhere, so keep track of the suggestions of others and make sure that you attribute those ideas to the proper source.
- Try to make a difference in the world. Try to leave everything you “touch” (job, relationship, project, whatever) in a better state than before you were there. Helping others improve and leading by example are two simple ways of making a difference.
- Accept that failure is a natural obstacle on your path to success. You are not trying hard enough if you never fail. But, you are also not trying hard enough if you fail too often. That is very subjective, and honest introspection is your best gauge. Be accountable, accept responsibility, document the lessons learned, and move on.
- Dream big, and use that as motivation to learn new things. While I was funding medical research efforts I spent time learning about genetics, genomics, and biology. That expanded to interests in nanotechnology, artificial intelligence, machine learning, neural networks, and interfaces such as natural language and non-verbal/emotional communication. Someday I hope to tie these together in a way that could help cure a disease (Arthritis) and improve the quality of life for millions of people. Will that ever happen? I don’t know, but I do know that if I don’t try, it will never happen because of anything that I did.
- Focus on the positive, not the negative. Creativity is stifled in environments where fear and blame rule.
- Never hesitate to apologize when you are wrong. This is a sign of strength, not weakness.
- And above all else, honesty and integrity should be the foundation for everything you do and are.
Hopefully, this will help my children become the best people possible, and ideally early on in their lives. I was 30 years old before I felt that I really had a clue about a lot of these things. Until that point I was somewhat selfish and focused on winning. Winning and success are good things, but they are better when done the right way.
I started this blog with the goal of it becoming an “idea exchange,” as well as a way to pass along lessons learned to help others. Typical guidance for a blog is to focus on one thing only and do it well in order to develop a following. That is especially important if you want to monetize the blog, but that is not and has not been my goal.
One of the things that has surprised me is how different the comments and likes are for each post. Feedback from the last post was even more diverse and surprising than usual. It ranged from comments about “Siri vs Google,” to feedback about Sci-Fi books and movies, to Artificial Intelligence.
I asked a few friends for feedback and received something very insightful (Thanks Jim). He stated that he found the blog interesting, but wasn’t sure what the objective was. He went on to identify several possible goals for the last post. Strangely enough, his comments mirrored the type of feedback that I received. That pointed out an area for improvement to me, and I appreciated that, as well as the wisdom of focusing on one thing. Who knows, maybe in the future…
This also reminded me of a white paper written 12-13 years ago by someone I used to work with. It was about how Bluetooth was going to be the “next big thing.” He had read an IEEE paper or something and saw potential for this new technology. His paper provided the example of your toaster and coffee maker communicating so that your breakfast would be ready when you walk into the kitchen in the morning.
At that time I had a couple of thoughts. Who cared about something that only had a 20-30 foot range when WiFi was becoming popular and had much greater range? In addition, a couple of years earlier I had a tour of the Microsoft “House of the Future,” in which everything was automated and key components communicated with each other. But everything in the house was hardwired or used WiFi – not Bluetooth. It was easy to dismiss his assertion because it seemed to lack pragmatism, and the value of the idea was difficult to quantify given the use case provided.
Looking back now I view that white paper as having insight (if it were visionary he would have come out with the first Bluetooth speakers, or car interface, or even phone earpiece, and gotten rich), but it failed to present use cases that were easy enough to understand yet different enough from what was available at the time to demonstrate the real value of the idea. His expression of the idea was not tangible enough, and therefore too slippery to be easily grasped and valued.
I’m a huge believer that good ideas sometimes originate where you least expect them. Often those ideas are incremental in nature – seemingly simple and sometimes borderline obvious, often building on some other idea or concept. An idea does not need to be unique in order to be important or valuable, but it does need to be presented in a way that makes its benefits, differentiation, and value easy to understand. That is just good communication.
One of the things I miss most from when my consulting company was active was the interaction between a couple of key people (Jason and Peter) and myself. Those guys were very good at taking an idea and helping build it out. This worked well because we had some overlapping expertise and experiences, as well as skills and perspectives that were more complementary in nature. That diversity added depth and breadth to our efforts to develop and extend those ideas: we asked the tough questions early and made sure we could convince each other of the value.
Our discussions were creative and highly collaborative as well as a lot of fun. Each of us improved from them, and the outcome was usually something viable from a commercial perspective. As a growing and profitable small business you need to constantly innovate to differentiate yourself from your competition. Our discussions were driven as much by necessity as they were by intellectual curiosity, and I personally believe that this was part of the magic.
So, back to the last post. I view various technologies as building blocks. Some are foundational and others are complementary. To me, the key is not viewing those various technologies as competing with each other. Instead, I look for potential value created by integrating them with each other. That may not always be possible and does not always lead to something better, but occasionally it does, so to me it is a worthwhile exercise. With regard to voice technology, I do believe that we will see more, better, and smarter applications of it – especially as real-time systems become more complex due to the use of an increasing number of specialized component systems and sensors.
While today’s smartphones would not pass the Turing Test or proposed alternatives, they are an improvement over more simplistic voice translation tools available just a few years ago. Advancement requires the tools to understand context in order to make inferences. This brings you closer to machine learning, and big data (when done right) significantly increases that potential.
Ultimately, this all leads back to Artificial Intelligence (at least in my mind). It’s a big leap from a simple voice translation tool to AI, but when viewed as building blocks it is not such a stretch.
Now think about creating an interface (API) that allows one smart device to communicate with another in a manner akin to the collaborative efforts described above with my old team. It’s not simply having a front-end device exchanging keywords or queries with a back-end device. Instead, it is two or more devices and/or systems having a “discussion” about what is being requested, looking at what each component “knows,” asking clarifying questions and making suggestions, and then finally taking that multi-dimensional understanding of the problem to determine what is really needed.
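To make that idea concrete, here is a minimal sketch of what such a “discussion” between components might look like. Everything in it (the class names, the topics, the peers) is my own invention for illustration, not a real API.

```python
# Purely illustrative: two "smart" components negotiating a request.
# All names and data here are hypothetical.

class KitchenSensor:
    """A back-end device that knows about its own little domain."""
    def knows(self, topic):
        return topic in {"pantry_contents", "oven_status"}

    def answer(self, topic):
        return {"pantry_contents": ["eggs", "flour"],
                "oven_status": "preheated"}[topic]

class Assistant:
    """A front-end device that decomposes a request and asks around."""
    def __init__(self, peers):
        self.peers = peers

    def handle(self, request, needed_topics):
        facts = {}
        for topic in needed_topics:
            # Rather than firing one keyword query, ask each peer what
            # it actually knows and build up context first.
            for peer in self.peers:
                if peer.knows(topic):
                    facts[topic] = peer.answer(topic)
                    break
            else:
                # No peer can help: surface a clarifying question
                # instead of guessing, like a good collaborator would.
                return f"Clarifying question: can anyone tell me about '{topic}'?"
        return f"For '{request}', here is what the team knows: {facts}"

assistant = Assistant(peers=[KitchenSensor()])
print(assistant.handle("make breakfast", ["pantry_contents", "oven_status"]))
```

The point is not the toy code itself, but the shape of the exchange: gather what each component knows, surface gaps as questions, and only then act.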
So, possibly not true AI, but a giant leap forward from what we have today. That would help turn the science fiction of the past into science fact in the near future. The better the understanding and inferences by the smart system, the better the results.
I also believe that an unintended consequence of these new smart systems is that the more human-like they become in their approach, the more likely they will be to make errors like a human. Hopefully they will be able to back-test recommendations to validate them and minimize errors. Being intelligent enough to monitor results and suggest corrective actions when a recommendation is not having the desired effect would make them even “smarter.” Best of all, there won’t be an ego creating a distortion filter on the results. Or maybe there will…
A lot of the building blocks required to create these new systems are available today. But, it takes both vision and insight to see that potential, translate ideas from slippery and abstract to tangible and purposeful, and then start building something really cool. As that happens we will see a paradigm shift in how we interact with computers and how they interact with us. That will lead us to the systematic integration that I wrote about in a big data / nanotechnology post.
So, what is the real objective of my blog? To get people thinking about things in a different way, to foster collaboration and partnerships between businesses and educational institutions in order to push the limits of technology, and to foster discussion about what others believe the future of computing and smart devices will look like. I’m confident that I will see these types of systems in my lifetime, and believe in the possibility of a lot of this occurring within the next decade.
What are your thoughts?
Recently I was helping one of my children research a topic for a school paper. She was doing well, but the results she was getting were overly broad. So, I taught her some “Google-Fu,” explaining how you can structure queries in ways that yield better results. She replied that search engines should be smarter than that. I explained that sometimes the problem is that search engines look at your past searches and customize results as an attempt to appear smarter or to motivate someone to do or believe something.
Unfortunately, those results can be skewed and potentially lead someone in the wrong direction. It was a good reminder that getting the best results from search engines often requires a bit of skill and query planning, as well as occasional third-party validation.
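For anyone curious, here is the flavor of what I showed her. The topic and the exact operators are just an example I made up, but quoting phrases, restricting sites, and excluding terms are all standard tricks supported by the major search engines.

```python
# Illustrative "Google-Fu": the same topic, searched two ways.
# The example topic is invented; quotes, site:, and the minus sign
# are widely supported search operators.

broad = "civil war causes"

# Quoted phrases must match exactly, site: narrows the sources,
# and -term excludes pages that mention that term.
focused = '"economic causes" "american civil war" site:.edu -quiz'

print("Broad:  ", broad)
print("Focused:", focused)
```

The second query is more work to write, but it trades a sea of loosely related pages for a short list of relevant ones.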
Then the other day I saw this commercial from Motel 6 (“Gas Station Trouble”) where a man has problems getting good results from his smartphone. That reminded me of watching someone speak to his phone and grow increasingly frustrated by the responses he received. His question went something like this:
“Siri, I want to take my wife to dinner tonight, someplace that is not too far away, and not too late. And she likes to have a view while eating so please look for something with a nice view. Oh, and we don’t want Italian food because we just had that last night.”
Just as amazing as the question being asked was watching him ask it over and over again in the exact same way, each time becoming even more frustrated. I asked myself, “Are smartphones making us dumber?” Instead of contemplating that question I began to think about what future smart interfaces would or could be like.
I grew up watching Sci-Fi computer interfaces like “Computer” on Star Trek (1966), “HAL” in 2001: A Space Odyssey (1968), “KITT” from Knight Rider (1982), and “Samantha” from Her (2013). These interfaces had a few things in common:
- They responded to verbal commands;
- They were interactive – not just providing answers, but also asking qualifying questions and allowing for interrupts to drill down or refine the search (e.g., with pictures or questions that resembled verbal Venn diagrams);
- They often provided suggestions for alternate queries based on intuition. That would have been helpful for the gentleman trying to find a restaurant.
Despite 50 years of science fiction examples, we are still a long way off from realizing the goal of a truly intelligent interface. Like many new technologies, these interfaces were envisioned by science fiction writers long before they could become science fact.
There seems to be a spectrum of common beliefs about modern interfaces. On one end are products that make visualization easy, facilitating understanding, refinement, and drill-down of data sets. Tableau is a great example of this type of easy-to-use interface. At the other end of the spectrum the emphasis is on back-end systems – robust computer systems that digest huge volumes of data and return the results of complex queries within seconds. Several other vendors offer powerful analytics platforms. In reality, you need both a strong front-end and a strong back-end to achieve the full potential of either.
But, there is so much more potential…
I predict that within the next 3-5 years we will see business and consumer interface examples (powered by Natural Language Processing, or NLP) that are closer to the verbal interfaces from those familiar Sci-Fi shows (albeit with limited capabilities and no flashing lights).
Within the next 10 years I believe we will have computer interfaces that intuit our needs and facilitate the generation of correct answers quickly and easily. While this is unlikely to be at the level of “The world’s first intelligent Operating System” envisioned in the movie “Her,” and probably won’t even be able to read lips like “HAL,” it should be much more like HAL and KITT than like Siri (from Apple) or Cortana (from Microsoft).
Siri was groundbreaking consumer technology when it was introduced. Cortana seems to have taken a small leap ahead. While I have not mentioned Google Now, it is somewhat of a latecomer to this consumer smart interface party, and in my opinion is behind both Siri and Cortana.
So, what will this future smart interface do? It will need to be very powerful, harnessing a natural language interface on the front-end with an extremely flexible and robust analytics interface on the back-end. The language interface will need to take a standard question (in multiple languages and dialects), just as if you were asking a person; deconstruct it using Natural Language Processing; and develop the proper query based on the available data. That is important, but only gets you so far.
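As a toy illustration of that front-end step, here is a sketch that pulls intent and constraints out of a natural question and turns them into a structured query. Real NLP pipelines are vastly more sophisticated; every function and field name here is hypothetical.

```python
# Toy sketch: natural question in, structured query out.
# Real NLP is far more involved; all names here are invented.
import re

def parse_question(question):
    q = question.lower()
    intent = "find_restaurant" if "dinner" in q else "unknown"
    constraints = {}
    if "not too far" in q:
        constraints["max_distance_miles"] = 10  # assumed default
    match = re.search(r"don.t want (\w+) food", q)
    if match:
        constraints["exclude_cuisine"] = match.group(1)
    if "view" in q:
        constraints["has_view"] = True
    return {"intent": intent, "constraints": constraints}

print(parse_question(
    "I want to take my wife to dinner tonight, someplace that is not "
    "too far away, with a nice view. We don't want Italian food."))
```

Even this trivial version shows why context matters: “not too far” only becomes a usable filter once the system makes an assumption about what the speaker means.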
Data will come from many sources – the things we store today in relational, object, graph, and NoSQL databases. There will be structured and unstructured data that must be joined and filtered quickly and accurately. In addition, context will be more important than ever. Pictures and videos could be scanned for faces (via facial recognition) and location (via geotagging), and videos could also be analyzed for speech. Relationships will be identified and inferred from a variety of sources, using both data and metadata. Sensors will collect data from almost everything we do and (someday) wear, which will provide both content and context.
The use of Stylometry will identify outside content likely related to the people involved in the query and provide further context about interests, activities, and even biases. This is how future interfaces will truly understand (not just interpret), intuit (so they can determine what you really want to know), and then present results that may be far more accurate than what we are used to today. Because the interface is interactive in nature, it will provide the ability to organize and analyze subsets of data quickly and easily.
So, where do I think this technology will originate? I believe that it will be adapted from video game technology. Video games have consistently pushed the envelope over the years, helping drive the need for higher-bandwidth I/O capabilities in devices and networks, better and faster graphics capabilities, and larger and faster storage (which ultimately led to flash memory and even Hadoop). Animation has become very lifelike, and games are becoming more responsive to audio commands. It is not a stretch of the imagination to believe that this is where the next generation of smart interfaces will be found (rather than in the evolution of current smart interfaces).
Someday it may no longer be possible to “tweak” results through the use or omission of keywords, quotation marks, and flags. Additionally, it may no longer be necessary to understand special query languages (SQL, SPARQL, the various NoSQL dialects, etc.) and their syntax. We won’t have to worry as much about incorrect joins, spurious correlations, and biased result sets. Instead, we will be given the answers we need – even if we don’t realize that this was what we needed in the first place. At that point computer systems may appear nearly omniscient.
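To make the “incorrect joins” worry concrete, here is a small made-up example (in-memory SQLite; the tables and numbers are entirely hypothetical) of the kind of quiet mistake a smarter interface could catch for us:

```python
# A hand-written join that silently skews a result.
# All tables and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders  (id INTEGER, customer TEXT, total REAL);
CREATE TABLE refunds (order_id INTEGER, amount REAL);
INSERT INTO orders  VALUES (1, 'Ann', 100.0), (2, 'Bob', 50.0);
INSERT INTO refunds VALUES (1, 10.0), (1, 5.0);  -- two refunds, one order
""")

# Joining before aggregating counts order 1's total twice.
bad = db.execute("""
    SELECT SUM(o.total) FROM orders o
    JOIN refunds r ON r.order_id = o.id
""").fetchone()[0]

# Collapsing refunds to one row per order first keeps each total counted once.
good = db.execute("""
    SELECT SUM(o.total) FROM orders o
    JOIN (SELECT order_id FROM refunds GROUP BY order_id) r
      ON r.order_id = o.id
""").fetchone()[0]

print(bad, good)  # 200.0 vs 100.0
```

A human analyst has to know to look for this; an interface that truly understands the question should never make the mistake in the first place.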
When this happens, parents will no longer need to teach their children “Google-Fu.” Those are going to be interesting times indeed.
Back in early 2011, 15 other members of the Executive team at Ingres and I made a bet on the future of our company. We knew that we needed to do something big and bold, and decided to build what we thought the standard data platform would be in 5-7 years. A small minority of the people on that team did not believe this was possible and left, while the rest of us focused on making it happen. There were three strategic acquisitions to fill in the gaps in our Big Data platform. Today (as Actian) we have nearly achieved our goal. It was a leap of faith back then, but our vision turned out to be spot-on and our gamble is paying off today.
Every day my mailbox is filled with stories, seminars, white papers, etc. about Big Data. While it feels like this is becoming more mainstream, it is interesting to read and hear the various comments on the subject. They range from, “It’s not real” and “It’s irrelevant” to “It can be transformational for your business” to “Without big data there would be no <insert company name here>.”
What I continue to find amazing is hearing comments about big data being optional. It’s not – that genie has already been let out of the bottle. There are incredible opportunities for those companies that understand and embrace the potential. I like to tell people that big data can be their unfair advantage in business. Is that really the case? Let’s explore that assertion and find out.
We live in the age of the “Internet of Things.” Data about nearly everything is everywhere, and the tools to correlate that data to gain an understanding of so many things (activities, relationships, likes and dislikes, etc.) are readily available. With smart devices that enable mobile computing we have the extra dimension of location. And, with technologies such as graph databases (queried with languages like SPARQL), graphical interfaces to analyze that data (such as Sigma), and identification techniques such as Stylometry, it is getting easier than ever to identify and correlate that data.
We are generating increasingly larger and larger volumes of data about everything we do and everything going on around us, and tools are evolving to make sense of that data better and faster than ever. Those organizations that perform the best analysis, get the answers fastest, and act on that insight quickly are more likely to win than the organizations that look at a smaller slice of the world or adopt a “wait and see” posture. So, that seems like a significant advantage in my book. But, is it an unfair advantage?
First, let’s keep in mind that big data is really just another tool. Like most tools, it has the potential for misuse and abuse. And, whether a particular application is viewed as “good” or “bad” depends on the goals and perspective of the entity using the tool (which may be the polar opposite of the view held by the people being targeted). So, I will not attempt to make judgments about the various use cases, but rather present a few and let you decide.
Scenario 1 – Sales Organization: What if you could not only understand what you were being told a prospect company needs, but also had a way to validate and refine that understanding? That’s half the battle in sales (budget, integration, and support/politics are other key hurdles). Imagine data that helped you understand not only the actions of that organization (customers and industries, sales and purchases, gains and losses, etc.), but also the goals, interests, and biases of its stakeholders and decision makers. This could provide a holistic view of the environment and allow you to provide a highly targeted offering, with messaging tailored to each individual. That is possible, and I’ll explain how soon.
Scenario 2 – Hiring Organization: As a hiring manager there are many questions that cannot be asked. While I’m not an attorney, I would bet that State and Federal laws have not kept pace with technology. And, while those laws vary state by state, there are likely loopholes that allow for the use of public records. Moreover, implied data that is not officially taken into consideration could color the judgment of a hiring manager or organization. For instance, if you wanted to “get a feeling” for whether a candidate might fit in with the team or the culture of the organization, or have interests and views that are aligned with or contrary to your own, you could look for personal internet activity that would provide a more accurate picture of that person’s interests.
Scenario 3 – Teacher / Professor: There are already sites in use to search for plagiarism in written documents, but what if you had a way to make an accurate determination about whether an original work was created by your student? There are people who, for a price, will do the work and write a paper for a student. So, what if you could not only determine that the paper was not written by your student, but also determine who the likely author was?
Do some of these things seem impossible, or at least implausible? Personally, I don’t believe so. Let’s start with the typical data that our credit card companies, banks, search engines, and social network sites already have related to us. Add to that the identified information that is available for purchase from marketing companies and various government agencies. That alone can provide a pretty comprehensive view of us. But, there is so much more that’s available.
Think about the potential of gathering information from intelligent devices that are accessible through the Internet, or your alarm and video monitoring system, etc. These are intended to be private data sources, but one thing history has taught us is that anything accessible is subject to unauthorized access and use (just think about the numerous recent credit card hacking incidents).
Even de-identified data (medical/health/prescription/insurance claim data is one major example), which receives much less protection and can often be purchased, could be correlated with a reasonably high degree of confidence to gain an understanding of other “private” aspects of your life. The key is to look for connections (websites, IP addresses, locations, businesses, people) and things that are logically related (such as illnesses, treatments, and prescriptions), and then make as accurate an identification as possible. Stylometry looks at things like sentence complexity, function words, co-location of words, and misspellings and misuse of words, and will likely someday take into consideration things like idea density. It is nearly impossible to remain anonymous in the Age of Big Data.
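For a sense of how simple the basic stylometric signals are, here is a toy pass at two of the features just mentioned: sentence complexity and function-word usage. Real stylometry uses far more features and much more careful statistics; this only shows the flavor.

```python
# Toy stylometric profile: sentence length and function-word usage.
# Real systems use many more features; this is only illustrative.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "for", "on", "with", "as", "but"}

def stylometric_profile(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    fw_counts = Counter(w for w in words if w in FUNCTION_WORDS)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Function-word rates are largely unconscious, which makes
        # them hard for an author to disguise.
        "function_word_rate": sum(fw_counts.values()) / max(len(words), 1),
        "top_function_words": fw_counts.most_common(3),
    }

sample = "It was the best of times. It was the worst of times."
print(stylometric_profile(sample))
```

Comparing profiles like this across documents is the core trick: the same unconscious habits show up wherever the same person writes.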
There has been a paradigm shift when it comes to the practical application of data analysis, and the companies that understand this and embrace it will likely perform better than those who don’t. There are new ethical considerations that arise from this technology, and likely new laws and regulations as well. But for now, the race is on!