For several years my company and my family funded a dozen or so medical research projects. I had the pleasure of meeting and working with many brilliant MD/Ph.D. researchers. My goal was to fund $1 million of medical research and find a cure for arthritis. We didn’t reach that goal, but many good things came out of that research.
Something that amazed me was how research worked. Competition for funding is intense, so there was much less collaboration between institutions than I would have expected. At one point we were funding similar projects at two institutions. The projects went in two very different directions, and it was clear to me that one was going to be much more successful than the other. It seemed almost wasteful, and I thought that there must be a better, more efficient and cost-effective way of managing research efforts.
So, in 2006 I had an idea. What if I could create a cloud-based (a very new term at the time) research platform that would support global collaboration? It would need to support true analytical processing, statistical analysis, document management (something else that was fairly new at the time), and desktop publishing at a minimum. Publishing research findings is very important in this space, so my idea was to provide a workspace that supported end-to-end research efforts (inception to publication) and fostered collaboration.
This platform would only really work if there were a new way to allow interested parties to fund this research that was easy to use and could reach a large audience. People could make contributions based on area of interest, specific projects, specific individuals working on projects, or projects in a specific regional area. The idea was a lot like what Crowdtilt (www.crowdtilt.com) is today. This funding mechanism would support non-traditional collaboration, and would hopefully have a huge impact on the research community and their findings.
Additionally, this platform would support the collection of suggestions and ideas. Good ideas can come from anywhere – especially when you don’t know that something is not supposed to work.
During one funding review meeting I made a naïve statement about using cortisone injections to treat TMJ arthritis. I was told why this would not work. But, a month or so later I received a call explaining how this might actually work. That led to a research project and positive results (see http://onlinelibrary.wiley.com/doi/10.1002/art.21384/pdf). You never know where the next good idea might come from, so why not make it easy for people to share those ideas?
By the end of 2007 I had designed an SOA (service-oriented architecture) built on open source products that would do most of what I needed. Then, in 2008 Google announced the “Project 10^100” competition. I entered, confident that I would at least get honorable mention (alas, nothing came from this).
Then, in early 2010 I spent an hour discussing my idea with the CTO of a popular Cloud company. This CTO had a medical background, liked my idea, offered a few suggestions, and even offered to help. It was the perfect opportunity. But, I had just started a new position at work and this fell to the wayside. That was a shame, and I only have myself to blame. It is something that has bothered me for years.
It’s 2013, there are far more tools available today to make this platform a reality, and it still does not exist. The reason I’m writing this is that the idea has merit, and I think there might be others who feel the same way and would like to work on making this dream a reality. It’s a chance to leverage technology to potentially make a huge impact on society. And, it can create opportunities for people in regions that might otherwise be ignored to contribute to this greater good.
Idealistic? Maybe. Possible? Absolutely!
Two years ago I was assigned some of the product management and product marketing work for a new version of a database product we were releasing. To me this was the trifecta of bad fortune. I didn’t mind product marketing but knew it took a lot of work to do it well. I didn’t feel that product management was a real challenge (I was so wrong here), and I really didn’t want to have anything to do with maps.
I was so wrong in so many ways. I didn’t realize that real product management was just as much work as product marketing. And, I learned that geospatial was far more than just maps. It was quite an eye-opening experience for me – one that turned out to be very valuable as well.
First, let me start by saying that I now have a huge appreciation for cartography. I never realized how complex mapmaking really is, and how there is just as much art as there is science (a lot like programming). Maps can be so much more than just simple drawings.
I had a great teacher when it came to geospatial – Tyler Mitchell (@spatialguru). He showed me the power of overlaying tabular business data with common spatial data (addresses, zip / postal codes, coordinates) and presenting the “conglomeration of data” in layers that made things easier to understand. People buy easy, so that is good in my book.
The more I thought about this technology – simple points, lines, and areas combined with powerful functions – the more I began to think about other uses. I realized that you could use it to correlate very different data sets and graphically show relationships that would otherwise be extremely difficult to see.
Think about having access to population data, demographic data, business and housing data, crime data, health / disease data, etc. Now, think about a simple, easy-to-use graphical dashboard that lets you overlay as many of those data sets as you want. Within seconds you see very specific clusters of data that are correlated geographically.
Some data may only be granular to a zip code or city, but other data will allow you to identify patterns down to specific streets and neighborhoods. Just think of how something so simple can help you make decisions that are so much better. The interesting thing is how few businesses are really taking advantage of this cost-effective technology.
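The overlay idea above can be sketched in a few lines. This is a minimal illustration, not any particular GIS product: it buckets hypothetical (lat, lon) records from two data sets into coarse grid cells and keeps only the cells where the layers overlap geographically – the "clusters" a dashboard would highlight.

```python
from collections import defaultdict

def grid_cell(lat, lon, size=0.01):
    """Bucket a coordinate into a grid cell roughly a neighborhood wide."""
    return (round(lat / size), round(lon / size))

def overlay(*layers, size=0.01):
    """Count how many records from each layer fall into each grid cell."""
    cells = defaultdict(lambda: [0] * len(layers))
    for i, layer in enumerate(layers):
        for lat, lon in layer:
            cells[grid_cell(lat, lon, size)][i] += 1
    # Keep only cells where every layer has at least one record --
    # the places where the data sets correlate geographically.
    return {cell: counts for cell, counts in cells.items() if all(counts)}

# Hypothetical sample data: (lat, lon) points for two layers.
crime = [(40.712, -74.006), (40.713, -74.004), (40.780, -73.970)]
health = [(40.711, -74.007), (40.650, -73.950)]

hotspots = overlay(crime, health)
```

A real system would use proper spatial joins (e.g., points against zip-code or neighborhood polygons) rather than a uniform grid, but the principle – co-locating records from unrelated data sets by geography – is the same.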
If that weren’t enough, consider location-aware applications and the proliferation of smart devices, which lend themselves to so many helpful and lucrative mobile applications. Even more than that, those applications make the devices themselves more helpful and user friendly. Just think about how easy it is to find the nearest Indian restaurant when the thought of curry for lunch hits you. And these things are just the tip of the iceberg.
What a lucky day it was for me when I was assigned this work that I did not want. Little did I know that it would change the way that I think about so many things. That’s just the way things work out sometimes.
Ever since I worked on redesigning a risk management system at an insurance company (1994–1995), I have been impressed by how you can make better decisions with more data – assuming it is the right data. The question, “What is the right data?” has intrigued me for years: what seems like common sense today could have been unknown 5–10 years ago, and may be completely passé 5–10 years from now. Context becomes very important because of the variability of data over time.
And this is what makes Big Data interesting. There really is no right or wrong answer or definition. Having a framework to define, categorize, and use that data is important. And at some point being able to refer to the data in-context will be very important as well. Just think about how challenging it could be to compare scenarios or events from 5 years ago with those of today. It’s not apples-to-apples but could certainly be done. It is pretty cool stuff.
The way I think of Big Data is similar to a water tributary system. Water gets into the system many ways – rains from the clouds, sprinkles from private and public supplies, runoff and overflow, etc. It also has many interesting dimensions, such as quality / purity (not necessarily the same due to different aspects of need), velocity, depth, capacity, and so forth. Not all water gets into the tributary system (e.g., some is absorbed into the groundwater tables, and some evaporates), so data loss is expected. If you think in terms of streams, ponds, rivers, lakes, reservoirs, deltas, etc. there are many relevant analogies that can be made. And just like the course of a river may change over time, data in our water tributary system could also change over time.
Another part of my thinking is based on an experience I had about a decade ago (2002 – 2003 timeframe) working on a project for a Nanotech company. In their labs they were testing various things. There were particles that changed reflectivity based on temperature that were embedded in shingles and paint. There were very small batteries that could be recharged tens of thousands of times, were light, and had more capacity than a 12-volt car battery. And, there was a section where they were doing “biometric testing” for the military. I have since read articles about things like smart fabrics that could monitor the health of a soldier, and do things like apply basic first aid when a problem was detected. This company felt that by 2020 advanced nanotechnology would be widely used by the military, and by 2025 it would be in wide commercial use. Is that still a possibility? Who knows…
Much of what you read today is about the exponential growth of data. I agree with that, but also believe that the nature of and sources of that data will change significantly. For example, nano-particles in engine oil will provide information about temperature, engine speed and load, and even things like rapid changes in movement (fast take-off or stops, quick turns). The nanoparticles in the paint will provide weather conditions. The nanoparticles on the seat upholstery will provide information about occupants (number, size, weight). Sort of like the “sensor web,” from the original Kevin Delin perspective. A lot of data will be generated, but then what?
I believe that time will become an important aspect of every piece of data, and that location (X, Y, and Z coordinates) will be just as important. But, not every sensor will collect location. I believe there will be multiple data aggregators in common use at common points (your car, your house, your watch). Those aggregators will package the available data in something akin to an XML object, which allows flexibility. From my perspective this is where things become very interesting.
Currently companies like Google make a lot of money from aggregating data from multiple sources, correlating it to a variety of attributes, and then selling knowledge derived from that plethora of data. I believe that there will be opportunities for individuals to use “data exchanges” to manage, sell, and directly benefit from their own data. The more interesting their data, the more value it has and the more benefit it provides to the person selling it. This could have a huge economic impact, and that would foster both the use and expansion of various commercial ecosystems required to manage the commercial and privacy aspects of this technology.
The next logical step in this vision is “smart everything.” For example, you could buy a shirt that is just a shirt. But, for an extra cost you could turn on medical monitoring or refractive heating / cooling. And, if you felt there was a market for extra dimensions of data that could benefit you financially, then you could enable those sensors as well. Just think of the potential impact that technology would make to commerce in this scenario.
This is what I personally believe will happen within the next decade or so. This won’t be the only type or use of big data. Rather, there will be many valid types and uses of data – some complementary and some completely discrete. It has the potential to become a confusing mess. But people will find ways to ingest and correlate that data, identify value in it – today or in the future – and decide to store it (potentially forever). Utilizing that data will become a competitive advantage for the people and companies that know how to do something interesting with it. Who knows what will be viewed as valuable data 5–10 years from now, but it will likely be different than what we view as valuable data today.
So, what are your thoughts? Can we predict the future, or simply create platforms that are powerful enough, flexible enough, and extensible enough to change as our perspective of what is important changes? Either way it will be fun!
Technology was not native to me, at least relative to children and young adults today. Simple four function calculators started becoming popular when I was in Elementary School. I only had a single computer course in High School (it was the only one offered). We had a Timex Sinclair and later a Commodore 64 computer at home. It was fun, but I wasn’t hooked.
I started a car and motorcycle parts business when I was 18 years old. Initially I was looking for a way to get cheaper parts for myself and thought if I could make money doing it then all the better. Nearly everything I did was manual. Then I learned about a Radio Shack TRS-80 at college that had a word processing program. I used that to create mailings to parts companies, distributors, and potential customers. Before long I had a catalog of products that I could sell and a small but loyal customer base buying products and services from me. If QuickBooks had been available back then I may have kept the business running. Doing everything manually just took too much time. Even so, this was my first technology win and I liked it.
A few years later I was programming at a local marketing company. The MIS Director (what IT used to be called) decided to purchase a new relational database product that came with a 4GL application language. This was in 1987 and this technology was very new. The product was sold as being able to save “75% of your development time and effort.” Most of the seasoned people didn’t want to risk their reputations on something that might not work.
I was new and had nothing to lose, so for the next month I read every manual cover-to-cover. Before long I was working on new applications, and soon I became the in-house expert. This led to a fast track of promotions and being selected to develop the majority of new applications being sold. It was not easy, but it was definitely fun and good for my career.
My first and arguably most influential mentor was my manager at this job (Jim). He taught me about designing parameter driven systems that were flexible and extensible. He also taught me that “good enough usually isn’t good enough.” Most people are lucky to have one really good mentor during their career. I’ve been blessed with four of them at different stages of my career. It has motivated me to return the favor and help others whenever I can.
A few years later I was working at a software company that was creating a new standard product on this database platform. Nobody was trained on the product and most wrote their embedded C / SQL programs just like any other 3GL program. I pointed out to the VP of Development that this would be a problem. He didn’t want to hear that. I pushed for a concurrency test and everything locked-up. Many people were suddenly upset with me.
We spent the next two months creating functions to manage transactions, optimizing everything (even table structures to get the best byte alignment), and making this new packaged system work. The VP now liked and respected me and that changed our working dynamics. That shifted the focus from people and personalities to technologies and results.
We worked on other aspects of the system to enhance performance. We created a system much like Memcached in Perl (back in 1990) that allowed us to handle the workflow of even the fastest warehouses. We did many leading-edge things at the time (HA clusters with automatic failover, automated restart of remote devices to resume work in progress to the point of failure, outsourcing to India and an X.400 connection that I configured, distributed systems, client/server systems, etc.) I learned a lot from that experience.
A few years later I was working for a database vendor. This was in the heyday of consulting where projects were huge and rates were high. My first project (on my second day on the job) was being assigned to redesign a Risk Management System at an insurance company that started using our products. I soon found that the project had been going for two years, had binders full of specifications, but that nothing was actionable. I did not make many friends those first two weeks as I pointed these things out.
I offered to facilitate a JAD (joint application design) session with multiple lines of business. This pointed out issues that even they were not aware of and allowed us to begin designing a flexible system that would accommodate all lines of business. We used an agile approach to prototype the new system, demonstrations to get buy-in, and moved the project forward quickly. Six months later the first part of that functionality went live. The system was fully functional within a year!
I had the opportunity to work on some of the largest databases at the time (roughly 30 GB total which is small by today’s measures), work on leading-edge technology (Clustering, VLDB, and Enterprise Unix systems), and really become a true Consultant along the way (with the help of another mentor – Bill). I was sent to several Unix Internals courses and then worked with our Engineering team to improve our products and create configurations that supported other large companies having similar problems.
A few years later I was working at a small start-up company that created the world’s first commercial JDBC driver. I had worked with many very smart people before, but now I was working with a couple of truly brilliant people. My main contribution this time was on the business side, but we learned a lot from each other as we grew the business to over $1M in sales within the first year.
One thing that sticks with me is that during this time I became interested in VRML (virtual reality modeling language). I had an idea (1997) that we could create a website to show the insides of buildings, productize it, and sell it to real estate companies and larger apartment complex owners. My idea was not well received by the team, but a few years later systems like this were being developed and a few people were making a lot of money. That taught me to have more faith in ideas based on new technology, regardless of what others thought. It also brought me back to an important concept in business and consulting: communicate ideas and benefits in ways that everyone can understand, rather than focusing on the technology itself.
Over the years these lessons learned have helped with BI (business intelligence) – building dashboards using relevant KPIs tailored to the specific audience, mobile computing, cloud computing, and now big data. To most people these things are “not important until they become important,” which is often 6 – 12 months (or more) later. From my perspective the real trick isn’t in trying to understand the next big thing, but rather to consider better, easier, and more efficient ways of doing things you do today.
This is why I love technology. It has helped me accomplish many things that have had a tangible impact on the businesses that I have worked for and consulted with. It has taught me to think about problems and ideas from various perspectives, and to leverage lessons learned in one area to help solve problems in another. Technology has provided me opportunities to learn about and work on solving business and technical problems in several industries as I ponder, “Why not?”
And, my interest in technology has allowed me to meet and work with so many interesting and incredible people throughout my career in so many industries and settings. That’s much more than I ever expected when I took my first programming course so long ago, and it has become a significant aspect of almost everything I do.