IoT
Genetics, Genomics, Nanotechnology, and more
Science has been interesting to me for most of my life, but it wasn’t until my first child was born that I shifted from “interested” to “involved.” My eldest daughter was diagnosed with Systemic Onset Juvenile Idiopathic Arthritis (SoJIA, originally called Juvenile Rheumatoid Arthritis, or JRA) when she was 15 months old. That also happened to be about six months after I started my old consulting company, in the middle of a very critical Y2K ERP system upgrade and rehosting project. It was definitely a challenging time in my life.
At that time there was very little research on JRA: it was estimated that only about 30,000 children were affected by the disease, and the implication was that funding research would not have a positive ROI. As an aside, this was also a few years before the breakthroughs of biologic medicines like Enbrel for use in children.
One of the things I learned was that this disease could be horribly debilitating. Children often developed physical deformities as a result of it. Even worse, the systemic type that my daughter has could result in premature death. As a first-time parent, it was extremely difficult to imagine that kind of life for my child.
Luckily, the company that I had just started was taking off, so I decided to find ways to personally make a tangible difference for all children with this disease. We decided to take 50% of our net profits and use them to fund medical research, with a goal of funding $1 million in research and finding a cure for Juvenile Arthritis within the next 5-7 years.
As someone new to “major gifts” and philanthropy, I quickly learned that some forms of giving were more beneficial than others. While most organizations wanted you to start a fund (which we did), the impact from that tended to be more long-term and less immediate. Then I met someone passionate, knowledgeable, and successful in her field who showed me a different and better approach (here’s a post that describes that in more detail).
I no longer wanted to blindly give money and hope that it was used quickly and properly. Rather, I wanted to treat these donations like investments in a near-term cure. In order to be successful, I needed to understand research from both medical and scientific perspectives in these areas. That began a new journey of research and independent learning in completely new areas.
There was a lot going on in the fields of Genetics and Genomics at the time (here’s a good explanation of the difference between the two). My interest and efforts in this area led to a position on the Medical and Scientific Advisory Committee of the Arthritis Foundation. Aside from me, the members were talented and successful physicians who were also involved in medical research. We met quarterly, and I did ask questions and make suggestions that made a difference. But, unlike everyone else on the committee, I needed to study and prepare for 40+ hours before each call to ensure that I had enough of an understanding to add value and not be a distraction.
A few years later we did work for a Nanotechnology company (more info here for those interested). The Chief Scientist wasn’t that interested in explaining what they did until I described some of our research projects on gene expression. He then went into great detail about what they were doing and how he believed it would change what we do in the future. I saw that potential and agreed, but I also started thinking about how nanotechnology could be leveraged in medicine.
I was listening to the “TED Radio Hour” while driving today and heard a segment about entrepreneur Richard Resnick. It was exciting because it got me thinking about this topic again, something I hadn’t thought much about for the past few years (the last time was contemplating how new analytics products could be useful in this space).
There are efforts going on today with custom, personalized medicines that target only specific genes for a very specific outcome. The genetic modifications being performed on plants today will likely be performed on humans in the near future (I would guess within 10-15 years). The body is an incredibly adaptive organism, so it will be very challenging to implement anything that is consistently safe and effective long-term. But, that day will come.
It’s not a huge leap to go from genetically modified “treatment cells” to true nanotechnology (and not just extremely small particles). Just think: machines that can be designed to work independently within us, do what they are programmed to do, and, more importantly, identify and understand adaptations as they occur (i.e., artificial intelligence) and alter their approach and treatment plan accordingly. To me, this is extremely exciting. It’s not that I want to live to be 100+ years old, because I don’t. But being able to do things that have a positive impact on the quality of life of children and their families is a worthy goal from my perspective.
My advice is to always continue learning, keep an open mind, and see what you can personally do to make a difference. You will never know unless you try.
Note: Updated to fix and remove dead links.
Getting Started with Big Data
Being in Sales, I have the opportunity to speak with a lot of customers and prospects about many things. Most are interested in Cloud Computing and Big Data, but they often don’t fully understand how they will leverage the technology to maximize the benefit.
Here is a simple three-step process that I use:
1. For Big Data I explain that there is no single correct definition. Because of this I recommend that companies focus on what they need rather than on what to call it. Results are more important than definitions for these purposes.
2. Relate the technology to something people are likely already familiar with, extending those concepts. For example: Cloud computing is similar to virtualization and has many of the same benefits; Big Data is similar to data warehousing. This helps make new concepts more tangible.
3. Provide a high-level explanation of how “new and old” are different, and why new is better, using specific examples they can relate to. For example: Cloud computing often occurs in an external data center, possibly one whose location you don’t even know, so security can be even more complex than with in-house systems and applications. It is possible to have both Public and Private Clouds, and a public cloud from a major vendor may be more secure and easier to implement than a similar system using your own hardware.
Big Data is a little bit like my first house. I was newly married, anticipated having children, and also anticipated moving into a larger house in the future. My wife and I started buying things that fit our vision of the future and storing them in our basement. We were planning for a future that was not 100% known.
But our vision changed over time, and we did not know exactly what we needed until the very end. After 7 years our basement was very full, and it was difficult to find things. When we moved to a bigger house we did have a lot of what we needed. But we also had many things that we no longer wanted or needed, and there were a few things we wished we had purchased earlier. We did our best, and most of what we did was beneficial, but those purchases were speculative and in the end there was some waste.
How many of you would have thought that Social Media Sentiment Analysis would be important 5 years ago? How many would have thought that hashtag usage would have become so pervasive in all forms of media? How many understood the importance of location information (and even the time stamp for that location)? My guess is that it would be less than 50% of all companies.
This ambiguity is both the good and the bad of big data. In the old data warehouse days you knew what was important because it was your data about your business, systems, and customers. While IT may have seemed tough before, it can be much more challenging now. But the payoff can also be much larger, so it is worth the effort. Many times you don’t know what you don’t know, and you just need to accept that.
Now we care about unstructured data (website information, blog posts, press releases, tweets, etc.), streaming data (stock ticker data is a common example), sensor data (temperature, altitude, humidity, location, lateral and vertical forces), temporal data, and more. Data arrives from multiple sources and will likely have multiple time frame references (e.g., constant streaming versus updates with varying granularity), often in unknown or inconsistent formats.
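To make that multi-source, multi-format challenge concrete, here is a minimal Python sketch. The source names and field names are invented for illustration; every real feed would need its own mapping into whatever common record shape you choose.

```python
from datetime import datetime, timezone

def normalize(record: dict, source: str) -> dict:
    """Map one raw record from a known source into a common shape.

    The sources and fields here are hypothetical examples, not a
    real schema; the point is that each feed needs its own mapping.
    """
    if source == "tweet":
        # Social feeds often provide ISO-8601 timestamp strings.
        return {
            "source": source,
            "timestamp": datetime.fromisoformat(record["created_at"]),
            "payload": {"text": record["text"]},
        }
    if source == "sensor":
        # Sensors often report epoch seconds rather than ISO strings.
        return {
            "source": source,
            "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc),
            "payload": {"temperature_c": record["temp"]},
        }
    raise ValueError(f"Unknown source: {source}")

# Two records, two formats, one common shape after normalization.
print(normalize({"created_at": "2014-05-01T12:00:00+00:00",
                 "text": "Loving the new product! #happy"}, "tweet"))
print(normalize({"ts": 1398945600, "temp": 21.5}, "sensor"))
```

The specific fields don’t matter; what matters is that the common shape will keep evolving as new and unanticipated sources appear.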
Robust and flexible data integration, data protection, and data privacy will all become far more important in the near future! This is just the beginning for Big Data.
My perspective on Big Data
Ever since I worked on redesigning a risk management system at an insurance company (1994-1995), I have been impressed by how much better decisions can be made with more data, assuming it is the right data. The concept of “What is the right data?” has intrigued me for years, as what seems like common sense today could have been unknown 5-10 years ago and could be completely passé 5-10 years from now. Context becomes very important because of the variability and relevance of data over time.
This is what makes Big Data interesting. There really is no right or wrong answer or definition. Having a framework to define, categorize, and use that data is important. And at some point being able to refer to the data in-context will be very important as well. Just think about how challenging it could be to compare scenarios or events from 5 years ago with those of today. It’s likely not an apples-to-apples comparison but could certainly be done. The concept of maximizing the value of data is pretty cool stuff.
The way I think of Big Data is similar to a water tributary system. Water enters the system in many ways: rain from the clouds, sprinklers fed by private and public supplies, runoff, overflow, etc. It also has many interesting dimensions, such as quality/purity (not necessarily the same, due to different aspects of need), velocity, depth, capacity, and so forth. Not all water gets into the tributary system (e.g., some is absorbed into the groundwater tables, and some evaporates), just as some data loss should be anticipated.
If you think in terms of streams, ponds, rivers, lakes, reservoirs, deltas, etc. there are many relevant analogies that can be made. And just like the course of a river may change over time, data in our “big data” water tributary system could also change over time.
Another part of my thinking is based on an experience I had about a decade ago (the 2002-2003 timeframe) working on a project for a Nanotech company. In their labs, they were testing various things. There were particles, embedded in shingles and paint, that changed reflectivity based on temperature. There were very small batteries that could be recharged tens of thousands of times, were light, and had more capacity than a common 12-volt car battery.
And, there was a section where they were doing “biometric testing” for the military. I have since read articles about things like smart fabrics that could monitor the health of a soldier and do things like apply basic first aid and notify others once a problem was detected. This company felt that by 2020 advanced nanotechnology would be widely used by the military, and by 2025 it would be in wide commercial use. Is that still a possibility? Who knows…
Much of what you read today is about the exponential growth of data. I agree with that, but as stated earlier, and this is important, I believe that the nature and sources of that data will change significantly. For example, nanoparticles in engine oil will provide information about temperature, engine speed and load, and even things like rapid changes in movement (fast takeoffs or stops, quick turns). Nanoparticles in the paint will provide weather conditions. Nanoparticles in the seat upholstery will provide information about occupants (number, size, weight). Sort of like the “sensor web,” from the original Kevin Delin perspective. A lot of “Information of Things” data will be generated, but then what?
I believe that time will become an important aspect of every piece of data, and that location (X, Y, and Z coordinates) will be just as important. But, not every sensor will collect location (spatial data). I do believe there will be multiple data aggregators in common use at common points (your car, your house, your watch). Those aggregators will package the available data in something akin to an XML object, which allows flexibility. From my perspective, this is where things become very interesting relative to commercial use and data privacy.
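As an illustration only, here is a rough Python sketch of that aggregator idea. The element names, attributes, and device identifiers are all invented for this example, not any actual standard:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def package_readings(device: str, readings: list, location=None) -> str:
    """Bundle sensor readings into a small XML envelope.

    Every reading carries a timestamp; a location (x, y, z) element is
    attached only when the aggregator actually knows where it is.
    """
    root = ET.Element("observations", device=device)
    if location is not None:
        x, y, z = location
        ET.SubElement(root, "location", x=str(x), y=str(y), z=str(z))
    for name, value in readings:
        obs = ET.SubElement(root, "reading", name=name)
        obs.set("time", datetime.now(timezone.utc).isoformat())
        obs.text = str(value)
    return ET.tostring(root, encoding="unicode")

# A car-as-aggregator example: oil temperature plus seat occupancy.
print(package_readings("car-42",
                       [("oil_temp_c", 104.5), ("occupants", 2)],
                       location=(47.61, -122.33, 56.0)))
```

An envelope like this keeps the packaging flexible: readings always carry a timestamp, while location is included only when known, which matches the point above that not every sensor will collect spatial data.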
Currently, companies like Google make a lot of money from aggregating data from multiple sources, correlating it to a variety of attributes, and then selling knowledge derived from that plethora of data. I believe that there will be opportunities for individuals to use “data exchanges” to manage, sell, and directly benefit from their own data. The more interesting their data, the more value it has and the more benefit it provides to the person selling it. This could have a huge economic impact, and that would foster both the use and expansion of various commercial ecosystems required to manage the commercial and privacy aspects of this technology.
The next logical step in this vision is “smart everything.” For example, you could buy a shirt that is just a shirt. But, for an extra cost, you could turn on medical monitoring or refractive heating/cooling. And, if you felt there was a market for extra dimensions of data that could benefit you financially, then you could enable those sensors as well. Just think of the potential impact that technology would have on commerce in this scenario.
This is what I personally believe will happen within the next decade or so. It won’t be the only type or use of big data; rather, there will be many valid types and uses of data, some complementary and some completely discrete. It has the potential to become a confusing mess. But people will find ways to ingest, categorize, and correlate data to create value with it, today or in the future.
Utilizing data will become an increasing competitive advantage for the people and companies who know how to do something interesting and useful with it. Who knows what will be viewed as valuable data 5-10 years from now, but it will likely be different from what we view as valuable today.
So, what are your thoughts? Can we predict the future based on the past? Or, is it simply enough to create platforms that are powerful enough, flexible enough, and extensible enough to change our understanding as our perspective of what is important changes? Either way it will be fun!