
Lessons Learned from Selling Kubernetes


[Image: a man walking down a path on a foggy, moonlit night, with puzzle pieces floating in front of him, representing the challenges of business problems. Created with DALL-E via ChatGPT-4.]

Cloud-native, containerization, microservices, and Kubernetes have become very popular over the past few years. They are as complex as they are powerful, and for a large, complex organization, these technologies can be a game changer. Kubernetes itself is a partial solution – the foundation for something extraordinary. It can take 20-25 additional products to handle all aspects of the computing environment (e.g., ingress, service mesh, storage, networking, security, observability, continuous delivery, policy management, and more).

Consider the case of a major Financial Services company, one of my clients. They operated with 200 Development teams, each comprising 5-10 members, who were frequently tasked with deploying new applications and application changes. Before embracing Kubernetes, they deployed massive monolithic applications, with changes occurring only 2-3 times per year. With Kubernetes, they were able to move to a daily deployment model, maintaining control and swiftly rolling back changes when necessary. This shift allowed them to innovate at a faster pace and to respond to opportunities and needs more promptly.

Most platforms rely on Ansible and Terraform for provisioning, configuration management, and related tasks. Those configurations tend to grow long and complex over time and become prone to errors. For more complicated setups, such as multi-cloud and hybrid environments, the complexity is further amplified. “Configuration drift,” meaning runtime configurations that differ from what was declared, leads to problems such as increased costs from misconfigured resources, potential security holes from incorrectly applied or missing policies, and issues with identity management.
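
To make the problem concrete, here is a minimal sketch of drift detection, assuming the declared and observed configurations are available as plain dictionaries. Every setting name below is hypothetical; real tooling (Terraform’s plan/refresh cycle, for instance) performs this comparison against live provider APIs.

```python
# Minimal drift-detection sketch. The declared and observed states are plain
# dictionaries here; all setting names are hypothetical, for illustration only.

def find_drift(desired: dict, observed: dict) -> dict:
    """Return every setting whose runtime value differs from its declared value."""
    drift = {}
    for key, want in desired.items():
        if observed.get(key) != want:
            drift[key] = {"declared": want, "actual": observed.get(key)}
    # Settings that exist at runtime but were never declared are drift too.
    for key in observed.keys() - desired.keys():
        drift[key] = {"declared": None, "actual": observed[key]}
    return drift

# Example: a network policy was dropped and a node count was changed by hand.
desired = {"node_count": 3, "network_policy": "deny-all", "tls": "required"}
observed = {"node_count": 5, "tls": "required", "debug_port": 8080}
print(find_drift(desired, observed))
# {'node_count': {'declared': 3, 'actual': 5},
#  'network_policy': {'declared': 'deny-all', 'actual': None},
#  'debug_port': {'declared': None, 'actual': 8080}}
```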

The surprising thing was that when prospects identified those problems, they would look to new platforms built on the same tools to solve them. Sometimes things would temporarily improve (after much time and expense on a migration), but then fall back into disarray because the underlying process issues had never been addressed.

Our platform used a new technology called Cluster API (or CAPI). It provided a central (declarative) configuration repository, making it quick and easy to create new clusters. More importantly, it would perform regular configuration checks and automatically reconcile incorrect or missing policies. It was an immutable and self-healing Kubernetes infrastructure. It simplified overall cluster management and standardized infrastructure implementation. 
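
To illustrate the idea (not CAPI’s actual implementation, which runs as Kubernetes controllers), here is a minimal Python sketch of the declarative, self-healing reconcile pattern. The in-memory “repository” and “live state” are hypothetical stand-ins for the real configuration store and the running infrastructure.

```python
# Sketch of a declarative reconcile loop. DECLARED stands in for the central
# configuration repository; LIVE stands in for the running infrastructure.
# Both are hypothetical, for illustration only.

DECLARED = {"prod": {"node_count": 3, "network_policy": "deny-all"}}
LIVE = {"prod": {"node_count": 3}}  # drift: someone deleted the policy

def reconcile(cluster: str) -> None:
    """Push the live state back toward the declared state, field by field."""
    for field, want in DECLARED[cluster].items():
        have = LIVE[cluster].get(field)
        if have != want:
            print(f"[{cluster}] reconciling {field}: {have!r} -> {want!r}")
            LIVE[cluster][field] = want  # the self-healing step

for cluster in DECLARED:
    reconcile(cluster)
# Prints: [prod] reconciling network_policy: None -> 'deny-all'
# A real controller runs this continuously, so manual edits never persist.
```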

All great stuff – who would not want that? The technology was new but proven; still, it was different, and different scared some people. Several themes kept recurring:

  1. The Platform and DevOps teams had a backlog of work due to existing problems, so there was more fear about falling further behind than confidence in a better alternative.
  2. Teams fixated on their existing platform investment, or on the sunk costs of years spent attempting to solve their problems. A new platform often paid for itself in 3-4 months, but that was hard to believe given their own experiences on an inferior platform.
  3. Teams would look at outsourcing the problem to a managed service provider, even when no one could explain how the problems would specifically be resolved. That lack of clarity did not seem to concern them.
  4. There was a lack of consistency in the Kubernetes versions used, in the add-ons and their versions, and in one-off changes that were never intended to become permanent. Reconciling those issues or migrating to new, clean clusters both took time and effort, and that became an excuse to maintain the status quo.
  5. Unplanned outages were common and usually expensive. Using the cost of those outages as justification for something new was typically a last resort, as people did not like acknowledging problems that put a spotlight on themselves.
  6. Architects were curious about new and different things, but often lacked the standing with business leadership to effect change. They were usually unwilling or unable to explain how real change happens within their company, or to introduce us to the actual decision-makers and budget holders.

Focusing on outcomes, and working with the executives most affected by them, tended to be the best path forward. Those companies and teams were rewarded with a platform that simplified fleet management, improved observability, and helped them avoid the risky, expensive problems that had plagued them in the past. And working with satisfied customers who appreciated our efforts and became loyal partners made selling this platform that much more rewarding.

Getting Started with Big Data



Being in Sales, I have the opportunity to speak with many customers and prospects. Most are interested in Cloud Computing and Big Data, but they often don’t fully understand how they will leverage the technology to maximize its benefits.

Here is a simple three-step process that I use:

1. For Big Data, I explain that there is no single correct definition. Because of this, I recommend that companies focus on what they need rather than what to call it. Results are more important than definitions for these purposes.

2. Relate the technology to something people already know, extending those familiar concepts. For example: Cloud computing is similar to virtualization and shares many of its benefits; Big Data is similar to data warehousing. This makes new concepts more tangible.

3. Provide a high-level explanation of how “new and old” differ and why new is better, using specific examples the audience can relate to. For example: Cloud computing often runs in an external data center – possibly one whose location you do not even know – so security can be even more complex than for in-house systems and applications. It is possible to have both Public and Private Clouds, and a public cloud from a major vendor may be more secure and easier to implement than a similar system built on your own hardware.

Big Data is a little bit like my first house. I was newly married, anticipated having children, and expected to move into a larger house someday. My wife and I started buying things that fit our vision of the future and storing them in our basement. We were planning for a future that was not 100% known.

But our vision changed over time, and we did not know exactly what we needed until the end. After 7 years, our basement was very full, and finding things was difficult. When we moved to a bigger house, we did have a lot of what we needed. But we also had many things that we no longer wanted or needed, and there were a few things we wished we had purchased earlier. We did our best, and most of what we did was beneficial, but those purchases were speculative, and in the end there was some waste.

How many of you would have thought Social Media Sentiment Analysis would be important 5 years ago? How many would have guessed that hashtag usage would become so pervasive in all forms of media? How many understood the importance of location information (and even the time stamp for that location)? My guess is fewer than half of all companies.

This ambiguity is both a good and bad thing about big data. In the old data warehouse days, you knew what was important because this was your data about your business, systems, and customers.  While IT may have seemed tough in the past, it can be much more challenging now. But the payoff can also be much larger, so it is worth the effort. You often don’t know what you don’t know – and you just need to accept that.

Now we care about unstructured data (website information, blog posts, press releases, tweets, etc.), streaming data (stock ticker data is a common example), sensor data (temperature, altitude, humidity, location, lateral and horizontal forces), temporal data, etc. Data arrives from multiple sources and likely will have multiple time frame references (e.g., constant streaming versus updates with varying granularity), often in unknown or inconsistent formats. Someday soon, data from all sources will be automatically analyzed to identify patterns and correlations and gain other relevant insights.
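
As a small illustration of that integration problem, here is a sketch that maps records from three made-up feeds onto one common shape. The source names, field names, and timestamp formats are all hypothetical; real feeds are far messier.

```python
from datetime import datetime, timezone

def normalize(source: str, record: dict) -> dict:
    """Map a source-specific record onto a common (source, time, value) schema."""
    if source == "tweet":
        # Hypothetical text feed with a formatted timestamp string.
        ts = datetime.strptime(record["created_at"], "%Y-%m-%d %H:%M:%S")
        return {"source": source, "time": ts, "value": record["text"]}
    if source == "sensor":
        # Sensors often report epoch seconds; convert to an absolute UTC time.
        ts = datetime.fromtimestamp(record["epoch"], tz=timezone.utc)
        return {"source": source, "time": ts, "value": record["temp_c"]}
    if source == "ticker":
        # Streaming market data with an ISO-8601 timestamp.
        ts = datetime.fromisoformat(record["ts"])
        return {"source": source, "time": ts, "value": record["price"]}
    raise ValueError(f"unknown source: {source}")

print(normalize("sensor", {"epoch": 1735689600, "temp_c": 21.4}))
print(normalize("ticker", {"ts": "2025-01-01T00:00:05+00:00", "price": 101.25}))
```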

Robust and flexible data integration, data protection, and data privacy will all become far more important in the near future! This is just the beginning for Big Data.

Using Technology for the Greater Good


My company and my family funded a dozen or so medical research projects over several years. I had the pleasure of meeting and working with many brilliant MD/Ph.D. researchers. My goal was to fund $1 million in medical research and find a cure for Juvenile Arthritis. We didn’t reach that goal, but many good things came out of that research.

Something that amazed me was how research worked. Competition for funding is intense, so there was much less collaboration between institutions than I would have expected. At one point, we were funding similar projects at two institutions. The projects went in two very different directions, and it was clear that one would be much more successful than the other. It seemed almost wasteful, and I thought there must be a better, more efficient, and cost-effective way of managing research efforts.

So, in 2006 I had an idea. What if I could create a cloud-based research platform (a very new concept at the time) that would support global collaboration? It would need to support analytical processing, statistical analysis, document management (something fairly new then), and desktop publishing. Publishing research findings is very important in this space, so my idea was to provide a workspace that supported end-to-end research efforts (inception to publication, including auditing and data collection) and fostered collaboration.

This platform would only work if there were a new way to fund the research, one that was easy to use and could reach a large audience. Individuals could make contributions based on areas of interest, specific projects, specific individuals working on projects, or projects in a specific regional area. The idea was a lot like what Crowdtilt is today. This funding mechanism would support non-traditional collaboration and, I hoped, greatly impact the research community and their findings.

Additionally, this platform would support the collection of suggestions and ideas. Good ideas can come from anywhere, especially when you don’t know that something is not supposed to work.

During one funding review meeting at the Children’s Hospital of Philadelphia (CHOP), I made a naïve statement about using cortisone injections to treat TMJ arthritis. I was told why this would not work. A month or so later, I received a call explaining that my suggestion might work, with a request for another in-person meeting and additional funding. Conceptual Expansion at its best! That led to a new research project and positive results (see http://onlinelibrary.wiley.com/doi/10.1002/art.21384/pdf).

You never know where the next good idea might come from, so why not make it easy for people to share those ideas?

By the end of 2007, I had designed an architecture based on SOA (service-oriented architecture) using open-source products that would do most of what I needed. Then, in 2008, Google announced the “Project 10^100” competition. I entered, confident that I would at least get an honorable mention (alas, nothing came of it).

Then, in early 2010, I spent an hour discussing my idea with the CTO of a popular Cloud company. This CTO had a medical background, liked my idea, offered a few suggestions, and even offered to help. It was the perfect opportunity. But I had just started a new position at work, so the project fell by the wayside. That was a shame, and I have only myself to blame. It has bothered me for years.

It’s 2013, and far more tools are available today to make this platform a reality, yet something like it still does not exist. I’m writing this because the idea has merit, and I think there might be others who feel the same way and would like to help make this dream a reality. It’s a chance to leverage technology to make a huge impact on society. And it can create opportunities for people in regions that might otherwise be ignored to contribute to this greater good.

Idealistic? Maybe. Possible? Absolutely!