
Lessons Learned from Selling Kubernetes


[Image: a man walking down a path on a foggy, moonlit night, with puzzle pieces floating in front of him, representing the challenges of business problems. Created with DALL-E via ChatGPT-4.]

Cloud-native architecture, containerization, microservices, and Kubernetes have become very popular over the past few years. They are as complex as they are powerful, and for a large, complex organization, these technologies can be a game changer. Kubernetes itself is only a partial solution: the foundation for something extraordinary. It can take 20-25 additional products to handle all aspects of the computing environment (e.g., ingress, service mesh, storage, networking, security, observability, continuous delivery, policy management, and more).

Consider the case of a major Financial Services company, one of my clients. They operated with 200 Development teams, each comprising 5-10 members, who were frequently tasked with deploying new applications and application changes. Prior to embracing Kubernetes, their approach involved deploying massive monolithic applications, with changes occurring only 2-3 times per year. However, with the introduction of Kubernetes, they were able to transition to a daily deployment model, maintaining control and swiftly rolling back changes if necessary. This shift in their operations not only allowed them to innovate at a faster pace but also enabled them to respond to opportunities and address needs more promptly.

Most platforms rely on Ansible and Terraform for playbooks, provisioning, and configuration management. Those configurations can grow long and complex over time and are prone to errors. For more complicated setups, such as multi-cloud and hybrid environments, the complexity is further amplified. “Configuration drift,” meaning runtime configurations that differ, for various reasons, from what was declared, leads to problems such as increased costs from resource misconfiguration, potential security exposure from incorrectly applied or missing policies, and issues with identity management.
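To make configuration drift concrete, here is a minimal, tool-agnostic sketch in Python of what drift detection amounts to: diffing the declared desired state against the observed runtime state. The field names (`replicas`, `network_policy`, `tls`) are invented for illustration and do not correspond to any particular tool.

```python
# Minimal sketch (not tied to any real tool) of configuration drift
# detection: compare a declared desired state with the observed
# runtime state and report every mismatch or missing setting.

def find_drift(desired: dict, observed: dict) -> dict:
    """Return the keys whose observed values differ from, or are
    missing relative to, the desired configuration."""
    drift = {}
    for key, want in desired.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"desired": want, "observed": have}
    return drift

desired = {"replicas": 3, "network_policy": "deny-all", "tls": True}
observed = {"replicas": 5, "tls": True}  # replicas changed; policy missing

print(find_drift(desired, observed))
```

In a real environment the "observed" side would come from querying the platform's APIs; the point is simply that drift is only detectable when a single declared source of truth exists to diff against.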

The surprising thing was that when prospects identified those problems, they would look to new platforms built on the same tools to solve them. Sometimes things would temporarily improve (after much time and expense on a migration), but then fall back into disarray because the underlying process issues were never addressed.

Our platform used a newer technology called Cluster API (CAPI). It provided a central, declarative configuration repository, making it quick and easy to create new clusters. More importantly, it performed regular configuration checks and automatically reconciled incorrect or missing policies, yielding an immutable, self-healing Kubernetes infrastructure. It simplified overall cluster management and standardized infrastructure implementation.
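As a rough illustration of the declarative, self-healing pattern described above (not Cluster API's actual interface, which operates on Kubernetes custom resources), a single reconcile step can be sketched in Python: compare the desired state against the actual state and patch any drifted values back to what was declared.

```python
# Illustrative sketch of one declarative reconcile step, in the spirit
# of Cluster API's self-healing behavior. The state shapes are
# invented; real CAPI controllers manage Kubernetes custom resources.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actual state patched so that every declared key
    matches its desired value; keys not under management are kept."""
    patched = dict(actual)
    for key, want in desired.items():
        if patched.get(key) != want:
            patched[key] = want  # self-heal: restore the declared value
    return patched

desired = {"k8s_version": "1.27", "pod_security": "restricted"}
actual = {"k8s_version": "1.27", "pod_security": "privileged", "note": "x"}

healed = reconcile(desired, actual)
# The drifted "pod_security" value is restored; the unmanaged
# "note" key is left untouched.
```

Running this step on a schedule, rather than once at provisioning time, is what turns a static configuration into the immutable, self-correcting infrastructure described above.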

All great stuff: who would not want that? The technology was new but proven; it was also different, and different scared some people. Several recurring themes came up:

  1. The Platform and DevOps teams had a backlog of work due to existing problems, so there was more fear about falling further behind than confidence in a better alternative.
  2. Teams focused on their existing investment in a platform, or on the sunk costs of long-running attempts to solve their problems. A new platform often paid for itself in 3-4 months, but that was hard to believe given their own experiences on an inferior platform.
  3. Teams would look at outsourcing the problem to a managed service provider. They could not explain how the problems would specifically be resolved, yet did not seem concerned about that lack of clarity.
  4. There was little consistency in the Kubernetes versions in use, the add-ons and their versions, or the one-off changes that were never intended to become permanent. Reconciling those issues or migrating to new, clean clusters both took time and effort, which became an excuse to maintain the status quo.
  5. Unplanned outages were common and usually expensive. Using the cost of those outages as justification for something new was typically a last resort, as people did not like acknowledging problems that put a spotlight on themselves.
  6. Architects had a curiosity about new and different things, but often lacked the gravitas within business leadership to effect change. They were usually unwilling or unable to explain how real changes happen within their company, or introduce you to the actual decision-makers and budget holders.

Focusing on outcomes and working with the Executives most affected by them tended to be the best path forward. Those companies and teams were rewarded with a platform that simplified fleet management, improved observability, and helped them avoid the risky, expensive problems that had plagued them in the past. And working with satisfied customers who appreciated our efforts and became loyal partners made selling this platform that much more rewarding.

Blockchain, Data Governance, and Smart Contracts in a Post-COVID-19 World


The last few months have been very disruptive to nearly everyone across the globe. There are business challenges galore, such as managing large remote workforces (many of whom are new to working remotely) and managing risk while attempting to conduct “business as usual.” Unfortunately, most businesses’ systems, processes, and internal controls were not designed for this “new normal.”

While there have been many predictions around Blockchain over the past few years, it is still not widely adopted. We are beginning to see an uptick in adoption within Supply Chain Management systems, driven by the need for traceability of items, especially food and drugs. However, large-scale adoption has remained elusive to date.

[Image: a globe with a network of connected dots in the space above it.]

I believe we will soon see large shifts in mindset, investment, and effort toward modern digital technology, driven by Data Governance and Risk Management. These technologies will become easier to use via new platforms and integration tools, which will speed adoption by SMBs and other non-enterprise organizations. That, in turn, will increase the need for DevOps, Monitoring, and Automation solutions as a way to maintain control of a more agile environment.

Here are a few predictions:

  1. New wearable technology supporting Medical IoT will be developed to help provide an early warning system for disease and future pandemics. That will fuel a number of innovations in various industries, including Biotech and Pharma.
    • Blockchain can provide data privacy, ownership, and provenance to ensure the data’s veracity.
    • New legislation will be created to protect medical providers and other users of that data from being liable for missing information or trends that could have saved lives or avoided some other negative outcome.
    • In the meantime, Hospitals, Insurance Providers, and others will do everything possible to mitigate the risk of using Medical IoT data, which could include Smart Contracts to ensure compliance (which assumes that a benefit is provided to the data providers).
    • Platforms may be created to offer individuals control over their own data, how it is used and by whom, ownership of that data, and payment for the use of that data. This is something I wrote about in 2013.
  2. Data Governance will be taken more seriously by every business. Today, companies talk about Data Privacy, Data Security, or Data Consistency, but few have a strategic, end-to-end, systematic approach to managing and protecting their data and company.
    • Comprehensive Data Governance will become a driving and gating force as organizations modernize and grow. Even before the pandemic, there were growing needs due to new data privacy laws and concerns around areas such as the data used for Machine Learning.
    • In a business environment where more systems are distributed, there is an increased risk of data breaches and Cybercrime. That must be addressed as a foundational component of any new system or platform.
    • One or two Data Integration Companies will emerge as undisputed industry leaders due to their capabilities around MDM, Data Provenance and Traceability, and Data Access (an area typically managed by application systems).
    • New standardized APIs akin to HL7 FHIR will be created to support a variety of industries as well as interoperability between systems and industries. Frictionless integration of key systems will become even more important than it is today.
  3. Anything that can be maintained and managed in a secure and flexible distributed digital environment will be implemented to allow companies to quickly pivot and adapt to new challenges and opportunities on a global scale.
    • Smart Contracts and Digital Currency Payment Processing Systems will likely be core components of those systems.
    • This will also foster the growth of next-generation Business Ecosystems and collaborations that will be more dynamic.
    • Ongoing compliance monitoring, internal and external, will likely become a priority (“trust but verify”).

All in all, this is exciting from a business and technology perspective. Most companies must review and adjust their strategies and tactics to embrace these concepts and adapt to the coming New Normal.

The steps we take today will shape what we see and do in the coming decade, so it is important to get this right quickly, knowing that whatever is implemented today will evolve and improve over time.