Cloud Computing
Beyond the Hunt: Fueling Sustainable Enterprise Sales Growth
The software start-up world is obsessed with “Hunters” – salespeople laser-focused on landing new customers. Job boards are overflowing with companies in “growth phases,” desperate for new logos. Understandably so: in the VC-fueled race toward an exit, the recurring revenue from each new customer can translate into a 10-20x multiple on company valuation. But what happens after the deal closes? Is that frenzied growth sustainable?
The Long Game: Profitability and Sustainability
My experience leading a large business unit and running my own company has taught me that sustainable growth requires more than just a flood of new logos. Here’s why:
- Cash flow is King: Lines of credit can vanish overnight (remember Silicon Valley Bank?). Relying solely on external funding is a risky approach.
- Not All New Business is Good Business: Unprofitable accounts, high-churn risks, and customers with limited growth potential can drain your resources faster than you can acquire them. Efficiency matters for scalability.
- Profits Drive Long-Term Growth: Organic, profit-fueled growth, especially when driven by innovation, creates a much more resilient and valuable business.
The Power of the Hybrid Sales Model
To achieve sustainable growth, you either need an extensive and fully integrated organization that seamlessly transitions from the sale to implementation (not many companies are able to accomplish this), or a hybrid sales team that excels at both hunting (50-70%) and farming (30-50%). Landing a new customer is hard work, but retaining and growing them is just as challenging and crucial. Customer acquisition costs are often too high to justify losing a customer only a year or two after the initial win. Treating new customers like assets (instead of commodities) is essential to long-term success.
Mastering Strategic Accounts: Turnarounds and Growth
Large, strategic accounts—Tier 1 companies with strong brand recognition and significant revenue potential—present unique challenges. In my experience, these accounts often fall into two categories: those I initially closed and then grew (ideal, as you have already laid the foundation for success), and those I inherited in a neglected state and had to turn around.
Turnarounds require a unique skill set. They demand as much time as landing a new logo, and sometimes even more effort, blending aspects of both hunting and farming. It takes commitment, skill, patience, and effort to understand an organization and find new ways to deliver value. Relationships and trust take time to develop, and in a turnaround they are already in question (or worse).
Case Study: From Churn Risk to Multi-Million Dollar Expansion
Take, for example, a large Financial Services company I inherited when their Strategic Account Manager left. The account had been neglected for two years and was riddled with problems. We lacked executive relationships and higher-level visibility within the organization. They were evaluating competitors, and we knew nothing about it! Even though they were an existing customer, we were the underdog.
I organized a day-long on-site meeting to understand their pain points. We identified immediate issues, shared our vision for the future, began building trust, and demonstrated our commitment to their success. Afterward, we spent a few hours getting to know the team over dinner. It was an eye-opener for me: they opened up about their needs and how best to work with them.
The result? Within three months, I closed a $500K expansion deal, followed by a $500K consulting engagement and then a $3.25M two-year cloud expansion and upgrade prepay deal. My team and I rebuilt the relationship, solved critical problems (even going beyond our product scope), and provided a clear path forward with our AI-powered platform. I became a trusted advisor who was valued by their executive team. It was a true win-win.
The Bottom Line: Building a Sustainable Sales Engine
Installed-base growth and customer retention are critical for long-term success, especially with larger customers. The approach has to be multi-dimensional. It is a team effort, and the Account Executive is the quarterback. Relationship management, customer success teams, support, and services all play a role. Effective communication is bi-directional: the customer gains insight into what is coming down the road and has input into the direction of the products they rely on. This becomes a true partnership that adds significant value to both organizations.
Does this hybrid approach apply to every company? If you’re IBM or Oracle, your offerings are broad and deep, and your customers are largely locked-in. “Land and expand” is part of their DNA. And if you’re selling end-of-life products, your focus might shift towards maximizing the “long tail” through customer success and services to minimize costs and maximize profitability.
However, for most growth-stage Cloud and SaaS companies, the hybrid sales model is an essential part of their success.
Call to Action:
- Evaluate your sales team structure and compensation plans. Do they incentivize both new customer acquisition and ongoing account growth?
- Invest in training and development for your sales team. Equip them with the skills needed to excel at both hunting and farming. It can be a difficult transition, but it is worth it in the long run.
- Need help building a high-performing hybrid sales team or turning around strategic accounts? Let’s connect!
Blockchain, Data Governance, and Smart Contracts in a Post-COVID-19 World
The last few months have been very disruptive to nearly everyone across the globe. There are business challenges galore, such as managing large remote workforces – many of whom are new to working remotely – and managing risk while attempting to conduct “business as usual.” Unfortunately, most businesses’ systems, processes, and internal controls were not designed for this “new normal.”
While there have been many predictions around Blockchain for the past few years, it is still not widely adopted. We are beginning to see an uptick in adoption within Supply Chain Management Systems, driven by needs such as the traceability of items – especially food and drugs. However, large-scale adoption has been elusive to date.

I believe we will soon see large shifts in mindset, investment, and effort toward modern digital technology, driven by Data Governance and Risk Management. As these technologies become easier to use via new platforms and integration tools, adoption by SMBs and other non-enterprise organizations will accelerate. That, in turn, will increase the need for DevOps, Monitoring, and Automation solutions as a way to maintain control of a more agile environment.
Here are a few predictions:
- New wearable technology supporting Medical IoT will be developed to help provide an early warning system for disease and future pandemics. That will fuel a number of innovations in various industries, including Biotech and Pharma.
- Blockchain can provide data privacy, ownership, and provenance to ensure the data’s veracity (see the sketch after this list).
- New legislation will be created to protect medical providers and other users of that data from liability for failing to act on information or trends that could have saved lives or avoided other negative outcomes.
- In the meantime, Hospitals, Insurance Providers, and others will do everything possible to mitigate the risk of using Medical IoT data, which could include using Smart Contracts to ensure compliance (assuming a benefit is provided to the data providers).
- Platforms may be created to offer individuals control over their own data, how it is used and by whom, ownership of that data, and payment for the use of that data. This is something I wrote about in 2013.
- Data Governance will be taken more seriously by every business. Today, companies talk about Data Privacy, Data Security, or Data Consistency, but few have a strategic, end-to-end, systematic approach to managing and protecting their data and their company.
- Comprehensive Data Governance will become a driving and gating force as organizations modernize and grow. Even before the pandemic, there were growing needs due to new data privacy laws and concerns around areas such as the data used for Machine Learning.
- In a business environment where more systems are distributed, there is an increased risk of data breaches and Cybercrime. That must be addressed as a foundational component of any new system or platform.
- One or two Data Integration Companies will emerge as undisputed industry leaders due to their capabilities around MDM, Data Provenance and Traceability, and Data Access (an area typically managed by application systems).
- New standardized APIs akin to HL7 FHIR will be created to support a variety of industries as well as interoperability between systems and industries. Frictionless integration of key systems will become even more important than it is today.
- Anything that can be maintained and managed in a secure and flexible distributed digital environment will be implemented to allow companies to quickly pivot and adapt to new challenges and opportunities on a global scale.
- Smart Contracts and Digital Currency Payment Processing Systems will likely be core components of those systems.
- This will also foster the growth of next-generation Business Ecosystems and collaborations that will be more dynamic.
- Ongoing compliance monitoring, internal and external, will likely become a priority (“trust but verify”).
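To make the provenance idea from the list above concrete, here is a minimal Python sketch of a hash-chained audit log – the tamper-evidence mechanism at the heart of blockchain-based provenance. The class name, record fields, and sample events are illustrative assumptions, not any particular platform’s API.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """A minimal hash-chained log: each entry commits to the previous one,
    so altering any historical record invalidates every later hash."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis marker

    def append(self, record: dict) -> str:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "record": record,
            "prev_hash": self._prev_hash,
        }
        entry_hash = sha256_hex(json.dumps(entry, sort_keys=True).encode())
        entry["hash"] = entry_hash
        self.entries.append(entry)
        self._prev_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; any tampering breaks the linkage."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("timestamp", "record", "prev_hash")}
            if entry["prev_hash"] != prev:
                return False
            if entry["hash"] != sha256_hex(json.dumps(body, sort_keys=True).encode()):
                return False
            prev = entry["hash"]
        return True

log = ProvenanceLog()
log.append({"item": "vaccine-lot-42", "event": "shipped", "by": "DistributorA"})
log.append({"item": "vaccine-lot-42", "event": "received", "by": "HospitalB"})
print(log.verify())  # True; edit any entry above and this becomes False
```

A real deployment would distribute the log across parties and anchor it with consensus; the hash chain alone only makes tampering detectable, not impossible.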
All in all, this is exciting from a business and technology perspective. Most companies must review and adjust their strategies and tactics to embrace these concepts and adapt to the coming New Normal.
The steps we take today will shape what we see and do in the coming decade, so it is important to get this right quickly, knowing that whatever is implemented today will evolve and improve over time.
The Unsung Hero of Big Data
Earlier this week, I read a blog post regarding the recent Gartner Hype Cycle for Advanced Analytics and Data Science, 2015. The Gartner chart reminded me of the epigram “Plus ça change, plus c’est la même chose” (“the more things change, the more they stay the same”).
To some extent, that is true, as you could consider today’s Big Data a derivative of yesterday’s VLDBs (very large databases) and Data Warehouses. One of the biggest changes, IMO, is the shift away from Star Schemas and from practices implemented for performance reasons, such as aggregating data sets, using derived and encoded values, and using surrogate and foreign keys to establish linkage. Going forward, it may not be possible to maintain that much rigidity and still be as responsive as the competitive landscape demands.
There are many dimensions to big data: a huge sample of data (volume), which becomes your universal set and supports deep analysis as well as temporal and spatial analysis; a variety of data (structured and unstructured) that often does not lend itself to SQL-based analytics; and data streaming in (velocity) from multiple sources – an area that will become even more important in the era of the Internet of Things. These are the “Three V’s” people have talked about for the past five years.
Like many people, my interest in Object Database technology initially waned in the late 1990s. That is, until about four years ago when a project at work led me back in this direction. As I dug into the various products, I learned they were alive and doing well in several niche areas. That finding led to a better understanding of the real value of object databases.
Some products try to be “All Vs to all people,” but generally, what works best is a complementary, integrated set of tools working together as services within a single platform. It makes a lot of sense. So, back to object databases.
One of the things I like most about my job is the business development aspect. One of the product families I’m responsible for is Versant, which includes the Versant Object Database (VOD – high performance, high throughput, high concurrency) and FastObjects (great for embedded applications). I’ve met and worked with brilliant people who have created amazing products based on this technology. Creative people like these are fun to work with, and helping them grow their business is mutually beneficial. Everyone wins.
An area where VOD excels is the near real-time processing of streaming data. The reason it is so adept at this task is the way objects are mapped in the database: they are stored in a way that essentially mirrors reality. Optionality is therefore not a problem – no disjoint queries or missed data, no complex query gyrations to get the correct data set. Things like sparse indexing are no problem for VOD. This means that pattern matching is quick and easy, as is more traditional rule and look-up validation. Polymorphism allows objects, functions, and even data to have multiple forms.
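To see why that matters, here is a rough Python analogy (illustrative only – plain dataclasses, not Versant’s actual API) of how an object model absorbs optionality and polymorphism: each subtype carries only the fields it needs, and one polymorphic check runs across a heterogeneous stream without NULL-padded tables or disjoint queries.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """Base event: only the fields every event shares."""
    source: str
    timestamp: float

@dataclass
class NetworkEvent(Event):
    src_ip: str = ""
    dst_ip: str = ""

@dataclass
class TransactionEvent(Event):
    account: str = ""
    amount: float = 0.0
    merchant: Optional[str] = None  # optional field: simply absent, never NULL-padded

def is_suspicious(event: Event) -> bool:
    """Polymorphic dispatch: each subtype contributes its own (toy) rule."""
    if isinstance(event, NetworkEvent):
        return event.src_ip == event.dst_ip      # toy rule: loopback traffic
    if isinstance(event, TransactionEvent):
        return event.amount > 10_000             # toy rule: unusually large transfer
    return False

stream = [
    NetworkEvent("sensor-1", 1.0, src_ip="10.0.0.5", dst_ip="10.0.0.5"),
    TransactionEvent("gateway-2", 2.0, account="A-77", amount=25_000.0),
]
# One pass over a heterogeneous stream: no wide NULL-filled table,
# no disjoint queries to stitch the subtypes back together.
alerts = [e for e in stream if is_suspicious(e)]
print(len(alerts))  # 2
```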
VOD does more by allowing data to be more, which is ideal for environments where change is the norm: Cyber Security, Fraud Detection, Threat Detection, Logistics, and Heuristic Load Optimization. In each case, performance, accuracy, and adaptability are the keys to success.
The ubiquity of devices generating data today, combined with the desire for people and companies to leverage that data for commercial and non-commercial benefit, is very different than what we saw 10+ years ago. Products like VOD are working their way up that Slope of Enlightenment because there is a need to connect the dots better and faster – especially as the volume and variety of those dots increases. It is not a “one size fits all” solution, but it is often the perfect tool for this type of work.
These are indeed exciting times!
The Future of Smart Interfaces
Recently, I was helping one of my children research a topic for a school paper. She was doing well, but the results she was getting were overly broad. So, I taught her some “Google-Fu,” explaining how you can structure queries in ways that yield better results. She replied that search engines should be smarter than that. I explained that sometimes the problem is that search engines look at your past searches and customize results as an attempt to appear smarter or to motivate someone to do or believe something.
Unfortunately, those results can be skewed and potentially lead someone in the wrong direction. It was a good reminder that getting the best results from search engines often requires a bit of skill and query planning, as well as occasional third-party validation.
Then the other day I saw a commercial from Motel 6 (“Gas Station Trouble”) in which a man has trouble getting good results from his smartphone. It reminded me of watching someone speak to his phone and grow increasingly frustrated with the responses. His questions went something like this:
“Siri, I want to take my wife to dinner tonight, someplace that is not too far away, and not too late. And she likes to have a view while eating so please look for something with a nice view. Oh, and we don’t want Italian food because we just had that last night.”
Just as amazing as the question being asked was watching him ask it over and over again in the exact same way, each time becoming even more frustrated. I asked myself, “Are smartphones making us dumber?” Instead of contemplating that question I began to think about what future smart interfaces would or could be like.
I grew up watching Sci-Fi computer interfaces like “Computer” on Star Trek (1966), “HAL” in 2001: A Space Odyssey (1968), “KITT” from Knight Rider (1982), and “Samantha” from Her (2013). These interfaces had a few things in common:
- They responded to verbal commands.
- They were interactive – not just providing answers, but also asking qualifying questions and allowing for interrupts to drill-down or enhance the search (e.g., with pictures or questions that resembled verbal Venn diagrams).
- They often suggested alternative queries based on intuition. That would have been helpful for the gentleman trying to find a restaurant.
Despite 50 years of science fiction examples, we are still a long way from realizing the goal of a truly intelligent interface. Like many new technologies, these interfaces were envisioned by science fiction writers long before they appeared in the real world.
There seems to be a spectrum of common beliefs about modern interfaces. On one end, some products make visualization easy, facilitating understanding, refinement, and drill-down of data sets. Tableau is an excellent example of this type of easy-to-use interface. At the other end of the spectrum, the emphasis is on back-end systems – robust computer systems that digest huge volumes of data and return the results to complex queries within seconds. Several other vendors offer powerful analytics platforms. In reality, you really need a strong front-end and back-end if you want to achieve the full potential of either.
But, there is so much more potential…
I predict that within the next 3 – 5 years, we will see business and consumer interface examples (powered by AI and Natural Language Processing, or NLP) that are closer to the verbal interfaces from those familiar Sci-Fi shows (albeit with limited capabilities and no flashing lights).
Within the next 10 years, I believe we will have computer interfaces that intuit our needs and facilitate the generation of correct answers quickly and easily. While this is unlikely to be at the level of “The world’s first intelligent Operating System” envisioned in the movie “Her,” and probably won’t even be able to read lips like “HAL,” it should be much more like HAL and KITT than like Siri (from Apple) or Cortana (from Microsoft).
Siri was groundbreaking consumer technology when it was introduced. Cortana seems to have taken a small leap ahead. While I have not mentioned Google Now, it is somewhat of a latecomer to this consumer smart interface party, and in my opinion, it is behind both Siri and Cortana.
So, what will this future smart interface do? It will need to be very powerful, harnessing a natural language interface on the front-end with an extremely flexible and robust analytics interface on the back-end. The language interface will need to take a standard question (in multiple languages and dialects) – just as if you were asking a person, deconstruct it using Natural Language Processing, and develop the proper query based on the available data. That is important, but it only gets you so far.
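As a toy illustration of that deconstruction step, here is a deliberately naive sketch – hand-rolled keyword and pattern matching, not a real NLP library – that reduces the restaurant request from earlier to a structured query a back-end could execute. All field names and thresholds are invented for the example.

```python
import re

def parse_request(utterance: str) -> dict:
    """A naive 'NLP' pass: extract constraints from free text and emit
    a structured query that a search back-end could execute."""
    text = utterance.lower()
    query = {"category": "restaurant", "filters": [], "exclusions": []}

    if "not too far" in text or "nearby" in text:
        query["filters"].append(("distance_miles", "<=", 10))
    if "not too late" in text or "tonight" in text:
        query["filters"].append(("open_until", ">=", "21:00"))
    if "view" in text:
        query["filters"].append(("has_view", "==", True))

    # "we don't want Italian food" -> exclude that cuisine
    match = re.search(r"don't want (\w+) food", text)
    if match:
        query["exclusions"].append(("cuisine", match.group(1)))
    return query

q = parse_request(
    "I want to take my wife to dinner tonight, someplace that is not too far "
    "away, and not too late. She likes a view. We don't want Italian food."
)
print(q["exclusions"])  # [('cuisine', 'italian')]
```

A real system replaces the keyword rules with trained language models, but the shape of the problem – free text in, structured constraints out – is the same.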
Data will come from many sources – things that we consider today with relational, object, graph, and NoSQL databases. There will be structured and unstructured data that must be joined and filtered quickly and accurately. In addition, context will be more important than ever. Pictures and videos could be scanned for faces, location (via geotagging), and, in the case of videos, speech. Relationships will be identified and inferred from a variety of sources, using both data and metadata. Sensors will collect data from almost everything we do and (someday) wear, providing both content and context.
The use of Stylometry will identify outside content likely related to the people involved in the query, providing further context about interests, activities, and even biases. This is how future interfaces will truly understand (not just interpret), intuit (determine what you really want to know), and then present results that may be far more accurate than we are used to today. Because the interface is interactive, it will make organizing and analyzing subsets of data quick and easy.
So, where do I think that this technology will originate? I believe that it will be adapted from video game technology. Video games have consistently pushed the envelope over the years, helping drive the need for higher bandwidth I/O capabilities in devices and networks, better and faster graphics capabilities, and larger and faster storage (which ultimately led to flash memory and even Hadoop). Animation has become very lifelike, and games are becoming more responsive to audio commands. It is not a stretch of the imagination to believe that this is where the next generation of smart interfaces will be found (instead of from the evolution of current smart interfaces).
Someday, it may no longer be possible to “tweak” results through the use or omission of keywords, quotation marks, and flags. Additionally, it may no longer be necessary to understand special query languages (SQL, NoSQL, SPARQL, etc.) and syntax. We won’t have to worry as much about incorrect joins, spurious correlations, and biased result sets. Instead, we will be given the answers we need – even if we don’t realize that this was what we needed in the first place – which will likely be driven by AI. At that point, computer systems may appear nearly omniscient.
When this happens, parents will no longer need to teach their children “Google-Fu.” Those are going to be interesting times indeed.
Big Data – The Genie is out of the Bottle!
Back in early 2011, other members of the Executive team at Ingres and I took a bet on the future of our company. We knew we needed to do something big and bold, so we decided to build what we thought the standard data platform would be in 5-7 years. A small minority of the team did not believe this was possible and left, while the rest focused on making it happen. We made three strategic acquisitions to fill gaps in our Big Data platform. Today (as Actian), we have nearly achieved our goal. It was a leap of faith back then, but our vision turned out to be spot-on, and our gamble is paying off.
My mailbox is filled daily with stories, seminars, white papers, etc., about Big Data. While it feels like this is becoming more mainstream, reading and hearing the various comments on the subject is interesting. They range from “It’s not real” and “It’s irrelevant” to “It can be transformational for your business” to “Without big data, there would be no <insert company name here>.”
What I continue to find amazing is hearing comments about big data being optional. It’s not – that genie has already been let out of the bottle. There are incredible opportunities for those companies that understand and embrace the potential. I like to tell people that big data can be their unfair advantage in business. Is that really the case? Let’s explore that assertion and find out.
We live in the age of the “Internet of Things.” Data about nearly everything is everywhere, and the tools to correlate that data – revealing activities, relationships, likes and dislikes, and more – are readily available. With smart devices that enable mobile computing, we gain the extra dimension of location. And with new technologies such as Graph Databases (queried with languages like SPARQL), graphical interfaces for analyzing that data (such as Sigma), and identification techniques such as Stylometry, it is getting easier to identify and correlate that data. Someday, this will feed into artificial intelligence, becoming a superpower for those who know how to leverage it effectively. A small example of graph-based correlation follows.
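As a hedged illustration of that kind of correlation, here is a small sketch using the open-source rdflib library for Python (my choice for the example; the entities and predicates are invented). It builds a tiny graph and uses SPARQL to find people connected through a shared purchase:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# Invented facts: who bought what, who follows whom.
g.add((EX.alice, EX.purchased, EX.gadget))
g.add((EX.bob, EX.purchased, EX.gadget))
g.add((EX.alice, EX.follows, EX.bob))

# SPARQL query: pairs of distinct people who bought the same item.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?a ?b WHERE {
        ?a ex:purchased ?item .
        ?b ex:purchased ?item .
        FILTER (?a != ?b)
    }
""")
for row in results:
    print(row.a, row.b)  # alice/bob, in both orders
```

Scale the same pattern up to millions of facts drawn from purchases, locations, and social links, and the correlations described below stop looking hypothetical.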
We are generating increasingly larger volumes of data about everything we do and everything going on around us, and tools are evolving to make sense of that data better and faster than ever. Organizations that perform the best analysis, get answers fastest, and act on that insight quickly are more likely to win than organizations that look at a smaller slice of the world or adopt a “wait and see” posture. That seems like a significant advantage in my book. But is it an unfair advantage?
First, let’s remember that big data is just another tool. Like most tools, it has the potential for misuse and abuse. Whether a particular application is viewed as “good” or “bad” is dependent on the goals and perspective of the entity using the tool (which may be the polar opposite view of the groups of people targeted by those people or organizations). So, I will not attempt to judge the various use cases but rather present a few use cases and let you decide.
Scenario 1 – Sales Organization: What if you could understand what a prospect company tells you it needs – and had a way to validate and refine that understanding? That’s half the battle in sales (budget, integration, and support/politics are other key hurdles). Imagine data that helped you understand not only the actions of that organization (customers and industries, sales and purchases, gains and losses, etc.) but also the stakeholders’ and decision-makers’ goals, interests, and biases. This could provide a holistic view of the environment and allow you to make a highly targeted offering, with messaging tailored to each individual. That is possible, and I’ll explain how shortly.
Scenario 2 – Hiring Organization: Many questions cannot be asked by a hiring manager. While I’m not an attorney, I would bet that State and Federal laws have not kept pace with technology. And while those laws vary state by state, there are likely loopholes allowing public records to be used. Moreover, implied data that is not officially considered could color the judgment of a hiring manager or organization. For instance, if you wanted to “get a feeling” that a candidate might fit in with the team or the culture of the organization or have interests and views that are aligned with or contrary to your own, you could look for personal internet activity that would provide a more accurate picture of that person’s interests.
Scenario 3 – Teacher / Professor: There are already sites in use to search for plagiarism in written documents, but what if you had a way to make an accurate determination about whether an original work was created by your student? There are people who, for a price, will do the work and write a paper for a student. So, what if you could not only determine that the paper was not written by your student but also determine who the likely author was?
Do some of these things seem impossible or at least implausible? Personally, I don’t believe so. Let’s start with the typical data that our credit card companies, banks, search engines, and social network sites already have related to us. Add to that the identified information available for purchase from marketing companies and various government agencies. That alone can provide a pretty comprehensive view of us. But there is so much more that’s available.
Consider the potential of gathering information from intelligent devices accessible through the Internet, your alarm and video monitoring system, etc. These are intended to be private data sources, but one thing history has taught us is that anything accessible is subject to unauthorized access and use (just think about the numerous recent credit card hacking incidents).
Even de-identified data (medical / health / prescription / insurance claim data is one major example), which receives much less protection and can often be purchased, could be correlated with a reasonably high degree of confidence to reveal other “private” aspects of your life. The key is to look for connections (websites, IP addresses, locations, businesses, people) and things that are logically related (such as illnesses, treatments, and prescriptions), and then accurately identify the individual. Stylometry, for example, looks at sentence complexity, function words, co-location of words, misspellings and misuse of words, and so on, and will likely someday take idea density into account. It is nearly impossible to remain anonymous in the Age of Big Data.
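To make the stylometry point concrete, here is a simplistic Python sketch of how a few of those features might be computed. Real stylometric analysis uses hundreds of features and proper statistical models; the function-word list and formulas below are assumptions for demonstration only.

```python
import re
from collections import Counter

# A handful of English function words; real stylometry tracks hundreds.
FUNCTION_WORDS = {"the", "of", "and", "to", "a", "in", "that", "is", "was", "it"}

def stylometric_features(text: str) -> dict:
    """Compute crude style markers: sentence length, vocabulary richness,
    and function-word frequencies - the raw signals stylometry compares."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {
        "avg_sentence_length": total / max(len(sentences), 1),
        "vocabulary_richness": len(counts) / total,  # type-token ratio
        "function_word_freqs": {w: counts[w] / total for w in sorted(FUNCTION_WORDS)},
    }

sample_a = "The data was clear. It was, in the end, the only answer that mattered."
sample_b = "Evidence converges unambiguously; alternative hypotheses collapse."
# Comparing profiles like these across documents is the basis for authorship
# attribution, e.g., flagging a paper whose style diverges from a student's
# earlier writing.
print(stylometric_features(sample_a))
print(stylometric_features(sample_b))
```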
There has been a paradigm shift regarding the practical application of data analysis, and the companies that understand this and embrace it will likely perform better than those that don’t. There are new ethical considerations that arise from this technology, and likely new laws and regulations as well. But for now, the race is on!