When software companies step out on the acquisition trail, looking for companies with products in the same category as their own offerings, they frequently explain that their goal is to “increase market share”. Three major drivers influence such a strategy:
- the simple ability to enjoy economies of scale by spreading expenses and effort over a broader revenue stream,
- the ability to claim market leadership status owing to their increased size and ownership of multiple related products, with the goal of receiving plaudits and recommendations from analysts, and thus
- by virtue of such a market leadership position, the ability to attract the more conservative mainstream buyers who will provide increasingly profitable revenue streams.
Revenues as Market Share?
However, this notion of “market share” is far more complex than it at first appears. It contains dangers that can wreak havoc on all parties involved: vendors, customers, investors. The apparent ease with which the currency of increased revenue is converted to that of market share conceals a deep misunderstanding of what constitutes a market, and a neglect of some time-proven rules about market maturity and the manner in which organizations with different profiles make buying decisions. Because of this misconception, serious strategic misjudgments often occur. Vendors founder on the rocks of misguided acquisitions that do not deliver. Customers make inappropriate buying decisions, as their expectations of product maturity and vendor stability are not met. Investors make untimely moves on companies that are past their sell-by date.
Markets and Non-Markets
Analysis of this phenomenon must start with an understanding of what markets are. A good definition of a market in the high-tech industry is that from Geoffrey Moore, of Crossing the Chasm fame. It runs as follows:
“A set of actual or potential customers
For a given set of products or services
Who have a common set of needs or wants, and
Who reference each other when making a decision”.
This definition is useful because it concentrates on the commonality of interests to be served by products or services that can be seen to compete with each other. And it draws a line in the sand as to what markets are not – thus highlighting why “shares” of such flimsy entities are themselves just as flimsy. For instance, markets are not:
- Re-statements of platforms or generic technologies (e.g. “the AS/400 market”, “the UNIX market”). Yes, there may be a market for “mid-range servers to be used for hosting packaged applications”, or an after-market for “UNIX administration tools”, but retrospectively counting revenues associated with particular boxes helps to indicate overall spending activity; it does not assist in market definition.
- Aggregations of heterogeneous markets, such as “the data warehouse market” or “the knowledge management market”. Customers do not buy “data warehouses”. They will make multiple different purchasing decisions for (e.g.) relational DBMSs, decision support tools, data extraction products, servers, storage, metadata management, and services. Likewise, adding up such revenues may point to the popularity of certain IT topics, but it is not helpful in classifying markets and positioning vendors. While an individual supplier may develop competency in more than one of these markets, such a phenomenon will draw attention to its overall systems integration expertise more than its product leadership.
- Loose congregations of products with a common description but diverse aims and application targets (e.g. “the object-oriented DBMS market”). The history of object-oriented DBMSs is one of uncertain identity. While a few architectural principles pervade the products labeled as such, the variegated nature of the tasks and buyers that they have tried to address over the past ten years reinforces the fact that the vendors hurt themselves by using a generic appellation that did not describe a true market.
Real Market Behavior…
However, when a market has substance, and obeys the definition (e.g. the Enterprise Resource Planning market for organizations with over $1 billion in revenues), buyers and sellers can sensibly participate. Using such a backdrop, Crossing the Chasm thinking – by applying classic Technology Adoption Life Cycle techniques to high-technology markets – categorizes buyer profiles, and identifies the tactics vendors require to approach these different buyer types. Chasm-thinking classifies different types of customer (innovators, early adopters, the conservative majority, the laggards), with emphasis on their readiness to accept new technology. Early adopters are comfortable experimenting with innovative products, while conservative users prefer to wait until a technology has matured and clear category leaders have emerged. It is for this reason that signals of market maturity, with recognized leaders in place by virtue of their market share, are so important for the contending vendors. When the leader appears (in the case of ERP, SAP), mainstream buyers signal their approval by adopting it. (Note that the markets for applications software have had an irony of their own: in 1990 McCormack & Dodge “merged” with its bitter rival, MSA, to create an applications market gorilla. However, the new entity floundered in the face of the “client/server” wave, which redefined the rules. Mass counted for nothing. And then, of course, “client/server” lost favor…)
… and Real Market Share
Market share is defined in terms of individual product revenues attributable to a given category. For example, a DBMS can be used for transaction processing or data warehousing (and some competitors work in only one domain); a Windows 2000 server can be used as an application platform or a web server. Precision in defining the “needs or wants”, and the users who entertain them, is critical. So the quantified market share reflects the degree of success that a particular vendor has had in the chosen category. (That is market success, incidentally – not always customer deployment success – but where those concepts diverge is another story…) And that is why the lure of adding further category revenues through acquisition is so appealing to the CEO. A vendor that can successfully aspire to making claims about the maturity of a market, and its own leadership role in it, will use that as a critical argument for gaining broader acceptance and approbation by the more numerous, more conservative buyers. However, if it makes inappropriate claims, it will eventually be disillusioned, as the mis-statements will indicate a state of market maturity and leadership that does not exist. The conservative buyers will shy away. The consolidation of CASE tool vendors in the early 1990s is a good example.
Market Measurers and Mis-Measurers
Unfortunately, the opinion-makers for the industry frequently echo – and even encourage – such illusions. Financial analysts tracking public companies can be easily seduced by sheer organizational size: they like to see an active acquisition strategy as an indicator of corporate energy, and they may of course have a pecuniary interest in mergers and acquisitions. Thus, when they analyze market share, they may use any of the three misconceptions above to set the framework for the relative strengths of a favored company. They define a fuzzy market. Then they count revenues. If a company has made acquisitions, the revenues (products and services) of all its companies or divisions are added up in their pie-charts – even if such products have contradictory architectures, are targeted at different vertical segments that do not cross-refer to one another, or do not even come close to addressing the same “market”. Voilà! A gorilla, a “market leader”, has been born, and the executives of the company will start believing their own press releases when they quote these flattering reports. Recent events in the EAI (Enterprise Application Integration) market reflect the danger of believing the Wall Street pie-charts.
The industry analysts can be almost as loose. The quantitative analysts (e.g. Dataquest, IDC) are frequently driven by vendors who want some healthy forecasts to show to their venture capitalists, or to justify projects competing for scarce internal funding. It is much easier to take some vague or aggregated market definition that reflects a hot topic (“XML”, “E-business”) and then project some robust growth percentages than to perform true market segmentation, collect from the vendors what their real revenues are for different categories, and then evaluate future prospects. (Indeed, the vendors may be unwilling – or even unable – to provide such quantitative data.) It is even more difficult to interject wisdom about external variables to make the forecasts more insightful than an over-exuberant projection of past trends. And many markets are difficult to define. They may be very fluid (e.g. the current transition from message-oriented middleware to message brokers to integration brokers to business process management). Clarity may be missing simply because vendors do not know what category they are in, cannot position their products effectively, or do not have the management disciplines to focus on a particular segment of potential buyers.
The qualitative analysts (e.g. Gartner Group, Meta Group) may be similarly vague. Sometimes they will slip from more rigorous market definitions, and translate statements of generic strategic interest from technology consumers (e.g. “business process re-engineering”, “business intelligence”, or even the recent classic – “knowledge management”) into immediate definitions of “markets”. By doing this, they grant a theme or technology a substance and coherence that it may not have earned yet. Analysts may do this out of a misguided but well-intentioned desire to “make a market happen”. They may want to draw attention to themselves by projecting heady growth for individual technologies. But enthusiasm alone cannot force a market to congeal, or take off. The vendors may fail to rise to the occasion and, by drawing false comparisons with other vendors with whom they should not be competing, will confuse buyers. What the analysts should do is stick to their knitting – defining and analyzing markets to help vendors with categorization and positioning, and thus, in the long run, helping buyers as well.
The Challenges of Growth
Why does all this matter? It comes back to the three drivers (Mass, Analyst Approval, and Market Leadership), which – despite all the chaos surrounding markets – remain very real challenges for the young, emerging software company.
Achieving heady growth is vital for most software companies. Venture capitalists want their return; stockholders need to see increased share value. Incentives for management and employees may be tied to stock appreciation. CEOs want to build great companies. Size itself can be a protection against hostile acquisition attempts. Not many companies are happy with niche positions and modest growth. Thus going for market share is an attractive clarion call that gives meaning and measurement to the efforts of the company’s employees. Organic growth is the preferred course, but may appear too passive, and give a sense of being left behind – especially when competitive acquisition strategies receive short-term approbation from analysts and the media. Acquiring domain expertise, or distribution strengths, or complementary technology that helps evolve the category, can be a vital and successful strategy. But growth through acquisition driven by a misguided sense of market share can be very dangerous.
This is especially true with software companies, where technology switching costs are very high, and product substitutability is thus very low. Cisco can quickly replace and integrate telecommunications switches, which carry little visibility or conceptual dependency for buyers. But IBM never successfully integrated CICS and Encina. Compaq acquired fault-tolerant expertise in Tandem, but has not yet integrated the technologies into a winning product-line. Teradata for a long time competed with NCR’s own server products. Integrating technology designed completely independently is a very difficult task, almost always underestimated by software executives. And the acquiring company normally has to assuage the concerns of the customers of the acquired product by maintaining its identity, and investing in it further, which also tends to delay any real technology integration. Complexity abounds: existing cost structures are hard to replace. Product-lines remain fragmented, targeting different audiences (markets), and maybe not even doing that so well because of the corporate identity changes. Digitalk and ParcPlace could not, by virtue of their joining forces, turn the clock back on the declining popularity of Smalltalk. Thus the size of the new company – reassuring in the short-term to many investors and prospects – may not be a reliable guide to its comparative health in the markets in which it plays.
Reaching the Majority
The other aspect concerns buying preferences. The thesis behind Crossing the Chasm is that mainstream buyers are necessarily conservative. Vendors know that the bulk of profitable revenues will derive from the mainstream – if they can overcome those buyers’ cautious tendencies. Such organizations want to buy the market-leading product, as it will be a safe choice. They will have comfort in knowing that many other organizations have made the same choice. They expect to find the product mature and “burned in”. They will believe the supplier has staying power. They will want to find relevant skills on the job market, and find consultants trained in the product they choose. This is a pattern that has repeated itself in multiple technology markets (e.g. document management, enterprise relational DBMS). And if a market fails to cross the chasm – owing to the failures of the participating vendors – the technology will not reach the mainstream. It will be rejected as being too fragmented, too immature, too experimental. Experience shows that conservative buyers will select a product based on the maturity of the category and the product’s relative market share, not based on the total revenues of the vendor, or the total revenues the vendor derives from that category. (Else IBM would be the “leader” in nearly every software market where it has a product.) That is why the mis-stated claims about market share – whether from the vendors or from objective observers – can be so dangerous to the unwary buyer.
Look at Platinum Technologies, which tried to integrate a motley gathering of independent systems management and information access products in the mistaken belief that aggregate revenues constituted “market share”. The task was impossible; the buyers did not come because of Platinum’s size alone, and the result was the disappearance of the company. (Computer Associates, however, which cleans up the messes of others, makes no claims of market share. It succeeds because it has a rigorous but ultimately effective model for stripping damaged goods down to their essentials, and leveraging the customer base.) Other companies have tried to rewrite the rules by re-naming acquired products to make them look as if they had a single lineage. But customers will not be taken in for long. The Platinum story is a much more reliable analogy for software markets than the much quoted Cisco parable arising from the telecommunications world. There are no universal laws about absolute technology growth. A market is only as big as the total revenues of the products that vendors can successfully deliver.
Thus the final lesson is simple. Markets are subtle entities, and need to be treated as such. Product revenues – not total company or category revenues – define relative market share. Acquisitions can play a role in helping create market share, but they will need a good fit, and a careful insertion of technology over time to leverage the potential of the acquired product. Vendors may continue to try to repeal these laws by drawing attention to their overall growth and mass – and even renaming products to veil the conflicts! But the truth will eventually come out.
Antony Percy, March 2000