Here’s a simple question: Why did your company win or lose its last 50 customers? Surprisingly, even the most customer-focused companies often cannot answer this question with confidence. Market intelligence typically comes via the sales force, which may filter, abbreviate, or distort account histories. Furthermore, when a negative result has occurred, such as a failure to purchase or renew, customers often don’t tell sales reps the full or accurate reasons behind their decisions. Finally, account histories are often anecdotal, and not analyzed systematically as a group. As a result, management must draw conclusions about customer behavior based on inputs that are incomplete, inaccurate, or both. Even when companies recognize the limitations of their knowledge, they are typically not in a position on their own to gather better information directly from customers.
To fill this gap, we’ve developed our win-loss program, which takes an independent, objective, and systematic look at a sample of recent customer wins and losses. Through in-depth interviews with customers, we identify the key factors that contributed to the wins or losses. These drivers may include product content and functionality as well as other factors such as technology, customer support, pricing, and sales effectiveness. After conducting even a modest set of interviews, we are able to connect the dots and discern patterns that are often eye-opening. Here are some examples of recent findings:
- A best-selling reference product for healthcare institutions was experiencing an unexpectedly high rate of cancellations. Despite praising it as a great product, ex-customers told us they cancelled it because of low usage. In digging deeper, we found that the underlying cause was a lack of ongoing vendor support to foster successful product rollouts and ongoing usage across the enterprise. The notion that “if you build it they will come” was simply not working to attract internal users.
- A training product was being replaced by a competitor’s inferior product. Although the competitive product was significantly less expensive, our investigation showed that price was not the primary driver. The real reason was that the entire enterprise had standardized on a specific learning management system that was incompatible with our client’s training product, forcing loyal users to abandon it in favor of a less-preferable product.
- An educational product used to certify new professionals was selling well, but then failing to be renewed. Our analysis showed that customers had lower demand for the product once they had trained their initial crop of new recruits because of low staff turnover, and could not justify renewing the product because its pricing was set to accommodate a large group of learners.
After making these kinds of diagnoses, our win-loss program recommends remedies that can improve customer acceptance and retention. Such recommendations span a broad range of potential changes, including product improvements, pricing restructuring, revamped support programs, and organizational changes.
News that Twitter is now selling selected streams of tweets for analytical purposes illustrates one of the most intriguing aspects of information businesses: Their ability to find new uses for content originally created for very different purposes. Like Rumpelstiltskin spinning straw into gold, these new applications are often as valuable as the original applications, and they often strengthen competitive differentiation. Here are a few other examples:
- ShopperTrak, the leading provider of people-counting technology for retail stores and malls, has started aggregating the data from its customers and deriving competitive benchmarks, market shares, and analytics that it now sells back to customers as a separate product (anonymously, of course).
- TyMetrix, a major vendor of web-based electronic billing applications linking corporations to their outside law firms, has partnered with the Corporate Executive Board to create the “Real Rate Report,” which analyzes billing data from more than 4,000 law firms covering $4 billion in billings to corporate clients. These analyses help corporate counsel understand billing trends, benchmark themselves, and optimize their spending on legal services.
- CrimeDex is a derivative product from 3VR, a video surveillance vendor serving banks, hotels, airports, retailers, and industrial companies. With CrimeDex customers can share videos and details of suspected crimes on their premises via a searchable online database, thereby increasing the possibility of catching the criminals. CrimeDex has over 1,500 users circulating information to over 600,000 investigators and law enforcement professionals from over 1,000 private and public entities nationwide.
Derivative opportunities like these typically rely on aggregating data or linking users in such a way that the sum is much greater than its parts. Executing on such opportunities requires an underlying product that is compelling enough to gain a critical mass of market coverage on which to build derivative products. It can take years for these pieces to fall into place, so having capital and patience is critical for success.
We often think of the government as behind the times in fostering the adoption of new technologies. But healthcare provides a counter-example where the federal government is driving positive change. That ability stems from its power as the nation’s largest health insurer, paying for Medicare and Medicaid. Much as a major retailer like Walmart can force suppliers to comply with its supply chain processes and technologies, the government is now using its purchasing power to compel healthcare providers who receive reimbursements to upgrade their practices and technologies.
Electronic medical records (EMRs) have long been recognized as one of the most fundamental technologies in improving healthcare economics, yet their adoption has been slow because of a lack of standards and the reluctance of private physicians to purchase them. Under the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009, the government has created a program of grants for any physician or hospital that implements an EMR adhering to a set of so-called “meaningful use” standards designed to ensure that these technologies will actually be used to effect a significant impact on cost and quality of care. In addition, the government has allocated funds for creating regional extension programs that provide technical assistance and share best practices to accelerate health care providers’ effective use of EMRs. (Think extension agents in the Agriculture Department’s highly-successful program that modernized farming in the 20th century.) The government is also funding the establishment of regional health information exchanges connecting health care providers.
Another example of the government fostering the adoption of new technology comes from a Medicare rule requiring pharmacists to conduct an annual comprehensive medication therapy management (MTM) review and provide personal counseling to any patient over 65 years old who takes multiple prescription drugs. The purpose of the MTM review is to ensure that a patient’s drugs aren’t duplicative or potentially harmful in their interactions. Complying with this rule will require that pharmacists get equipped with systems that can automatically screen thousands of patient records against drug databases to identify potentially problematic situations.
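At its core, the screening such systems perform is a lookup of each patient’s drug list against a database of known interactions. Here is a minimal sketch of that idea; the drug names, interaction entries, and function names are all invented for illustration, and a real system would use a licensed drug-interaction database:

```python
# Hypothetical sketch of automated MTM-style screening: check each
# patient's drug list against a database of known pair-wise interactions.
# All drug names and interaction data below are invented for illustration.

from itertools import combinations

# Toy interaction database: unordered drug pairs mapped to a warning.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def screen_patient(drugs):
    """Return flagged interactions for one patient's drug list."""
    flags = []
    for pair in combinations(set(drugs), 2):   # every unordered pair
        key = frozenset(pair)
        if key in INTERACTIONS:
            flags.append((tuple(sorted(key)), INTERACTIONS[key]))
    return flags

# A patient on three drugs, two of which interact:
print(screen_patient(["warfarin", "aspirin", "metformin"]))
```

Scaling this to thousands of patient records is then just a matter of running the same check over each record, which is exactly the kind of bulk screening the rule would demand.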
These are just two examples from among many of the government taking a positive, proactive approach toward technology to improve healthcare and lower costs. It’s too early to know whether these moves will result in the desired adoption of new technologies and, more importantly, whether they will ultimately improve healthcare economics. But some of the early results are promising: In its initial month, more than 21,000 healthcare providers initiated registration for the EMR incentive program.
The New York Times desperately needs to find a digital pay model that works. Online advertising now accounts for 25% of the newspaper’s revenues, but will probably never be adequate to fund its operations as print advertising continues to decline. The newspaper’s digital subscription plan announced last week is an attempt to build a subscription revenue stream while it continues to try to grow online advertising. No one knows whether this approach will work, but continuing to rely solely on online advertising was not an option.
The Times’ plan is essentially a freemium (free-premium) model that’s been adopted by some internet services (e.g., LinkedIn, Skype, and Pandora), but remains untested among consumer content information sites. Under its new plan, The Times lets users access content anywhere on its site up to a certain limit (20 articles per month), beyond which users have to pay for a monthly or yearly subscription. Occasional readers have free access while the newspaper captures revenue from more serious readers. This approach conforms to one of our rules for successful pricing: Scale pricing to capture the levels of value that different types of users realize from a service. The Times’ new pay model also fixes problems with the “walled garden” model it tried from 2005-07 in its TimesSelect service, under which its unique content, such as op-ed columns, was put behind a pay wall where it was completely inaccessible to non-subscribers. That model ran counter to another of our pricing rules: Always show non-subscribers what they are missing. The old pay wall deprived would-be subscribers of the opportunity to sample the very content most likely to drive them to subscribe.
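The metered model above reduces to a simple counting rule: track each reader’s article views per month and cut off free access past the threshold. A minimal sketch of that logic follows; the 20-article limit is from the Times’ announced plan, but the class and method names are invented for illustration:

```python
# Toy sketch of a metered paywall: count article views per (user, month)
# and block non-subscribers past a free limit. The 20-article threshold
# mirrors the Times' plan; everything else here is hypothetical.

from collections import defaultdict

FREE_ARTICLES_PER_MONTH = 20  # the Times' announced monthly limit

class MeteredPaywall:
    def __init__(self, free_limit=FREE_ARTICLES_PER_MONTH):
        self.free_limit = free_limit
        self.views = defaultdict(int)  # (user_id, month) -> article count

    def can_read(self, user_id, month, is_subscriber=False):
        """True if this view is allowed; non-subscribers are metered."""
        if is_subscriber:
            return True
        key = (user_id, month)
        if self.views[key] < self.free_limit:
            self.views[key] += 1
            return True
        return False  # past the limit: prompt for a subscription

# Demo with a small limit: first 3 reads free, then blocked.
paywall = MeteredPaywall(free_limit=3)
print([paywall.can_read("alice", "2011-03") for _ in range(5)])
```

The counter resets each month simply because the month is part of the key, which matches the per-month metering the plan describes.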
The Times’ plan runs counter to the notion that consumers won’t pay for content. The information business is filled with exceptions to common wisdom. With its unique content (i.e., in-depth reporting from all over the world, investigative reporting, and opinion columnists), The Times may prove that sometimes consumers will pay for what they read. The Times claims that its consumer research shows high willingness to pay. For now, The Times has excluded Kindle and e-book reader subscriptions from its new pricing plans; these subscriptions will continue to exist as stand-alone subscriptions, probably because they are sold through Amazon, and aren’t tied into The Times’ subscription systems, so there is no way to determine whether a Kindle user is also a Times print or digital subscriber. The Times has announced a hike from $15 to $20 per month for a Kindle subscription — a pretty strong sign that The Times is confident readers who pay are relatively price insensitive.
History teaches us that each new communications medium imitates its ancestors before finding its own role. We are just beginning to see the evolution of book publishing from imitation to innovation, thanks largely to the iPad. To date, book content has been mostly ported to new platforms without much added functionality. One might even argue that the Kindle is beloved mostly because it closely imitates the print reading experience but on an ultra-portable device – without the clutter and noise of web-based reading because it offers no movies, no music, no links, and no sharing. In its latest software upgrade, it took a back-to-the-future step even closer to print by restoring page numbers to its eBooks.
The iPad, despite its considerable multimedia capabilities, has mostly been about porting existing versions of books with only modest feints toward multimedia. But there are some emerging examples of how tablets may find their mojo for book publishing. Not surprisingly, the drive behind this innovation seems to be more from technologists than from publishers. One of these pioneers, Push Pop Press, is a new company formed by former Apple designer Mike Matas to develop interactive products specifically for the iPad and iPhone. The company’s work is mostly still under wraps, but the company has been giving demos of an interactive version of Al Gore’s book Our Choice. Push Pop employs a physics engine designed to give users a highly intuitive, seamless experience as they interact with photos, videos, music, maps, and interactive graphics. The usual user interface of status bars and tabs has been eliminated so that the content and device become one. It’s not clear yet whether Push Pop will become a publisher or remain a technology provider to publishers.
Another pioneer is Inkling, which has produced several digital textbooks. Moving beyond the multimedia experience that most of the major textbook publishers have achieved on PCs, Inkling has sought to re-create textbooks as digitally-born works in which the page is no longer a basic building block. The company says, “…a page is a page not because it makes sense for the content itself, but because that’s just what happened to fit. Enter iPad. There’s no such thing as a page….There’s a display instead of ink. There’s memory instead of paper.” Inkling is also trying to make the textbook into a social learning experience by letting students and faculty share ideas and develop conversations around specific parts of the content.
In newspaper and magazine publishing, there’s a much longer history of evolution toward multimedia versions on the web and more recently in apps. However, it took many years after the web technology was available for publishers to widely embrace products with multimedia at their core, rather than just use multimedia as a way to augment text. How books will adapt to these new technologies is a chapter that’s not yet written.
These days it feels like the internet favors the most successful players in a winner-take-all game. Look at the near monopolies of Facebook, Twitter, Google, Amazon, Craigslist, iTunes, and Pandora. These are some of the examples cited in a recent Forbes article arguing that the internet is fostering monopolistic businesses. But the winner-take-all phenomenon occurs mostly with consumer information services. In business-to-business information the dynamic is strikingly different. Most industries have two and sometimes three highly-competitive information suppliers. Think Westlaw vs. LexisNexis in law; McGraw-Hill vs. Reed in construction; Bloomberg vs. Thomson Reuters in financial information; Wolters Kluwer vs. Elsevier in clinical healthcare information; First Data Bank vs. Wolters Kluwer vs. Elsevier in drug information; Standard & Poor’s vs. Moody’s in debt ratings; and the list goes on.
There are some good reasons for this difference. Consumer information is almost always free and carries few switching costs. Consumers will switch to a new service at the drop of a hat if they believe it is somehow better. B-to-B information typically entails significant fees and is, therefore, a carefully-considered purchase. Switching can be expensive given potential costs to retrain users, adapt workflows to a new vendor, or make changes to internal systems that process such information.
With consumer information, the user and the buyer are the same person. By contrast, in a business, users may be different from buyers – and their interests may be different. Consider a research analyst in a company who wants a particular information service for convenience regardless of cost, versus a manager whose primary interest may be in meeting a budget even if it means selecting a service that may be only “good enough” rather than the best. In addition, contracts put the brakes on rapid, whimsical switches and also give vendors time to compensate with product upgrades, better pricing, or other ways to retain customers.
Few B-to-B information services benefit from network effects — the increasing value that accrues as more users adopt a specific information service. For example, Facebook gets more valuable to individual users as its overall network grows; the same holds for Skype and PayPal. LinkedIn is one of the few B-to-B services that have enjoyed such a network effect. Others are trading networks and online communities (the majority of which are non-commercial) that may serve specific business and professional communities. But on the whole, few B-to-B information services enjoy such benefits.
The bottom line: Winner-take-all success is most likely to occur when individual users can make their own purchase decisions outside of any institutional constraints and that, in turn, is more likely to happen when services are free.
Steve Jobs has spoken and publishers are pissed off. Apple’s announced deal for publishers who sell subscriptions through its App Store has given publishers a rotten taste in their mouths. Typical of today’s content wars, the issue isn’t just about economics. It’s about control.
At first glance, the deal looks decent for publishers: Apple lets them keep 70% of their subscription price, which is the same cut that Amazon offers on subscriptions sold in its Kindle store. Furthermore, Apple lets publishers set the price of their subscriptions. The trouble is in the not-so-fine print. Apple prohibits apps sold through its store from being used to purchase content from anywhere else, except on publishers’ own sites. Worse, while Apple does allow publishers to sell subscriptions through other sites, the terms must be the same or better than those offered to App Store subscribers. Many publishers see these restrictions as trampling on their commercial freedom, especially at a time when the market is exploding with non-Apple devices, such as Honeycomb tablets, Android phones, Blackberries, and PCs. A deal with Apple will restrict publishers’ options in creating different deals for versions on other devices. Another problem is that it’s not clear what information Apple is going to share with publishers about their subscribers. These are, after all, the publisher’s subscribers, even if they purchased through Apple, but Apple says it will share data only if subscribers consent.
Meanwhile, as if on cue, Google has announced its One Pass payment system, which it is positioning as a better alternative to Apple’s App subscription store. For starters, Google will keep just 10% of the subscription revenue as opposed to Apple’s 30% bite. One Pass is more flexible in letting publishers offer not just subscriptions, but sales of single copies, metered access, or freemium deals. One Pass also solves one of the most vexing problems: allowing consumers to access content they’ve purchased through a single sign-on with any device (PC, web, phone, tablet). Most importantly, Google isn’t restricting how publishers offer their content anywhere else.
But One Pass falls short for now in one key area: It doesn’t come with the established strength of Apple’s App Store and its 350,000 apps. Despite their grumbling, it will be hard for most publishers to resist the allure of Apple’s massive reach — more than 160 million devices worldwide. For now Apple might just call the tune.
A recent Wall Street Journal article questioned whether Amazon makes money selling the Kindle and e-books. It’s an interesting question, but probably doesn’t matter much, least of all to Amazon itself. For argument’s sake, let’s assume that Amazon makes little if any profit on the device, given the costs of manufacturing, licenses for some of its software, distribution, and the network that customers use to order and download digital content. It’s pretty obvious that Amazon is pursuing a razor-razor blade strategy. Seeding the market with inexpensive devices primes it for recurring digital content sales. Since lowering the Kindle’s price to $139 last year, Amazon has seen Kindle sales skyrocket. More importantly, Amazon’s e-book sales now exceed its print book sales. So far, so good. But what about Amazon’s margins on digital content sales? Amazon doesn’t tell, but it’s a good guess that even at the lower prices it charges for digital content, Amazon earns a profit, given the much lower (i.e., nearly negligible) costs of delivery. Furthermore, there’s growing evidence that e-book customers buy more because of the ease of ordering and consuming content.
Amazon built its businesses by moving aggressively, starting with books and then expanding into new areas of commerce, securing large market share, and not worrying about short-term margins. It spent years losing money before becoming a profitable (and then very profitable) business. It knows that it can translate market share into competitive advantage and, ultimately profits. Another reason that Amazon doesn’t care about the immediate profitability of e-books is that it has to be in the business anyway. With customers rapidly gravitating to e-books as well as digital newspapers and magazines, Amazon has to evolve with customer demand or risk losing all of its content business. A final reason that Amazon must maintain its market-leading position: Co-opting publishers. Amazon has become many publishers’ leading retail outlet for physical books. However, publishers have long dreamed that the digital world might enable them to break their dependence on middlemen like Amazon and build their own direct-to-customer sales channels. Amazon’s best defense is to demonstrate that it’s still the destination for customers to find and purchase content, whether print or digital. Amazon’s advice to publishers: Resistance is futile.
Technology is a continuous game-changer for publishers, even those in seemingly traditional sectors. Recent events in scholarly journal publishing are notable examples of what can happen over time. This week’s announcement by John Wiley & Sons, one of the biggest journal publishers, that it would start publishing open-access journals is an important domino to fall. “Open access” is an alternative, internet-based business model in which subscriptions are free, but authors are charged fees to publish, typically ranging from $500-$1500 per article. This flips the traditional model, under which commercial publishers have long been criticized for charging high subscription prices to captive audiences – academic and research institutions that must have access to the leading research in a myriad of fields. While the open access model generates less revenue than the traditional, subscription-based model, it can compensate to some degree by being born digital – therefore never being saddled with the costs of publishing in print.
Wiley’s move is an acknowledgement that open access journals have succeeded. In biology and medicine alone, the number of articles published in open access journals has exploded over the last decade from only about 1,800 articles in 2000 to over 56,000 articles in 2010, according to the National Institutes of Health, National Library of Medicine, and National Center for Biotechnology Information. Though they haven’t driven any established journals out of business, new open access journals have taken hold, especially in the sciences, where there is a rapid expansion of knowledge.
Another key element of journal publishing, peer review, may be radically changed, also thanks to the internet. Peer review has been the sine qua non of all high-quality journals, whether traditional or open access. Peer review panels, typically composed of a handful of experts, vet journal articles before they are accepted for publication. The reviewers’ identities and comments are usually kept confidential. Now there’s a nascent movement to replace peer review with an open process in which anyone – researchers, authors, and readers — can comment openly on an article. This is a radical idea, and it has plenty of critics. But one reason to take it seriously is that it’s being promoted by Vitek Tracz, a veteran innovator and successful serial entrepreneur in scholarly publishing, and one of the drivers behind the success of open-access publishing. He has organized a growing network of leading scientists called the Faculty of 1000, or F1000, with 10,000 researchers and clinicians who rate and comment on articles in biology and medical research. If successful, F1000 could transform peer review from a closed, one-time, pre-publication event into a continuous, open discussion among researchers, authors, and readers. Some people have called it “the Facebook of science” and it just might be. Right now F1000’s reviewers cover just 2% of all published articles in the biological and medical sciences, but every successful information business starts small.
Clients sometimes ask us to help them assess their brands. Usually what they are asking is whether they have an effective name for their business or product line. In general, a product or company name really doesn’t matter, though there is a tendency in our mega-hyped society to see brands as determinative – to think that Facebook, Google, and Twitter have succeeded because they have evocative, recognizable, and distinctive names. But lots of companies succeed in spite of undistinguished or even silly brand names. (Think International Business Machines, Hewlett-Packard, Automatic Data Processing, and Apple Computer.) We advise clients not to waste their money trying to rename products or companies without good reason because re-branding carries high costs: devising and testing new names, avoiding conflicts with existing brands, changing collateral, and then educating the market about the brand change. Often concern about the brand name is a signal that something more fundamental is amiss — typically that the product or company itself isn’t keeping up with customer needs and/or competition.
There are, however, some good reasons to change brand names. One is to avoid confusion, such as when a company or product name misrepresents or understates what it does. For example, Chicago Public Radio renamed itself Chicago Public Media to reflect its broader use of media beyond radio broadcasting, including web publishing, podcasting, streaming, and even live events. International Harvester became Navistar when its product line had grown far beyond farm equipment, and similarly General Electric became GE, American Telephone & Telegraph became AT&T, and International Business Machines became IBM to reflect their broader businesses.
Another reason for a name change is that the market has already changed the name informally. Michael Bloomberg’s company was founded as “Innovative Market Systems” and sold a financial terminal called “Market Master,” but it became popularly known as “the Bloomberg.” There’s an apocryphal story that what eventually forced the name change was that Michael Bloomberg went on a sales call and was left waiting in the customer’s reception area for over an hour. It turned out that the customer was waiting for someone from “Bloomberg” and ignored the guy from “Innovative Market Systems.”
Another good reason for changing brand names is to unify products in the minds of customers. Large information conglomerates, such as Thomson Reuters and Wolters Kluwer, have re-branded their various niche products under their uber-brands. The desired effect is to convey a degree of product integration (or at least cohesiveness) as well as corporate solidity. One tricky part, however, is that many of their products already exist under prominent, well-respected brand names, such as Westlaw, CCH, and Lippincott Williams & Wilkins. Imposing an uber-brand, such as Thomson Reuters or Wolters Kluwer, on top of — or instead of — existing brands may confuse customers loyal to the existing brand names. It’s like the popular girl from high school who gets married. Long after she’s taken on her married name, she’s still remembered by her maiden name.