A Tale of New Standards

The US healthcare system is going through a wrenching change as it nears the deadline for implementing ICD-10, the latest version of its standard classification for medical diagnoses. This story illustrates the difficulties of establishing industry standards even when they have enormous long-term benefits. The standard, officially known as the International Classification of Diseases, has been in use in earlier versions since the 1960s, but ICD-10 represents a quantum leap in comprehensiveness and complexity. The current version (ICD-9) contains approximately 13,000 diagnosis codes, whereas the new version contains more than five times as many. For inpatient hospital procedures, there will be 87,000 codes, compared to just 4,000 under the current system. A single disease or injury may now be described with additional elements that capture more detail, such as the cause, severity, or anatomic site of the problem.

The changeover is a major headache for physicians, hospitals, and anyone else who bills insurance companies or government payers, such as Medicare and Medicaid. Most other industrialized countries have already implemented the standard, but in the US, the government has delayed the original 2013 implementation deadline several times. The balkanized structure of US healthcare has been one of the main impediments. Hundreds of millions of dollars have been spent bringing billing systems up to date and, more importantly, retraining personnel to code medical bills under the new schema. Among other things, the new system requires that coders know more human anatomy. For example, a condition previously coded simply as “broken forearm” must now be coded more precisely as “fracture of lower end of radius.” Miscoded bills may be rejected by insurance companies, so physicians and hospitals worry that such rejections will delay payment, driving some to obtain lines of credit to tide them over. For patients, miscoded paperwork could delay their insurance companies’ approval of treatment.
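To make the added granularity concrete, consider the forearm example above. In ICD-10-CM, “fracture of lower end of radius” falls under category S52.5, and additional characters specify fracture type, laterality, and encounter. The sketch below is illustrative only (the helper function is hypothetical, and real ICD-10-CM composition rules vary by chapter), but it shows how one code packs in the extra detail:

    # Illustrative only: a simplified decomposition of one ICD-10-CM code.
    # describe_icd10cm is a hypothetical helper, not part of any real library;
    # actual ICD-10-CM composition rules vary by chapter and category.
    def describe_icd10cm(code: str) -> dict:
        """Break a fracture code such as 'S52.501A' into its parts."""
        category, detail = code.split(".")
        return {
            "category": category,        # S52 = fracture of forearm
            "site": detail[0],           # 5 = lower end of radius
            "fracture_type": detail[1],  # 0 = unspecified fracture type
            "laterality": detail[2],     # 1 = right arm
            "encounter": detail[3],      # A = initial encounter, closed fracture
        }

    print(describe_icd10cm("S52.501A"))

A coder who once wrote “broken forearm” must now supply every one of those characters, which is exactly why retraining has been so expensive.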

Set against these challenges are the long-term benefits of new standards. Industries that implement standards gain opportunities for more transparency and new insights from applying analytics to a richer database. In this case, payers, such as insurance companies and Medicare/Medicaid, will be able to get a much more precise understanding of what they are actually paying for. Similarly, hospitals and healthcare networks will gain a new tool for understanding the economics of treating patients. The impact of ICD-10 goes beyond the codes themselves. The government is requiring that the coding scheme be applied to virtually all patients, whereas ICD-9 applied just to Medicare and Medicaid patients, only about one-tenth of all patients. The resulting database of patient illnesses and injuries will provide a broader and more representative picture of healthcare in our country.



Why Win-Loss Analysis Belongs in Your Diagnostic Arsenal

Sports teams use game films to review their performance, understand their strengths and weaknesses, and identify ways to improve. Business-to-business vendors rarely have the same opportunity because their sales processes take place in a decentralized environment that defies such scrutiny. Ask a CEO or VP of Sales why the company really won or lost each of its last 50 deals, and he or she probably cannot give an informed explanation. One problem is that because sales result from a combination of factors — a compelling product, offer, and engagement with the customer — it is often difficult to identify which elements really drove (or killed) the sale.

Perhaps the most vexing obstacle to understanding why customers bought or did not buy is that customers often do not tell the full truth. Especially when they have decided not to buy, customers look for an easy way to shut down the conversation without offering any ammunition a vendor could use to re-open the sales process. For management, the true situation becomes further obscured by the distortion that occurs when sales forces self-report outcomes, sometimes explaining them in ways that simplify the causes (“It was a budget problem”) or deflect blame (“Our product was too expensive”).

Win-loss analyses can cut through this fog by providing an objective, in-depth understanding of the reasons that current or potential customers have selected specific products or services. While sometimes assumed to focus primarily on elements of the sales process, an effective win-loss analysis should be a broad-based assessment that considers all internal as well as external factors affecting a sale: product capabilities and underlying technology; pricing and other commercial terms; sales processes; technical and end-user support; competitive alternatives; and customer decision-making processes.

Learn more and read case studies in our Guide to Successful Win-Loss Analysis.



Retailing and the Internet of You

Walk into a retail store and you may be vaguely aware that you are being counted. Many retailers have long used systems from companies such as Irisys, Nomi, Sensormatic, and ShopperTrak to measure foot traffic entering their stores by time of day, but the growing sophistication of traffic-counting in shaping the customer experience is an evolving story of information aggregation, integration, and scale.

The story starts with the counting devices themselves, typically mounted over doorways, which have become sophisticated enough that some can distinguish adults from children, or people from baby carriages. After all, retailers are interested in the people carrying the wallets.

Unfortunately, knowing how many shoppers entered a store by time of day is not very useful on its own. What retailers really want to know is conversion: the percentage of shoppers who actually made a purchase, and the size of those purchases. To analyze conversion, retailers marry their traffic counts to their cash-register data.

One of the key factors affecting conversion is the availability of sales help. Many of us have had the experience of walking into a store, taking a quick look, and starting to make a U-turn to leave when a salesperson intercepts us and asks if we need help. That conversation can often result in a sale that otherwise would have been lost. Therefore, retailers now integrate traffic data into their scheduling systems to ensure that they have adequate staffing for projected traffic levels at different times of day (and are not overspending on staff when it is not needed).
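None of these vendors publishes its internal systems, but the core arithmetic of conversion analysis is simple. Here is a minimal sketch, with made-up numbers, of joining hourly door counts to cash-register data to compute conversion and average transaction size (in Python with pandas):

    # A minimal sketch (not any vendor's actual system) of conversion analysis:
    # join hourly door counts to point-of-sale totals. Numbers are made up.
    import pandas as pd

    traffic = pd.DataFrame({
        "hour": ["10:00", "11:00", "12:00"],
        "visitors": [120, 180, 260],
    })
    sales = pd.DataFrame({
        "hour": ["10:00", "11:00", "12:00"],
        "transactions": [18, 22, 52],
        "revenue": [540.00, 710.00, 1690.00],
    })

    report = traffic.merge(sales, on="hour")
    report["conversion"] = report["transactions"] / report["visitors"]  # share of visitors who bought
    report["avg_ticket"] = report["revenue"] / report["transactions"]   # average purchase size
    print(report)

The same joined table, extended with staffing levels, is what lets a retailer see that conversion sags in the hours when the sales floor is understaffed.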

Benchmarking is one more step in the evolution of retail traffic data. Retailers now use traffic, conversion, and staffing data to compare performance across their stores and identify ways to improve their laggards. Some vendors, such as ShopperTrak, now have large enough installed bases to benchmark a retailer against a peer group of stores in the same market (anonymously, of course). For example, a mobile phone retailer might want to compare traffic at its stores in downtown Chicago to other mobile phone stores in the same area. ShopperTrak now also sells market indices that track foot traffic in various retail segments, such as apparel, shoes, and phones, which investors can use as early indicators of retail trends. FootFall, a UK-based player, provides similar indices of retail traffic for various countries.

The lesson here, as in other information businesses, is that aggregation, integration, and scale increase the utility and value of simple information. Think about that on your trip to the store.



The New Face of Crowdsourcing

Crowdsourcing is evolving from exotic to mainstream as it becomes increasingly incorporated into the operations of major information companies, such as Thomson Reuters and Bloomberg, to build and enhance their content collections. In the first phase of crowdsourced labor, services like Amazon’s Mechanical Turk sprang up as marketplaces where low-skilled workers performed simple tasks at pennies per task. These services demonstrated the potential value of crowdsourced labor, but lacked mechanisms to make it feasible on an industrial scale.

Now a new generation of service providers offers tools that enable large companies to take advantage of crowd labor more easily within their content operations. One of these players, WorkFusion, a company founded out of MIT research, illustrates the larger potential of crowdsourcing.

WorkFusion provides a platform in which crowdsourced labor is part of a larger toolkit for managing internal as well as external sources of talent globally (e.g., Amazon’s Mechanical Turk, Elance/oDesk, and uSamp, among many others). Using the WorkFusion platform, companies can define a project by specifying each step, and then take advantage of both crowd labor and tools from WorkFusion’s library to automate some manual tasks. One of WorkFusion’s most powerful capabilities is that it can apply statistical techniques to the work executed by crowd labor to analyze which workers are best suited to certain types of tasks. It can also use its scale of pattern data to analyze work steps and rapidly create applications that replace some human labor with automation. WorkFusion’s approach has attracted customers from financial services, ecommerce, healthcare, and consumer packaged goods.
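WorkFusion has not published its statistical methods, so the following is a minimal sketch of one common approach such platforms can take: score each crowd worker’s accuracy on “gold” tasks with known answers, then route each task type to the workers who handle it best. All names and data are made up:

    # A minimal sketch (not WorkFusion's actual method) of matching crowd
    # workers to task types: estimate per-worker accuracy on gold-standard
    # tasks with known answers, then route work accordingly. Data is made up.
    from collections import defaultdict

    gold_results = [  # (worker, task_type, answered_correctly)
        ("w1", "tagging", True), ("w1", "tagging", True), ("w1", "extraction", False),
        ("w2", "tagging", False), ("w2", "extraction", True), ("w2", "extraction", True),
    ]

    tallies = defaultdict(lambda: [0, 0])  # (worker, task_type) -> [correct, total]
    for worker, task_type, correct in gold_results:
        tallies[(worker, task_type)][1] += 1
        tallies[(worker, task_type)][0] += int(correct)

    def best_worker(task_type: str) -> str:
        """Pick the worker with the highest observed accuracy for a task type."""
        scores = {w: c / t for (w, tt), (c, t) in tallies.items() if tt == task_type}
        return max(scores, key=scores.get)

    print(best_worker("tagging"))     # -> w1
    print(best_worker("extraction"))  # -> w2

In production such scoring would need far more data and more careful statistics, but the principle is the same: the platform, not a human manager, learns who is good at what.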

This more evolved form of crowdsourcing holds the promise of giving information companies alternatives to their own internal offshore operations or outsourcing firms. Both often lack the flexibility to handle peak loads and/or new types of work. Furthermore, as labor costs rise in the developing world, companies have had to seek new outsourcing relationships in countries with cheaper labor. New crowdsourcing platforms such as WorkFusion’s could allow information companies to take advantage of crowd labor on a flexible basis, accommodating peaks in their work levels and/or taking on short-term projects.

As crowdsourcing becomes both cheaper and more reliable, its real value will lie not just in cutting the cost of existing content operations, but also in enabling vendors to improve their offerings in at least three ways. First, information companies will be able to collect and organize content that was previously too difficult or expensive to collect. Second, this more advanced crowdsourcing will make it possible to enhance content, such as through tagging, to an extent that was previously economically unfeasible. Third, and perhaps most powerfully, crowdsourcing will make it easier for information companies to link information in separate databases, thereby driving entirely new applications. Many information vendors today face fundamental challenges when trying to uniquely identify and normalize entities across multiple sources. (Is the company called “General Motors” the same as the company identified as “GM”? Is the company called “IDC” in one source the same as the “IDC” in another?)
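To see why this is hard, consider a bare-bones matcher built only from Python’s standard library. The alias table and threshold below are made-up stand-ins for the reference data a real vendor would maintain, with crowd workers reviewing the ambiguous cases:

    # A minimal sketch of entity normalization/matching. The alias table and
    # threshold are illustrative; real systems use many more signals
    # (addresses, identifiers, corporate hierarchies) plus human review.
    import re
    from difflib import SequenceMatcher

    ALIASES = {"general motors": "gm", "international business machines": "ibm"}
    SUFFIXES = r"\b(inc|corp|corporation|co|ltd|llc)\b\.?"

    def normalize(name: str) -> str:
        name = re.sub(SUFFIXES, "", name.lower())  # drop legal suffixes
        name = re.sub(r"[^a-z0-9 ]", " ", name)    # strip punctuation
        name = " ".join(name.split())              # collapse whitespace
        return ALIASES.get(name, name)             # map known aliases

    def same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
        na, nb = normalize(a), normalize(b)
        return na == nb or SequenceMatcher(None, na, nb).ratio() >= threshold

    print(same_entity("General Motors Corp.", "GM"))  # True, via the alias table
    print(same_entity("IDC", "IDC"))  # True -- but which IDC? Names alone can't say.

The second example is the crux: string matching alone cannot decide whether two identical names refer to the same entity, which is where cheap, reliable human judgment becomes valuable.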

Today, crowdsourcing tends to be restricted to simple, repetitive tasks. But more advanced crowdsourcing from companies like WorkFusion shows the potential for it to play a larger, more valuable role.



The Future of MOOCs

I recently heard a great talk about the future of education by Sebastian Thrun, one of the founders of online course provider Udacity.  Thrun is a Stanford professor who made headlines when 160,000 students from 195 countries signed up for his MOOC on artificial intelligence.  Whether correct or not – and it’s mostly too early to know – Thrun offers compelling views, many based on his experience teaching online courses.  You can listen here to the entire one-hour talk, which he delivered to the Commonwealth Club of California, America’s oldest and largest public affairs forum.

Although “only” 23,000 of the 160,000 students who registered actually completed his artificial intelligence course, Thrun points out that this number exceeds Stanford’s entire student body.  Nevertheless, Thrun concedes that the typical 3% retention rate for MOOCs is a problem.  More interesting is that the top 413 students in his online course achieved a higher grade than any of the 200 Stanford students who completed it.

Thrun is excited by online courses as a way to change education’s delivery model — having a learned person talk to other people synchronously.  That model, which hasn’t changed in a thousand years, might have made sense before recording technology, but it isn’t effective in an age of rapid changes in knowledge.  Nor does it address the differences in the way that individuals learn.  The traditional education model forces everyone to learn together at the same speed.  While smaller classes, especially those that group similar students, may work better, they are inherently uneconomical.  The strength of online education is that students can work at their own pace, review instructional materials as many times as they need, and be assessed when they are ready rather than on the same day as everyone else.  Thrun points out that this is the model of videogames, which allow people to progress at their own speed, get constant feedback and assessment, and be rewarded with points, recognition, and satisfaction when they get the right answer or master an element of the game.  The goal of education, Thrun says, should likewise be mastery, not timing.

The biggest revolution in learning, says Thrun, is not MOOCs, but Google and Wikipedia because they have turned us into on-demand learners.  This form of learning is profoundly different from conventional education, which imposes highly-curated content and an exact path for how students learn.

The quality of students graduating from America’s top colleges and universities is unmatched in the world, but, Thrun says, these elite institutions graduate the best students because they admit the best students.  Unlike manufacturing, our educational system does not assess the value of its finished products against the value of its raw materials.  Furthermore, he says, these institutions are extremely exclusive and expensive.  Online education, he argues, can extend access to knowledge beyond the few high-achieving students fortunate enough to attend elite institutions.  He also argues that online courses will create a more transparent system in which to judge the quality of individual professors.

Thrun is especially bullish on the potential positive impact of online education on community colleges.  In California, which has the nation’s largest community college system, nearly half a million students are wait-listed because the state lacks funds for classroom space.  Thrun believes that online education could help address that problem, as well as the fact that 60% of incoming state university students fail the entrance exams and must take remedial classes.  Furthermore, 40,000 college students in the California state system fail and must retake classes each year, at enormous financial cost to themselves and to the educational system.  With that in mind, Udacity is piloting a college readiness program in three subjects for high school students.  The price of the courses is just 10% of that of a comparable remedial course in the California university system.  Thrun points out, however, that Udacity’s solution for these students is more than just online classes; it also includes intensive mentoring.

Thrun acknowledges that online education must still resolve issues, including those related to credit and cheating, although he feels that these aren’t as difficult as those involving the acceptance of online education by students and society at large.  America, which was built on trust in the individual, Thrun argues, must apply those values to education and trust the individual to excel. The result, he predicts, will be that many students left behind by the rigidity of traditional education will become engaged learners in the new online environment.



Is Acxiom Really Being Transparent with its Consumer Data?

Amid some fanfare, consumer data powerhouse Acxiom just launched AboutTheData.com, a service that lets consumers look at the data the company collects about them.  The company’s official rationale, as spelled out on its website and reiterated in a Sunday New York Times article, is to be more transparent in response to the public’s growing concerns about the personal data that companies and the government collect.  But the cynic in me wonders whether AboutTheData isn’t so much about transparency as about crowdsourcing.  Because the service lets individuals edit Acxiom’s information about them, maybe what Acxiom really wants is for all of us to serve as editors and improve the completeness and accuracy of its database.

I used AboutTheData to look at Acxiom’s data about me and my wife.  I was surprised that so much data was either missing or inaccurate, given that we are middle-aged folks who have left a trail of publicly-available information throughout our adult lives.  For example, Acxiom’s records say that we are each single.  (We’ve been married for over 30 years.)  Acxiom has no information on our homeownership status, even though we have owned five homes over the course of 30 years, including our current residence for more than 10 years.  Acxiom’s individual and household income data was relatively accurate for my wife, but completely off the mark for me.  Acxiom doesn’t know my political party (I’ve been a registered Democrat for over 30 years and vote in every election), but does know my wife’s party affiliation.  Acxiom doesn’t know that I own a car (I’ve owned my current car for 10 years), though it does know that my wife has an automobile insurance policy.  Acxiom believes that my wife’s ethnicity is French.  (At best, she’s a nice Jewish Francophile.)  Among her household interests, Acxiom lists cooking, home decorating, and reading (all relatively true), but also computers, sports memorabilia, and reading magazines (none of which is true).  My interests, according to Acxiom, include boating, sailing, and boat ownership.  (I have never been interested in any of these activities and get seasick very easily.)  Acxiom lists home furnishings, hunting, and shooting as other interests, none of which interests me in the least.  It also lists me as a “mail order responder.”  (I do shop online frequently, but almost never in response to a catalog.  In fact, I’ve made it a personal crusade to remove our household from the mailing list of any catalog that arrives in the mail.)  According to Acxiom, I am interested in magazine reading.  (I subscribe to almost no magazines.)  Pretty much the only information that Acxiom has gotten right, other than my name, address, and birthday, is that I am interested in cooking (true) and own a cell phone.  (Who doesn’t?)

I am interested in other people’s experiences with AboutTheData.com.  Check out your own data on the site and let me know how complete and accurate it is.



How to Fix a Leaky Heart Valve

Journal publishers have long allowed authors to submit videos to supplement their articles, but an emerging model makes video the main attraction.  Last week Elsevier launched a video journal for gastrointestinal endoscopy, joining a small group of other publishers with video journals in such fields as orthopedics, physics, and psychotherapy.  Most of the video journal publishers also publish conventional written articles as companions to the video.  Like print journals, video journals use various business models.  Most, but not all, of the journals are peer reviewed, and most are subscription-based.  Unlike most print journals, however, video journals publish on a continuous basis as new “articles” are produced, rather than on a fixed schedule.

Another notable difference is the production process.  When a publisher accepts an author’s submission, the publisher then collaborates with the author to develop a script and then shoots, edits, and produces a professional quality video.  Despite the cost of video production, the economics of video journals looks promising, at least in part because they are born-digital and have no printing or physical distribution costs.  The first video journal, the Journal of Visualized Experiments (JoVE), became profitable within a year of its launch in 2008, and revenues were reported to have grown from $3 million to $5 million from 2010 to 2011.  The company now publishes over 60 articles per month, has over 550 subscribing institutions, and receives around 170,000 unique monthly visitors.  JoVE has also demonstrated that its model is scalable:  It has steadily expanded into different domains (currently eight and growing), including neuroscience, bio-engineering, chemistry, clinical medicine, and behavior, among others.

Demand for video journals appears especially strong outside the US, probably because many parts of the world lack access to live training in the latest laboratory or clinical techniques.  JoVE, for example, gets about two-thirds of its traffic from overseas and just one-third from the US, a proportion that would typically be reversed for most professional journals published in the US.

Because video is so effective at communicating how-to knowledge, publishers in a wide range of business, professional, scientific, clinical, and industrial domains are prime candidates to start video journals.  Publishers themselves do not need to make large investments, as they can easily contract out to video production firms.  At the same time, the change to a new medium may open the market to new players.  For example, the Video Journal of Ophthalmology and the Video Journal of Orthopaedics are published by companies with no experience in journal publishing.  However, these newcomers will have to demonstrate that their content has the legitimacy and quality of content produced through traditional publishers’ peer-review processes.

Video journals are likely to lead to the creation of educational materials and reference databases.  Across education, compilations of videos are growing rapidly as supplements to lectures and live demonstrations, especially because students can view them over and over on their own time.  This ability to apply video journal content to education could open new revenue opportunities for journal publishers to expand from their traditional research market focus.



Transforming Information Companies

Today’s surprise announcement that the Washington Post Company is selling its flagship newspaper, along with several other newspapers, to Jeff Bezos is only part of a wider, ongoing story of transformation.  Earlier this year, the company took a sharp turn away from publishing and media with the purchase of Celtic Healthcare, a provider of home healthcare and hospice services, and last month it bought Forney Corporation, a supplier of systems for power and industrial boilers.  These moves follow the sale of Newsweek in 2010 and the subsequent divestiture of some smaller publishing properties.  The company still owns Slate, The Root, and Foreign Policy magazine as well as Kaplan, the for-profit education company, but it has clearly decided that its future lies outside of publishing and media.

Other companies have made equally dramatic transformative moves through recent and not-so-recent M&A activity:

  • McGraw-Hill divested its education division in March as part of an effort to re-shape itself around financial information (or so the company’s new name says).
  • Thomson was years ahead of the market in divesting its large newspaper business in the 1990s — a step that laid the foundation for giant moves into financial information and then legal information businesses.  Along the way, Thomson divested its educational publishing business in 2007 and healthcare information business in 2012.
  • LexisNexis formed its risk analysis division by purchasing Seisint in 2004, a move that was bolstered by the much larger acquisition of ChoicePoint in 2008.
  • Google paid a whopping $1.65 billion for YouTube in 2006, anticipating the importance of video for both consumer and business applications.

Transformational transactions such as these are characterized by their focus on optimizing a company’s whole portfolio by moving into or out of entire businesses.  That distinguishes them from the more routine M&A deals that companies large and small undertake for a variety of reasons, such as rolling up competitors, adding complementary products and capabilities to existing product lines, and expanding into new markets.  From our experience in helping companies transform themselves, we observe that successful transformations share certain characteristics:

A transformational culture:  Companies that integrate M&A into their ongoing strategic planning are more likely to succeed in transforming themselves than those that treat M&A primarily as an opportunistic activity.  That means that operating executives, not just a company’s professional strategists, need to think about M&A as much as they think about ongoing operations.

Rigorous analysis:  The foundation of any transformation is portfolio analysis, a systematic review of a company’s business units to assess their future growth prospects, resource requirements, and likely returns.  Portfolio analysis must consider such factors as underlying market growth, competition, and the impact of new technologies.  Since operating executives typically have the keenest understanding of their markets, fusing their knowledge with the analytical skills of professional strategists often generates critical insights and better decisions.

Nimbleness:  Successful transformations often result when companies are willing to act early, even paying a premium for an asset before it can be fully justified by financials, or divesting before a business shows signs of distress.  Thomson was able to realize an attractive price for its newspaper businesses because it recognized the need to divest well before anyone else, and Google was willing to pay a large multiple for YouTube to secure an important new asset.

Of course, not all M&A-driven transformations work out well.  In the 1990s, Knight Ridder made the wrong bet by divesting its information businesses to focus on newspapers, spending over $1 billion in acquisitions before ultimately unloading the entire company in 2006.



Relationship Science: LinkedIn for the One Percent?

Relationship Science, a new company that bills itself as “the ultimate business development tool,” has been in the news because it recently raised $90 million from a set of marquee investors and, perhaps even more significantly, it is the follow-on act for Neal Goldman, whose first company, Capital IQ, was bought by McGraw-Hill for $200 million in 2004.  RelSci (as the company nicknames itself) helps you find relevant business contacts through people you know.  At first glance, it looks like LinkedIn, but there are some big differences:

  • Focus on the upper crust:  Like LinkedIn, RelSci allows you to see how a target contact or institution may be linked to people you know.  But in contrast to LinkedIn’s 225 million members, RelSci’s database currently contains information on only 2.2 million people that it deems “influential” based on such factors as their executive levels and board memberships in corporations as well as non-profits.
  • A reference database, not a social network:  RelSci is a highly-structured source of relationship information, but offers no social networking.  Unlike LinkedIn, there is no facility for linking or communicating with potential contacts or asking existing contacts to provide introductions to the people they know.  RelSci claims that 40% of the people in its database are not LinkedIn users anyway and therefore are not reachable through online social networks.  There is, however, one social-network-like aspect:  As part of the set-up process, users can upload their contacts from Outlook, LinkedIn, or other sources.  These relationships then get added to the user’s virtual database, enabling the system to include those contacts privately in the pathways that it displays for reaching targeted contacts (a simplified sketch of this kind of path-finding appears after this list).
  • Curated content:  In an age of user-contributed content, RelSci built and maintains its database the old-fashioned way, with hundreds of data analysts (mostly in India) using publicly-available sources from the web, company documents, and news articles as well as commercial databases, including FactSet, LexisNexis, Morningstar, GuideStar, and Noza.  (The latter two provide information on non-profit organizations, boards, and contributors.)  Not surprisingly, this is the same approach Capital IQ used in building its database of relationships for the institutional investment community.
  • Subscription business model:  LinkedIn uses a freemium model, providing a free service for everyone as well as premium tiers for people who need extra information or functionality.  Advertising is also an important revenue source.  By contrast, RelSci operates entirely on a subscription model, which starts at $9,000 annually for a minimum three-seat license.
  • No approval required for links:  LinkedIn’s relationships rely on users agreeing to link to each other, whereas RelSci assigns connections based on known (or implied) relationships from its database.  This more restrictive approach means that RelSci has fewer connections overall and lacks the serendipitous connections that occur on LinkedIn (e.g., friends and neighbors).  At the same time, RelSci’s approach mitigates the noise in social networks that comes from the high number of weak connections produced by “promiscuous connecting.”  It also means that users do not have to do anything in order to start using the product.
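RelSci has not disclosed how it computes those pathways, but the core idea is classic shortest-path search over a graph of known relationships.  A minimal sketch with made-up names and connections:

    # A minimal sketch (not RelSci's actual algorithm) of relationship
    # path-finding: breadth-first search for the shortest chain of known
    # relationships between a user and a target. Names/edges are made up.
    from collections import deque

    graph = {  # person -> set of people with known (or uploaded) relationships
        "you": {"alice", "bob"},
        "alice": {"you", "carol"},
        "bob": {"you", "dana"},
        "carol": {"alice", "target"},
        "dana": {"bob"},
        "target": {"carol"},
    }

    def shortest_path(start: str, goal: str) -> list:
        """Return the shortest relationship chain from start to goal, if any."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph.get(path[-1], set()) - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        return []

    print(" -> ".join(shortest_path("you", "target")))  # you -> alice -> carol -> target

The product’s value lies less in the search itself than in the curated edges: the quality of the path depends entirely on how accurately the underlying database captures who actually knows whom.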

In all likelihood, curated products like RelSci will co-exist with social networks like LinkedIn.  Meanwhile, RelSci is off to a fast start.  The company is targeting law firms, accounting firms, consulting firms, and others that do high-level prospecting, and already claims to have 1,000 users from 175 companies since its launch at the end of February.



Firestorm Over Elsevier Acquisition of Mendeley

Elsevier’s long-rumored acquisition of Mendeley was finally announced a week ago, touching off a reaction that demonstrates the ongoing tension between large academic publishers and their users, as well as the pitfalls of open, networked communities.  For Elsevier, Mendeley is a strategic gem and well worth the unofficially-reported $70 million price.  Mendeley was launched in 2009 by a group of PhD students as an open, cloud-based system for academic researchers to download, annotate, and manage the articles and papers they use for their research, and also to manage their references – the articles and papers they cite in their own published works.  Reference management is a fundamental part of the research process and had been served for many years by software products, such as EndNote and Reference Manager, owned by Thomson Reuters.  Mendeley has caught on quickly because its cloud-based architecture combines reference management with social networking.  For example, users can share their lists of citations, search for potential collaborators, share their own professional information and papers, and develop applications that tap into the content users store in the system – features that scholars have found very compelling.

Another appeal of Mendeley is that it was designed and run by scholars and wasn’t beholden to any publishers or large corporate interests.  And that’s where the firestorm starts.  Since the announced acquisition, many Mendeley users have spoken out in harsh terms about both Elsevier and Mendeley.  Using terms like “evil empire,” “repulsive,” and “sellout,” many users have taken to blogs and Twitter to denounce the acquisition, and many, including some high-profile academics, have said that they will no longer use Mendeley.  For some critics it’s a matter of principle; others say they are also concerned about trusting their data (i.e., their personal lists of most-important articles) to Elsevier.

Critics may find it more difficult to leave than their protests suggest.  Mendeley already has over 2 million users in 180 countries, and to the extent that these users take advantage of the open, social aspects of the platform, individual users will find it hard to exile themselves.  Furthermore, Mendeley, which was built on a freemium model, has been moving from individual memberships to institutional memberships that cover entire universities or at least whole departments.  While these efforts are still early, institutional memberships make it harder for individual scholars to opt out when their colleagues are all using a common platform.  Elsevier brings an extensive worldwide institutional salesforce that can help accelerate institutional adoption.  Finally, separatists have few alternatives.  Mendeley’s competitors, such as ResearchGate and Zotero, offer similar functionality, but they have much smaller communities of users and lack Mendeley’s market momentum.  It’s not unprecedented for new online services to overcome the head start of established players (witness Google vs. incumbent search engines), but services that have successfully exploited network effects are much more difficult to overtake.

