Looking Beyond Bitcoin

In "Beyond the Bitcoin Bubble" in the New York Times Magazine, journalist and entrepreneur Steven Johnson masterfully explains the potential of the blockchain to fundamentally reshape the internet, break the stranglehold of mega-players like Google, Facebook, and Amazon, and return the internet to its decentralized and democratic roots.  The key is the current wave of development of new protocols facilitating openness and decentralization.  Whether or not cryptocurrencies survive, their greater lasting impact may be the underlying blockchain technologies, which could power an almost unlimited set of new applications that operate outside the control of any central authority or company.



Washington Post Strategy Takes Page from Amazon Web Services

Ever since Jeff Bezos purchased the Washington Post Company in 2013, there has been much speculation on how he intends to put the newspaper on a more self-sustaining economic footing in the face of the existential challenges confronting the entire industry.  WaPo's recent announcement of its second year of profitability sheds some light on its strategy.  In a move that strongly resembles Amazon's growth from an ecommerce destination into the largest provider of cloud computing services, the Post has become a software provider to other publishers through its Arc platform, a suite of content management and monetization tools.  Built initially for the Post's own newsroom, circulation, and advertising purposes, Arc has been successfully licensed to over 50 other publishers in the US and abroad.  Like Amazon Web Services (AWS), Arc is operated as a separate business and clearly benefits from the Post's access to technology expertise, reputation, and scale.  While AWS generated an estimated $14 billion in revenue in 2017, Arc's CEO has predicted that it could eventually reach $100 million.  At rates ranging from $10,000 to $150,000 per month, Arc has a long way to go (at an average of, say, $50,000 per month, $100 million would imply roughly 170 client publishers), but the math could easily work if the company reaches beyond newspapers to other types of publishers.  And at $100 million, Arc would more than cover the losses that might be sustained by running a first-class newspaper.



How Data and Software Businesses are Rescuing Hearst

There are some notable pearls about Hearst's evolving business in CEO Steven R. Swartz's year-end letter to employees. Chief among them is the company's continued expansion of its Business Media group, which provides data and software to the healthcare, financial, automotive, and aviation industries. Despite flat revenue of $10.8 billion, Hearst was able to achieve record profits, thanks to the Business Media group, which accounted for 28% of total profit, a three-fold increase over the past decade. Sales of some investments also helped, though by how much is not disclosed. Hearst has been pursuing diversification into business information and data for decades, slowly and steadily making acquisitions, and Swartz's letter notes disappointment that the company was unable to make any acquisitions in this space in 2017. Nevertheless, the success of its diversification is underscored by Swartz's candid appraisal of the challenges facing Hearst's TV, magazine, and newspaper businesses in the age of Google, Facebook, et al.: "2017 was a great year to be a consumer of media products but less so to be a provider of that content." Hearst isn't the only media company to have pursued business information. Thomson (prior to its merger with Reuters) divested its sizable newspaper holdings to successfully pursue financial, legal, and other business information. The UK's Daily Mail Group, noted for its newspapers, has over several decades built a portfolio of information businesses serving the insurance, property, energy, education, and finance sectors that now accounts for half of its revenues and two-thirds of its profits. Regardless of whether better times return for Hearst's consumer media, business data and software appear destined to grow in overall importance at the company.



A Tale of New Standards

The US healthcare system is going through a wrenching change as it nears the deadline for implementing ICD-10, the latest version of its standard classification for medical diagnoses. This story illustrates the difficulties of establishing industry standards even when they have enormous long-term benefits. Officially known as the International Classification of Diseases, the standard has been in use since the 1960s, but ICD-10 represents a quantum leap in comprehensiveness and complexity. The current version (ICD-9) contains approximately 13,000 codes, whereas the new version contains more than five times as many. For inpatient hospital procedures, there will be 87,000 codes as compared to just 4,000 under the current system. A single disease or injury code may now carry additional elements describing its cause, severity, or anatomic site.
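To make that granularity concrete, here is a minimal sketch in Python of how a single ICD-10-CM code packs those descriptive elements into its characters. The sample code S52.501A (unspecified fracture of the lower end of the right radius, initial encounter) matches the forearm example below; the parsing is a simplified illustration, not a substitute for the official code set.

```python
# Illustrative decomposition of an ICD-10-CM code's structure.  Real codes
# vary from 3 to 7 characters; this sketch assumes a full 7-character code
# and is not an official parser.

def parse_icd10cm(code: str) -> dict:
    """Split a 7-character ICD-10-CM code into its structural parts."""
    compact = code.replace(".", "")
    return {
        "category":  compact[:3],   # e.g., "S52" = fracture of forearm
        "detail":    compact[3:6],  # anatomic site, laterality, severity
        "extension": compact[6:],   # e.g., "A" = initial encounter
    }

# Far more detail than an old ICD-9-style "broken forearm" entry:
print(parse_icd10cm("S52.501A"))
# {'category': 'S52', 'detail': '501', 'extension': 'A'}
```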

The changeover is a major headache for physicians, hospitals, and anyone else who bills insurance companies or government payers, such as Medicare and Medicaid. Most other industrialized countries have already implemented the standard, but in the US, the government delayed the original 2013 implementation deadline several times. The balkanized structure of US healthcare has been one of the main impediments. Hundreds of millions of dollars have been spent bringing billing systems up to date and, more importantly, retraining personnel on how to code medical bills under the new schema. Among other things, the new system requires that coders know more human anatomy. For example, a condition previously coded simply as "broken forearm" must now be coded more precisely as "fracture of lower end of radius." Bills that are miscoded may be rejected by insurance companies, so physicians and hospitals worry that rejections will delay payment, driving some to obtain lines of credit to tide them over. For patients, miscoded paperwork could delay their insurance companies' approval of treatment.

Against these challenges stand the long-term benefits of new standards. Industries that implement standards gain opportunities for more transparency and new insights from applying analytics to a richer database. In this case, payers, such as insurance companies and Medicare/Medicaid, will be able to get a much more precise understanding of what they are actually paying for. Similarly, hospitals and healthcare networks will gain a new tool for understanding the economics of treating patients. The impact of ICD-10 goes beyond the codes themselves. The government is requiring that the coding scheme be applied to virtually all patients, whereas ICD-9 applied just to Medicare and Medicaid patients, only about one-tenth of all patients. The resulting database of patient illnesses and injuries will provide a broader and more representative picture of healthcare in our country.



Why Win-Loss Analysis Belongs in Your Diagnostic Arsenal

Sports teams use game films to review their performance, understand their strengths and weaknesses, and identify ways to improve. Business-to-business vendors rarely have the same opportunity because their sales processes take place in a decentralized environment where such scrutiny is seldom possible. Ask a CEO or VP of Sales why they really won or lost each of their last 50 customers, and you probably will not get an informed explanation. One problem is that because sales result from a combination of factors — a compelling product, offer, and engagement with the customer — it is often difficult to identify which elements really drove (or killed) the sale.

Perhaps the most vexing obstacle to understanding why customers bought or did not buy is that customers often do not tell the full truth. Especially when they have decided not to buy, customers look for an easy way to shut down the conversation without offering any ammunition a vendor could use to re-open the sales process. For management, the true situation becomes even more obscured by the distortion that occurs when sales forces self-report an outcome, sometimes explaining it in ways that oversimplify the causes ("It was a budget problem") or deflect blame ("Our product was too expensive").

Win-loss analyses can cut through this fog by providing an objective, in-depth understanding of the reasons that current or potential customers have selected specific products or services. While sometimes assumed to focus primarily on elements of the sales process, an effective win-loss analysis should be a broad-based assessment that considers all internal as well as external factors affecting a sale: product capabilities and underlying technology; pricing and other commercial terms; sales processes; technical and end-user support; competitive alternatives; and customer decision-making processes.

Learn more and read case studies in our Guide to Successful Win-Loss Analysis.



Retailing and the Internet of You

Walk into a retail store and you may be vaguely aware that you are being counted. Many retailers have long used systems from companies such as Irisys, Nomi, Sensormatic, and ShopperTrak to measure foot traffic entering their stores by time of day, but the growing sophistication of traffic-counting in shaping the customer experience is an evolving story of information aggregation, integration, and scale. The story starts with the counting devices themselves, typically mounted over doorways, which have become sophisticated enough that some can distinguish adults from children, or people from baby carriages. After all, retailers are interested in the people carrying the wallets. Unfortunately, knowing how many shoppers entered a store by time of day is not very useful on its own. What retailers really want to know is conversion: the percentage of shoppers who actually made a purchase, and the size of those purchases. To analyze conversion, retailers marry their traffic counts to their cash-register data. One of the key factors affecting conversion is the availability of sales help. Many of us have had the experience of walking into a store, taking a quick look, and starting to make a U-turn to leave when a salesperson intercepts us and asks if we need help. That conversation can often result in a sale that otherwise would have been lost. Therefore, retailers now integrate traffic data into their scheduling systems to ensure that they have adequate staffing for projected traffic levels at different times of day (and are not overspending on staff when it is not needed).
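As a simple illustration of that conversion math, the short Python sketch below marries hourly door counts to cash-register transaction counts. The numbers and field names are hypothetical, not drawn from any vendor's actual system.

```python
# Hypothetical hourly door counts and point-of-sale transaction counts.
hourly_traffic = {"10:00": 120, "11:00": 180, "12:00": 260}
hourly_sales   = {"10:00": 18,  "11:00": 32,  "12:00": 39}

# Conversion = transactions / visitors, computed hour by hour.
for hour, visitors in sorted(hourly_traffic.items()):
    transactions = hourly_sales.get(hour, 0)
    conversion = transactions / visitors if visitors else 0.0
    print(f"{hour}: {visitors:>4} visitors, {transactions:>3} sales, "
          f"conversion {conversion:.1%}")

# 10:00:  120 visitors,  18 sales, conversion 15.0%
# 11:00:  180 visitors,  32 sales, conversion 17.8%
# 12:00:  260 visitors,  39 sales, conversion 15.0%
```

A dip in conversion during a high-traffic hour is exactly the signal that feeds the staffing decision described above.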

Benchmarking is one more step in the evolution of retail traffic data. Retailers now use traffic, conversion, and staffing data to compare performance across their stores and identify ways to improve their laggards. Some vendors, such as ShopperTrak, now have large enough installed bases to benchmark a retailer against a peer group of stores in the same market (anonymously, of course). For example, a mobile phone retailer might want to compare traffic at its stores in downtown Chicago to other mobile phone stores in the same area. ShopperTrak now also sells market indices that track foot traffic in various retail segments, such as apparel, shoes, and phones, which can be used by investors as early indicators of retail trends. FootFall, a UK-based player, provides similar indices of retail traffic for various countries.

The lesson here, as in other information businesses, is that aggregation, integration, and scale increase the utility and value of simple information. Think about that on your trip to the store.



The New Face of Crowdsourcing

Crowdsourcing is evolving from exotic to mainstream as it becomes increasingly incorporated into the operations of major information companies, such as Thomson Reuters and Bloomberg, to build and enhance their content collections. In the first phase of crowdsourced labor, services like Amazon's Mechanical Turk sprang up as marketplaces for relatively low-skilled workers to do simple tasks at pennies per task. These services demonstrated the potential value of crowdsourced labor but lacked the mechanisms to make it feasible on an industrial scale.

Now a new generation of service providers offers tools that enable large companies to take advantage of crowd labor more easily within their content operations. One of these players, WorkFusion, a company founded out of MIT research, illustrates the larger potential of crowdsourcing.

WorkFusion provides a platform in which crowdsourced labor is part of a larger toolkit for managing internal as well as external sources of talent globally (e.g., Amazon's Mechanical Turk, Elance/oDesk, and uSamp, among many others). Using the WorkFusion platform, companies can define a project by specifying each step, and then take advantage of both crowd labor and tools from WorkFusion's library to automate some manual tasks. One of WorkFusion's potential strengths is that it can apply statistical techniques to the work executed by crowd labor to analyze which workers are best suited to which types of tasks. It can also use its scale of pattern data to analyze work steps and rapidly create applications that replace some human labor with automation. WorkFusion's approach has attracted customers from financial services, ecommerce, healthcare, and consumer packaged goods.
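To illustrate the kind of statistical matching described above, here is a hedged Python sketch that routes each task type to the worker with the best observed accuracy. The data, names, and scoring rule are invented for illustration; WorkFusion's actual techniques are proprietary and certainly more sophisticated.

```python
from collections import defaultdict

# Hypothetical history of (task_type, worker, was_correct) observations.
history = [
    ("tag_image",     "alice", True),  ("tag_image",     "alice", True),
    ("tag_image",     "bob",   False), ("extract_price", "bob",   True),
    ("extract_price", "alice", False), ("extract_price", "bob",   True),
]

# Tally correct and total answers per (task type, worker) pair.
stats = defaultdict(lambda: [0, 0])
for task_type, worker, correct in history:
    stats[(task_type, worker)][1] += 1
    if correct:
        stats[(task_type, worker)][0] += 1

def best_worker(task_type: str) -> str:
    """Route a task to the worker with the highest observed accuracy."""
    scores = {w: right / total
              for (t, w), (right, total) in stats.items() if t == task_type}
    return max(scores, key=scores.get)

print(best_worker("tag_image"))      # alice
print(best_worker("extract_price"))  # bob
```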

This more evolved form of crowdsourcing holds the promise of giving information companies alternatives to their own internal offshore operations or outsourcing firms. Both types of offshore operations often lack the flexibility to handle peak loads and/or new types of work. Furthermore, as labor costs change in the developing world, companies have needed to seek new outsourcing relationships in countries with cheaper labor. New crowdsourcing platforms such as WorkFusion's could allow information companies to take advantage of crowd labor on a flexible basis, to accommodate peaks in their work levels and/or take on short-term projects.

As crowdsourcing becomes both cheaper and more reliable, its real value will lie not just in cutting the cost of existing content operations, but also in enabling vendors to improve their offerings in at least three ways. First, information companies will be able to collect and organize content that was previously too difficult or expensive to collect. Second, this more advanced crowdsourcing will make it possible to enhance content, such as through tagging, to an extent that was previously economically unfeasible. Third, and perhaps most powerfully, crowdsourcing will make it easier for information companies to link information in separate databases, thereby driving entirely new applications. Many information vendors today face fundamental challenges when trying to uniquely identify and normalize entities from multiple sources. (Is the company called "General Motors" the same as the company identified as "GM"? Is the company called "IDC" in one source the same as the "IDC" in another source?)
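A tiny Python sketch makes that entity-resolution problem concrete. The normalization rules and similarity threshold below are simplified assumptions for illustration; production systems lean on much richer signals (addresses, identifiers, alias tables).

```python
import re
from difflib import SequenceMatcher

# Simplified normalization: lowercase, drop common legal suffixes,
# strip punctuation and whitespace.  Longer suffixes listed first.
SUFFIXES = r"\b(incorporated|inc|corporation|corp|company|co|ltd|llc)\b\.?"

def normalize(name: str) -> str:
    name = re.sub(SUFFIXES, "", name.lower())
    return re.sub(r"[^a-z0-9]", "", name)

def same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
    """Guess whether two source names refer to the same company."""
    na, nb = normalize(a), normalize(b)
    return na == nb or SequenceMatcher(None, na, nb).ratio() >= threshold

print(same_entity("Acme Corp.", "ACME Corporation"))  # True
print(same_entity("General Motors", "GM"))            # False
# Abbreviations like "GM" defeat string similarity; resolving them needs
# an alias table or human judgment, which is where crowd labor comes in.
```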

Today, crowdsourcing tends to be restricted to simple, repetitive tasks. But more advanced crowdsourcing from companies like WorkFusion shows the potential for it to play a larger, more valuable role.



The Future of MOOCs

I recently heard a great talk about the future of education by Sebastian Thrun, one of the founders of online course provider Udacity.  Thrun is a Stanford professor who made headlines when 160,000 students from 195 countries signed up for his MOOC on artificial intelligence.  Whether correct or not – and it’s mostly too early to know – Thrun offers compelling views, many based on his experience teaching online courses.  You can listen here to the entire one-hour talk, which he delivered to the Commonwealth Club of California, America’s oldest and largest public affairs forum.

Although "only" 23,000 of the 160,000 students who registered actually completed his artificial intelligence course, Thrun points out that this number exceeds Stanford's entire student body.  Nevertheless, Thrun concedes that the typical 3% retention rate for MOOCs is a problem.  More interesting is that the top 413 students in his online course achieved higher grades than any of the 200 Stanford students who completed it.

Thrun is excited by online courses as a way to change education’s delivery model — having a learned person talk to other people synchronously.  That model, which hasn’t changed in a thousand years, might have made sense before recording technology, but it isn’t effective in an age of rapid changes in knowledge.  Nor does it address the differences in the way that individuals learn.  The traditional education model forces everyone to learn together at the same speed.  While smaller classes, especially those that group similar students, may work better, they are inherently uneconomical. The strength of online education is that students can work at their own pace, review instructional materials as many times as they need, and be assessed at any time rather than on the same day.  Thrun points out that this is the model of videogames, which allow people to progress at their own speed, get constant feedback and assessment, and get rewarded in points, recognition, and satisfaction when they get the right answer or master an element of the game.  Similarly, the goal of education, Thrun says, should be mastery, not timing.

The biggest revolution in learning, says Thrun, is not MOOCs, but Google and Wikipedia because they have turned us into on-demand learners.  This form of learning is profoundly different from conventional education, which imposes highly-curated content and an exact path for how students learn.

The quality of students graduating from America’s top colleges and universities is unmatched in the world, but Thrun says, these elite institutions graduate the best students because they admit the best students.  Unlike manufacturing, our educational system does not assess the value of finished products versus the value of the raw materials.  Furthermore, he says, these institutions are extremely exclusive and expensive.  Online education, he argues, can increase access to knowledge beyond the few high-achieving students fortunate enough to be able to attend elite institutions.  He also argues that online courses will create a more transparent system in which to judge the quality of individual professors.

Thrun is especially bullish on the potential positive impact of online education on community colleges.  In California, which has the largest community college system in the country, nearly half a million students are wait-listed because the state lacks funds for classroom space.  Thrun believes that online education could help address that problem, as well as the fact that 60% of incoming state university students fail the entrance exams and must go into remedial classes.  Furthermore, 40,000 college students in the California state system fail and must retake classes each year, at enormous financial cost to themselves and to the educational system.  With that in mind, Udacity is piloting a college readiness program in three subjects for high school students.  The price of the courses is just 10% of that of a comparable remedial course in the California university system.  Thrun points out, however, that Udacity's solution for these students is more than just online classes; it also includes intensive mentoring.

Thrun acknowledges that online education must still resolve issues, including those related to credit and cheating, although he feels that these aren’t as difficult as those involving the acceptance of online education by students and society at large.  America, which was built on trust in the individual, Thrun argues, must apply those values to education and trust the individual to excel. The result, he predicts, will be that many students left behind by the rigidity of traditional education will become engaged learners in the new online environment.



Is Acxiom Really Being Transparent with its Consumer Data?

Amid some fanfare, consumer data powerhouse Acxiom just launched AboutTheData.com, a service that lets consumers look at the data the company collects about them.  The company's official rationale, as spelled out on its website and reiterated in a Sunday New York Times article, is to be more transparent in response to the public's growing concerns about the personal data that companies and the government collect.  But the cynic in me wonders whether AboutTheData is not so much about transparency as about crowdsourcing.  Because the service lets individuals edit Acxiom's information about them, maybe what Acxiom really wants is for all of us to serve as editors and improve the completeness and accuracy of its database.

I used AboutTheData to look at Acxiom's data about me and my wife.  I was surprised at how much data was either missing or inaccurate, given that we are middle-aged folks who have left a trail of publicly available information throughout our adult lives.  For example, Acxiom's records say that we are each single.  (We've been married for over 30 years.)  Acxiom has no information on our homeownership status, even though we have owned five homes over the course of 30 years, including our current residence for more than 10 years.  Acxiom's individual and household income data was relatively accurate for my wife, but completely off the mark for me.  Acxiom doesn't know my political party (I've been a registered Democrat for over 30 years and vote in every election), but does know my wife's party affiliation.  Acxiom doesn't know that I own a car (I've owned my current car for 10 years), though it does know that my wife has an automobile insurance policy.  Acxiom believes that my wife's ethnicity is French.  (At best, she's a nice Jewish Francophile.)  Among her household interests, Acxiom lists cooking, home decorating, and reading (all relatively true), but also computers, sports memorabilia, and reading magazines (none of which is true).  My interests, according to Acxiom, include boating, sailing, and boat ownership.  (I have never been interested in any of these activities and get seasick very easily.)  Acxiom lists home furnishing, hunting, and shooting as other interests, none of which interests me in the least.  It also lists me as a "mail order responder."  (I do shop online frequently, but almost never in response to a catalog.  In fact, I've made it a personal crusade to remove our household from the mailing list of any catalog that arrives in the mail.)  According to Acxiom, I am interested in magazine reading.  (I subscribe to almost no magazines.)  Pretty much the only information Acxiom has gotten right, other than my name, address, and birthday, is that I am interested in cooking (true) and own a cell phone.  (Who doesn't?)

I am interested in other people’s experiences with AboutTheData.com.  Check out your own data on the site and let me know how complete and accurate it is.



How to Fix a Leaky Heart Valve

Journal publishers have long allowed authors to submit videos to supplement their articles, but an emerging model makes video the main attraction.  Last week Elsevier launched a video journal for gastrointestinal endoscopy, joining a small group of other publishers with video journals in fields such as orthopedics, physics, and psychotherapy.  Most video journal publishers also publish conventional written articles as companions to the videos.  Like print journals, video journals use various business models.  Most, but not all, of the journals are peer reviewed, and most are subscription-based.  Unlike most print journals, however, video journals publish on a continuous basis as new "articles" are produced, rather than on a fixed schedule.

Another notable difference is the production process.  When a publisher accepts an author’s submission, the publisher then collaborates with the author to develop a script and then shoots, edits, and produces a professional quality video.  Despite the cost of video production, the economics of video journals looks promising, at least in part because they are born-digital and have no printing or physical distribution costs.  The first video journal, the Journal of Visualized Experiments (JoVE), became profitable within a year of its launch in 2008, and revenues were reported to have grown from $3 million to $5 million from 2010 to 2011.  The company now publishes over 60 articles per month, has over 550 subscribing institutions, and receives around 170,000 unique monthly visitors.  JoVE has also demonstrated that its model is scalable:  It has steadily expanded into different domains (currently eight and growing), including neuroscience, bio-engineering, chemistry, clinical medicine, and behavior, among others.

Demand for video journals appears especially strong outside the US, probably because many parts of the world lack access to live training in the latest laboratory or clinical techniques.  JoVE, for example, gets about two-thirds of its traffic from overseas and just one-third from the US, a proportion that would typically be reversed for most professional journals published in the US.

Because video is so effective at communicating how-to knowledge, publishers in a wide range of business, professional, scientific, clinical, and industrial domains are prime candidates to start video journals.  Publishers themselves do not need to make large investments, as they can easily contract out to video production firms.  At the same time, the change to a new medium may open the market to new players.  For example, the Video Journal of Ophthalmology and the Video Journal of Orthopaedics are published by companies with no experience in journal publishing.  However, these newcomers will have to demonstrate that their content has the legitimacy and quality of content produced through traditional publishers' peer-review processes.

Video journals are likely to lead to the creation of educational materials and reference databases.  Across education, compilations of videos are growing rapidly as supplements to lectures and live demonstrations, especially because students can view them over and over on their own time.  This ability to apply video journal content to education could open new revenue opportunities for journal publishers to expand from their traditional research market focus.

