Algorithms and Litigation Risk in the Mobile Economy
Advances in digital technology have made algorithms increasingly common tools for automating business activities such as pricing, programmatic advertising, offer personalization, and content curation.
Sophisticated algorithms underlie the artificial intelligence (AI) and machine learning (ML) applications that power digital businesses, and they drive many of the thousands of decisions that affect consumers and competitors every day. As a result, competition authorities and litigators around the world are increasingly interested in understanding the impact algorithms have on markets and competition.
To provide more insight into the litigation and antitrust risks associated with automated algorithmic decision making, Analysis Group partners Rebecca Kirk Fair and Mark Gustafson spoke with Analysis Group affiliate Anindya Ghose, the Heinz Riehl Chair Professor of Business at the NYU Stern School of Business. Professor Ghose has done pioneering work in many areas of the digital economy, including digital marketing, data science, business analytics, and data privacy, and is the author of the book Tap: Unlocking the Mobile Economy.
Anindya Ghose: Heinz Riehl Chair Professor of Business; Leonard Stern Faculty Scholar; Director, Masters of Business Analytics Program, New York University Stern School of Business
Q: How are algorithms being employed in digital marketplaces today?
Professor Ghose: In my work on the mobile economy, I have analyzed in particular the drivers of consumer behavior, factors that impact both online and offline purchase decisions, and how businesses use technology and data to tailor their offers to reach the right consumers at the right moment and at the right place. In digital markets, these types of decisions require businesses to process millions, or even billions, of data points collected from a vast array of sources.
Increased automation coupled with increased availability of extremely large amounts of data means that, in many instances, human-only decision making is no longer sustainable, at least not at the same scale and with the same level of efficiency. The sheer volume of individualized data generated by smartphones, wearables, internet of things devices, internet platforms, and our digital footprints requires the use of sophisticated AI and ML algorithms that can automate data processing and analysis.
Essentially, algorithms are programmed and “trained” by data scientists so that they can eventually learn on their own from massive volumes of data – hence, “machine learning.” And as algorithms have become prevalent across most of our daily activities, they have grown exponentially in complexity.
Q: What impact are algorithms having?
Professor Ghose: Algorithms can accomplish tasks that could never have been achieved using human labor. From creating a platform for hailing rides, to optimizing our choices in online shopping, to prioritizing information on social media, to targeted advertising on digital platforms, algorithms allow us to sort through an overwhelming number of options and possibilities, and quickly identify the most relevant ones. Algorithms might be used, for example, to match a seller with a buyer in an e-commerce marketplace, match two individuals on a dating website, match an employer with a potential employee, or match an advertiser with an online shopper.
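The matching logic Professor Ghose describes can be illustrated with a deliberately simplified sketch. This is a hypothetical toy matcher, not any platform's actual system; real marketplace matching involves ranking models, auctions, and far richer signals.

```python
# Toy marketplace matcher (illustrative only): pair each buyer with the
# first available seller whose catalog contains the item the buyer wants.
buyers = {"b1": "running shoes", "b2": "hiking boots"}
sellers = {"s1": ["running shoes", "sandals"], "s2": ["hiking boots"]}

def match(buyers, sellers):
    pairs = {}
    taken = set()  # each seller is matched at most once in this sketch
    for buyer, want in buyers.items():
        for seller, catalog in sellers.items():
            if seller not in taken and want in catalog:
                pairs[buyer] = seller
                taken.add(seller)
                break
    return pairs

print(match(buyers, sellers))  # {'b1': 's1', 'b2': 's2'}
```

The same greedy pattern generalizes to matching advertisers with shoppers or employers with candidates, with "fit" replaced by a learned relevance score.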
Q: The press and competition authorities alike have been giving a lot of attention to the ways that algorithms influence our decisions. For example, we have seen that digital business models have been investigated by the DOJ [US Department of Justice], DG Comp in Europe [Directorate-General for Competition], and the Australian Competition and Consumer Commission. They have also been the subject of intense debate among antitrust practitioners and academics. What have been the primary concerns in these debates?
Professor Ghose: These types of digital business models rely on the ability to collect, or at least have access to, highly individualized data on a scale previously unimaginable, including transaction records, online behavior, and personally identifiable information such as name and address. This access to data allows businesses to provide more personalized and better targeted offers to consumers.
At the same time, these activities can raise questions about privacy and an individual’s control over his or her personal information, such as biometric data, transaction data, behavioral data, and so on. How much personal data are firms collecting without the individual’s knowledge, and how at risk are these data given the current connectedness of the internet? How far does a business’s responsibility extend for keeping the data it uses secure? How are businesses using the data to create competitive advantage, and are they doing it using fair means?
These are the kinds of questions that are beginning to be sorted out in courtrooms, by legislative bodies, and in front of regulatory authorities around the world.
Q: Are privacy rights the only issue?
Professor Ghose: Not at all. Beyond the privacy issues, the use of algorithms can also have unintended consequences. For example, because AI algorithms are often designed to “mimic” the real world, any conscious or unconscious biases introduced by humans into the datasets used to train the algorithms will be reflected in future algorithmic decisions.
This poses the risk of creating situations that can lead either to further discrimination or to the favoring of selected groups in the marketplace. For example, there have been instances where automated resume screening, teacher evaluations, mortgage financing decisions, or criminal risk assessment resulted in business decisions that were alleged to have perpetuated existing racial or gender biases.
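How historical bias flows through to automated decisions can be shown with a minimal sketch. The data, groups, and screening rule below are entirely hypothetical; the point is only that a model fit to biased labels reproduces the bias.

```python
from collections import defaultdict

# Hypothetical hiring records: (group, qualified, hired).
# The "hired" labels embed a past human bias against group B:
# equally qualified B applicants were hired less often.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

# A naive "model" that simply learns each group's historical hire rate
# and screens future applicants against it.
hire_rate = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
for group, _, hired in history:
    hire_rate[group][1] += 1
    hire_rate[group][0] += hired

def screen(group, threshold=0.5):
    hired, total = hire_rate[group]
    return hired / total >= threshold

# Equally qualified applicants are now treated differently:
print(screen("A"))  # True  -- group A's historical rate is 3/4
print(screen("B"))  # False -- group B's historical rate is 1/4
```

No one programmed the rule to discriminate; the disparity was inherited entirely from the training data, which is exactly the mechanism at issue in the screening and risk-assessment examples above.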
Q: How does the use of algorithms translate into competition concerns?
Professor Ghose: Algorithms give rise to competition concerns particularly when they take the form of a “black box.” Many algorithms are not explainable or interpretable by design. In addition, digital platforms like Google or Facebook may not be in a position to disclose how their algorithms actually work, because such disclosure would invite bad actors to manipulate and game the system. These platforms also operate in an adversarial environment in which increased transparency can be used against them – a tension known as the transparency paradox.
These kinds of restrictions on transparency have raised questions about whether firms might use algorithms to reduce or restrain competition rather than enhance consumer welfare. If you can’t observe the decision-making process, it potentially becomes more difficult to identify anticompetitive behavior.
So a key question is whether digital platforms allow consumers to truly have free choice among suppliers’ offerings, or whether these companies instead are engaging in exclusionary conduct or collusion, either explicit or implicit. Are consumers allowed to choose from the set of all possible products, or are they only shown a restricted choice set offered to them either on the home page or the search engine page (which is referred to as the digital shelf space)?
For example, businesses with access to personal information or purchase history could potentially use dynamic pricing and dynamic assortments to maximize profits at the cost of consumer welfare. But the other side of the coin is that showing a curated set of content or products – for example, through recommender systems or personalized advertising – may actually increase, rather than diminish, consumer welfare because it reduces consumers’ search costs.
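The dynamic-pricing mechanism mentioned above can be sketched in a few lines. This is a hypothetical toy rule (the uplift factors and signals are invented for illustration), not a description of any firm's actual pricing system.

```python
def dynamic_price(base_price, purchase_history, willingness_signal):
    """Toy dynamic-pricing rule (illustrative only): nudge the price up
    for consumers whose history suggests higher willingness to pay."""
    # Hypothetical uplift: 2% per past purchase, capped at ten purchases.
    loyalty_uplift = 0.02 * min(len(purchase_history), 10)
    return round(base_price * (1 + loyalty_uplift + willingness_signal), 2)

# Two consumers can see different prices for the same product:
print(dynamic_price(100.0, ["p1"] * 12, 0.05))  # 125.0 -- frequent buyer
print(dynamic_price(100.0, ["p1"], 0.0))        # 102.0 -- new buyer
```

Whether such personalization harms or helps consumers is precisely the gray area Professor Ghose describes: the same data that enables price targeting also enables the curated recommendations that lower search costs.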
"In today’s digital economy, the notion of benefits and costs to consumers is no longer a black and white problem; it comes with many shades of gray."
Q: From an economist’s viewpoint, what questions do you think we should be asking to determine whether these pricing algorithms are a net positive or a risk for consumers?
Professor Ghose: The question becomes, what is the line between unilateral price setting and collusion? We need to keep in mind that unilateral pricing algorithms are not automatically a cause for concern. A firm seeking to optimize its profitability by analyzing internal sales data, loyalty program data, or other internally generated data may very well rely on algorithms to set prices. Similarly, firms that are re-optimizing their pricing for a vertically integrated retail site are following practices that retailers and manufacturers have been employing for decades using less sophisticated methods.
Even when competitive pricing is considered, in many instances there is no immediate antitrust concern. For example, it is not uncommon for a gas station on one side of the street to monitor and respond to the price set by a competitor across the street, without engaging in any anticompetitive conduct.
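The gas-station example amounts to a simple unilateral response rule, which can be sketched as follows. The cost floor and one-cent undercut are hypothetical parameters chosen for illustration.

```python
def respond_to_rival(my_cost, rival_price, min_margin=0.05):
    """Unilateral price response (illustrative): undercut the observed
    rival price by one cent, but never go below cost plus a minimum margin."""
    floor = round(my_cost * (1 + min_margin), 2)
    return max(round(rival_price - 0.01, 2), floor)

print(respond_to_rival(my_cost=2.80, rival_price=3.09))  # 3.08 -- undercut
print(respond_to_rival(my_cost=2.80, rival_price=2.85))  # 2.94 -- floor binds
```

The rule reacts only to a publicly observable price using the firm's own cost data, which is why, as Professor Ghose notes, this kind of monitoring by itself raises no immediate antitrust concern.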
Concerns over implicit or tacit collusion, however, could arise if competitors’ algorithms overlap, or if third-party algorithms supply consumer data to competing companies. The abuse of algorithmic pricing could theoretically also lead to perfect price discrimination that eliminates all consumer surplus. In one of the first cases investigated by the DOJ, in 2015, an online seller of wall posters was fined for using a pricing algorithm to fix prices on Amazon.
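The welfare loss from perfect (first-degree) price discrimination can be made concrete with a small worked example. The willingness-to-pay figures below are invented for illustration.

```python
# Toy illustration: under perfect price discrimination each consumer is
# charged exactly their willingness to pay (WTP), so consumer surplus
# (WTP minus price paid) collapses to zero.
willingness_to_pay = [12.0, 9.5, 7.0, 5.0]
uniform_price = 7.0  # a single posted market price

surplus_uniform = sum(
    w - uniform_price for w in willingness_to_pay if w >= uniform_price
)
surplus_discriminated = sum(w - w for w in willingness_to_pay)  # price == WTP

print(surplus_uniform)        # 7.5 -- consumers keep some surplus
print(surplus_discriminated)  # 0.0 -- all surplus captured by the seller
```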
Q: Under what circumstances might a firm’s ability to steer customers to a particular supplier or product be a concern to consumers or competitors?
Professor Ghose: In general, as in traditional markets, the legality of a firm’s competition processes or contractual arrangements in the mobile marketplace varies with the firm’s market position. For example, consider market share and competition for a company like Amazon. Should antitrust regulators be looking at Amazon as an e-commerce company? Or should they be looking at Amazon as a retail company?
The analysis of antitrust and competition issues such as market shares is not as straightforward for the “new economy” firms as it used to be for “old economy” firms such as AT&T and the Baby Bells. In the context of e-commerce, Amazon does have the largest market share in the US. In the context of general retail, however, Amazon has a tiny market share since e-commerce in the US accounts for only 12% of all retail.
And despite Amazon’s leadership in online retail, Walmart is more than double Amazon’s size. Specialty retailers, like Walgreens and CVS in the pharmacy space and Kroger in groceries, also dwarf Amazon’s presence in their respective categories.
So this is a very complex and nuanced problem. When assessing the potential competitive effects of an algorithm, one must not only consider the nature of the algorithm – what it does, and how it does it – but also the competitive position of the firm utilizing it. Is the market configured in such a way that the use of an algorithm might result in an abuse of dominance or market power?
Q: As an economist, what approach do you think regulators and litigators should pursue to analyze these more nuanced cases?
Professor Ghose: In today’s digital economy, the notion of benefits and costs to consumers is no longer a black and white problem; it comes with many shades of gray. Was an algorithm designed on purpose to behave in an anticompetitive manner? Or did it simply evolve and learn on its own over time how to create value for individual consumers by developing new services and marketplaces or improving the quality of existing ones?
Thus, anybody who is concerned about market power in digital markets, including regulators and litigators, will need to examine the impact of marketplace design on both buyers and sellers, and consider how value gets distributed among platforms, buyers, and sellers. ■