“Show or Tell? Improving Agent Decision Making in a Tanzanian Mobile Money Field Experiment” and “When Transparency Fails: Bias and Financial Incentives in Ridesharing Platforms”
Creation date: 2019-04-05
- Main contributor: Christopher Parker
Show or Tell? Improving Agent Decision Making in a Tanzanian Mobile Money Field Experiment: When workers make operational decisions, the firm's global knowledge and the worker's domain-specific knowledge complement each other, and oftentimes workers hold the final decision-making power. Two key decisions a firm makes when designing systems to support these workers are: 1) what guidance to deliver, and 2) what kind of training (if any) to provide. We examine these choices in the context of mobile money platforms, systems that allow users in developing economies to deposit, transfer, and withdraw money using their mobile phones. Mobile money has grown quickly, but high stockout rates of currency persist due to sub-optimal inventory decisions made by contracted employees (called agents). In partnership with a Tanzanian mobile money operator, we perform a randomized controlled trial with 4,771 agents over eight weeks to examine how differing types of guidance and training impact the agents' inventory management. We find that agents who are trained in person and receive an explicit, personalized, daily text message recommendation of how much electronic currency to stock are less likely to stock out. These agents are also more likely to alter their electronic currency balance on a given day (rebalance). In contrast, agents trained in person who receive only summary statistics of transaction volumes, or agents who are notified about the program but not offered in-person training, experience no change in stockouts or rebalances. We observe no evidence of learning or fatigue. Agent-level heterogeneity in the treatment effects shows that the agents who handle substantially more customer deposits than withdrawals benefit most from the intervention.

When Transparency Fails: Bias and Financial Incentives in Ridesharing Platforms: Passenger discrimination in transportation systems is a well-documented phenomenon. With the advent and success of ridesharing platforms such as Lyft, Uber, and Via, there has been hope that discrimination against under-represented minorities may be reduced. However, early evidence has suggested the existence of bias in ridesharing platforms. Several platforms responded by reducing operational transparency, removing information about the rider's gender and race from the ride request presented to drivers. However, following this change, bias may still manifest through driver cancelation after a request is accepted, at which point the rider's picture is displayed. Our primary research question is to what extent a rider's gender, race, and perceived support for lesbian, gay, bisexual, and transgender (LGBT) rights impact cancelation rates on ridesharing platforms. We investigate this through a large field experiment using a major ridesharing platform in North America. By manipulating rider names and profile pictures, we observe drivers' patterns of behavior in accepting and canceling rides. Our results confirm that bias at the ride request stage has been eliminated. However, at the cancelation stage, racial and LGBT biases persist, while biases related to gender appear to have been eliminated. We also explore whether dynamic pricing moderates (through increased pay to drivers) or exacerbates (by signaling that there are many riders, allowing drivers to be more selective) these biases. We find a moderating effect of peak pricing, with consistently less biased behavior.
IU Workshop in Methods
Dr. Christopher Parker is an Assistant Professor in the Smeal College of Business at Pennsylvania State University.
This item is accessible by: the public.