Making Big Data Smaller: Two Strategies to Analyze the Opioid Epidemic in the U.S.
- Date
2020-02-28
- Main contributor
Byungkyu (BK) Lee
- Summary
In the era of the "big data" revolution, social scientists face a new set of challenges that are more technical than theoretical. While analyzing terabyte-scale data is certainly a challenge, the analysis of big data is not just a matter of solving computational problems. Big data provides a unique opportunity to solve society's big problems if and only if it is analyzed through careful research designs and strong theoretical frameworks. This talk introduces two practical strategies for social scientists — parallel aggregation and matching — to make big data smaller, so that we can overcome technical difficulties while still making robust statistical inferences. I will illustrate these strategies through my own trial and error in analyzing large-scale medical claims data in the context of the US opioid epidemic. The talk also presents several tips for the effective management of big data.
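The talk itself does not spell out the strategies in this record, but the idea behind "parallel aggregation" can be sketched as follows: reduce a large record-level dataset to small per-group summaries computed independently on chunks, then merge the summaries. The record shape and field names (`prescriber_id`, `is_opioid`) below are hypothetical illustrations, not from the talk:

```python
# Hedged sketch of chunked ("parallel") aggregation: each chunk is summarized
# independently, so the per-chunk step can be farmed out to parallel workers;
# only the small summaries, not the raw records, are ever combined.
from collections import Counter
from itertools import islice

def iter_chunks(records, size):
    """Yield successive fixed-size chunks from an iterable of records."""
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

def aggregate_chunk(chunk):
    """Per-chunk summary: opioid prescription counts by prescriber ID."""
    return Counter(rec["prescriber_id"] for rec in chunk if rec["is_opioid"])

def parallel_aggregate(records, size):
    """Merge per-chunk summaries into one small result table."""
    total = Counter()
    for chunk in iter_chunks(records, size):
        total += aggregate_chunk(chunk)
    return total

# Toy claims records, fabricated purely for illustration.
claims = [
    {"prescriber_id": "A", "is_opioid": True},
    {"prescriber_id": "B", "is_opioid": False},
    {"prescriber_id": "A", "is_opioid": True},
    {"prescriber_id": "B", "is_opioid": True},
    {"prescriber_id": "A", "is_opioid": False},
]
summary = parallel_aggregate(claims, size=2)
print(summary["A"], summary["B"])  # 2 1
```

Because the merged summary is tiny relative to the raw claims, downstream statistical analysis can run on ordinary hardware; the second strategy the talk names, matching, shrinks the data differently, by keeping only comparable treated and control cases.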
- Publisher
Indiana University Workshop in Methods
- Collection
Workshop in Methods
- Unit
Social Science Research Commons
- Related Item
Accompanying materials on IUScholarWorks
- Notes
Performers
Byungkyu (BK) Lee is an Assistant Professor of Sociology at Indiana University. His research interests lie in the areas of social networks, political sociology, medical sociology, biosociology, and quantitative methods.
Access Restrictions
This item is accessible by: the public.