The practice of text mining in digital humanities is phallogocentric. Text mining, a particular kind of data mining in which predictive methods are deployed for pattern discovery in texts, is primarily focused on presupposed meanings of The Word. To determine whether or not the machine has found patterns, we begin with a "ground truth": labels that signify the presence of meaning. This work typically presupposes a binary logic between lack and excess (Derrida, Dissemination, 1981): either there is meaning in the results or there is not. Sound, in contrast, is aporetic. To mine sound is to understand that ground truth is always indeterminate.

Humanists, however, have few opportunities to use advanced technologies for analyzing sound archives. This talk describes the HiPSTAS (High Performance Sound Technologies for Access and Scholarship) Project, which is developing a research environment for humanists that uses machine learning and visualization to automate processes for analyzing sound collections. HiPSTAS engages digital literacy head on in order to invite humanists into the concerns of machine learning and sound studies. Hearing sound as digital audio means choosing filter banks, sampling rates, and compression schemes that mimic the human ear. Unless humanists know more about digital audio analysis, how can we ask whose ear we are modeling? What is audible, and to whom? Without knowing about playback parameters, how can we ask which signal is noise and which is meaningful, and to whom?

Clement concludes with a brief discussion of observations on the efficacy of using machine learning to generate data about spoken-word sound collections in the humanities.
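To make concrete how a filter bank encodes a model of the human ear, the following minimal sketch (not HiPSTAS code; all function names and parameters are illustrative) computes the center frequencies of a mel-scale filter bank. The mel scale, a standard choice in audio machine learning, spaces filters evenly in perceived pitch rather than in hertz, compressing high frequencies the way typical human hearing does:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Standard mel-scale formula: mel = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(sr: int, n_filters: int) -> list[float]:
    """Center frequencies (Hz) of n_filters filters spaced evenly
    in mel between 0 Hz and the Nyquist frequency sr / 2."""
    mel_max = hz_to_mel(sr / 2.0)
    # n_filters interior points of (n_filters + 1) equal mel steps
    step = mel_max / (n_filters + 1)
    return [mel_to_hz(step * (i + 1)) for i in range(n_filters)]

centers = mel_filter_centers(sr=44100, n_filters=26)
# Filters crowd the low frequencies, where human pitch
# discrimination is finest; the spacing grows with frequency.
print(f"first gap: {centers[1] - centers[0]:.1f} Hz, "
      f"last gap: {centers[-1] - centers[-2]:.1f} Hz")
```

Every value here is a modeling decision: the 44.1 kHz sampling rate, the number of filters, and the mel formula itself each assume a particular listener, which is precisely the question of whose ear the analysis models.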