Voice as Biomarker

'You sound a bit depressed', we might say to a friend
Not only because of what they say but how they say it
Perhaps their speech is duller than usual, tailing off between words
Lacking their usual lively intonation

There are many ways to boil a voice down into data points
Low-level spectral features, computed from snippets as short as twenty milliseconds 1
That quantify the dynamism of amplitude, frequency and energy
And those longer range syllabic aspects that human ears are tuned to
Such as pitch and intensity
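
As a rough illustration (a minimal sketch, not any particular product's pipeline), here is how such low-level features might be computed in Python: slice the signal into twenty-millisecond frames, then take the energy, peak amplitude and spectral centroid of each frame.

```python
import numpy as np

def frame_features(signal, sr=16000, frame_ms=20):
    """Low-level descriptors per 20 ms frame: short-time energy,
    peak amplitude and spectral centroid (a rough 'brightness' proxy)."""
    n = int(sr * frame_ms / 1000)                      # samples per frame
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)                # dynamism of energy
    amplitude = np.abs(frames).max(axis=1)             # dynamism of amplitude
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(n, d=1 / sr)
    centroid = (spectrum * freqs).sum(axis=1) / (spectrum.sum(axis=1) + 1e-9)
    return np.stack([energy, amplitude, centroid], axis=1)

# one second of white noise stands in for a real recording
features = frame_features(np.random.randn(16000))
print(features.shape)  # (50, 3): fifty 20 ms frames, three numbers each
```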

A voice distilled into data
Becomes the training material for machine learning algorithms
And there are many efforts being made to teach machines
To deduce our mental states from voice analysis

The bet is that the voice is a source of biomarkers
Distinctive data features that correlate to health conditions 2
Especially the emergence of mental health problems
Such as depression, PTSD, Alzheimer's and others

And of course there's the words themselves
We've already trained machines to recognise them
Thanks to the deep neural network method called long short-term memory (LSTM) 3
We can command our digital assistants to buy something on Amazon
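
For concreteness, a minimal PyTorch sketch (with invented sizes) of how an LSTM reads a sequence of per-frame voice features and emits a classification; real speech recognisers are vastly larger, but the shape of the computation is the same.

```python
import torch
import torch.nn as nn

class SpeechClassifier(nn.Module):
    """Toy sequence classifier: frames of voice features in, class logits out."""
    def __init__(self, n_features=16, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, frames, features)
        _, (h, _) = self.lstm(x)     # h: final hidden state per sequence
        return self.head(h[-1])      # logits: (batch, n_classes)

model = SpeechClassifier()
clips = torch.randn(4, 100, 16)     # 4 clips of 100 frames, 16 features each
print(model(clips).shape)           # torch.Size([4, 2])
```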

Rules-based modelling never captured the complexity of speech
But give neural networks enough examples
They will learn to parrot and predict any complex pattern
And voice data is plentiful

So perhaps machines can be trained to detect symptoms
Of different kinds of stress or distress 4
And this can be looped into an appropriate intervention
To prevent things from getting worse

As data, the features of speech become tables of numbers
Each chunk of sound becomes a row of digits
Perhaps sixteen numbers from a Fourier Transform
And others for types of intensity and rhythmicity

For machine learning to be able to learn
Each row must end in a classification: a number that tags a known diagnosis
Presented with enough labelled examples, it will produce a probabilistic model 5
That predicts the likelihood of a future speaker developing the same condition
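
A minimal sketch of that table and its use, with synthetic stand-ins for real voice chunks (the sixteen band energies and the labels below are random, so the fitted model illustrates the mechanics only, not a working diagnostic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def spectral_row(chunk, n_bands=16):
    """Boil one chunk of audio down to sixteen numbers: mean energy per band."""
    spectrum = np.abs(np.fft.rfft(chunk))
    return np.array([band.mean() for band in np.array_split(spectrum, n_bands)])

# each row: 16 spectral numbers; each label: 0 = control, 1 = known diagnosis
X = np.stack([spectral_row(rng.standard_normal(320)) for _ in range(200)])
y = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba(X[:1]))   # probability of each class for a new row
```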

It's very clever to model the hair cells in the human ear as forced damped oscillators 6
And to apply AI algorithms that learn models through backpropagation 7
But we should ask why we want machines to listen out for signs of distress
Why go to all this trouble when we could do the listening ourselves
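
For reference (this is the standard physics model, not the cited paper's specific implementation), the forced damped oscillator treats each hair cell's displacement $x$ as driven by the sound pressure $F(t)$:

$$ m\ddot{x} + c\dot{x} + kx = F(t) $$

where $m$ is the effective mass, $c$ the damping and $k$ the stiffness, so each cell responds most strongly near its resonant frequency $\omega_0 = \sqrt{k/m}$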

One reason is the rise in mental health problems
At the same time as available services are contracting 8
Bringing professional and patient together costs time and money
But we can acquire and analyse samples of speech via our network infrastructures

Machine listening offers the prospect of early intervention
Through a pervasive presence beyond anything that psychiatry could have previously imagined
Machine learning's skill at pattern finding means it can be used for prediction
As Thomas Insel says, "We are building digital smoke alarms for people with mental illness" 9

Disruption

Insel is a psychiatrist, neuroscientist and former Director of the US National Institute of Mental Health
Where he prioritised the search for a preemptive approach to psychosis
By "developing algorithms to analyze speech as an early window into the disorganization of thought" 10
He jumped ship to Alphabet to pursue a big data approach to mental health, then founded a startup called Mindstrong
Which uses smartphone data to 'transform brain health' and 'detect deterioration early' 11

The number of startups looking for traction on mental states
Through the machine analysis of voice
Suggests a restructuring of the productive forces of mental health
Such that illness will be constructed by a techno-psychiatric complex

HealthRhythms, for example, was founded by psychiatrist David Kupfer 12
Who chaired the task force that produced DSM-5, the so-called 'bible of psychiatry'
Which defines mental disorders and the diagnostic symptoms for them
The HealthRhythms app uses voice data to calculate a "log of sociability" to spot depression and anxiety 13

Sonde Health screens acoustic changes in the voice for mental health conditions
With a focus on post-natal depression and dementia
"We're trying to make this ubiquitous and universal" says the CEO 14
Their app is not just for smartphones but for any voice-based technology

Meanwhile Sharecare scans your calls and reports if you seemed anxious 15
Founder Jeff Arnold describes it as 'an emotional selfie' 16
Like Sonde Health, the company works with health insurers
While HealthRhythms' clients include pharmaceutical companies

It's hardly a surprise that Silicon Valley sees mental health as a market ripe for Uber-like disruption
Demand is rising, orthodox services are being cut, but data is more plentiful than it has ever been
There's a mental health crisis that costs economies millions
So it must be time to 'move fast and break things' 17

But as Simondon and others have tried to point out 18
The individuation of subjects, including ourselves, involves a certain technicity
Stabilising a new ensemble of AI and mental health
Will change what it is to be considered well or unwell

Samaritans Radar

There's little apparent concern among the startup-funder axis
That all this listening might silence voices
Their enthusiasm is not haunted by the story of the Samaritans Radar
When an organisation which should have known better got carried away by enthusiasm for tech surveillance

This was a Twitter app developed in 2014 by the Samaritans 19
The UK organisation which runs a 24-hour helpline for anyone feeling suicidal
You signed up for the app and it promised to send you email alerts
Whenever someone you follow on Twitter appeared to be distressed
If any of their tweets matched a list of key phrases
It invited you to get in touch with them

In engineering terms, this is light years behind the sophistication of deep learning
But it's a salutary tale about unintended impacts
Thanks to the inadequate involvement of service users in its production 20
It ignored the fact that the wrong sort of well-meaning intervention at the wrong time might actually make things worse
Or that malicious users could use the app to target and troll vulnerable people 21

Never mind the consequences of false positives
When the app misidentified someone as distressed
Or the concept of consent 22
Given that the people being assessed were not even made aware that this was happening
All riding roughshod over the basic ethical principle of 'do no harm'

Although Twitter is a nominally public space
People with mental health issues had been able to hold supportive mutual conversations
With a reasonable expectation that this wouldn't be put in a spotlight
Allowing them to reach out to others who might be experiencing similar things 23

One consequence of the Samaritans Radar was that many people with mental health issues
Who had previously found Twitter a source of mutual support
Declared their intention to withdraw
Or simply went silent

As with the sorry tale of the Samaritans Radar
Without the voices of mental health users and survivors
The hubris that goes with AI has the potential to override the Hippocratic oath

Fairness and Harm

The ubiquitous application of machine learning's predictive power
In areas with real world consequences, such as policing and the judicial system 24
Is stirring an awareness that its oracular insights
Are actually constrained by complexities that are hard to escape

The simplest of which is data bias
A programme that only knows the data it is fed
And which is only fed data containing a racist bias
Will make racist predictions 25
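
A toy demonstration of the point, with made-up data: if historical labels mark one group as 'risky' more often at the same underlying level, a model trained on those labels reproduces the skew.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute, 0 or 1
merit = rng.standard_normal(n)           # what we would like to predict

# biased historical labels: group 1 marked 'risky' more often at equal merit
label = (merit + 0.8 * group + 0.5 * rng.standard_normal(n) > 0.5).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, label)
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted 'risky' for {rate:.0%}")
# the model faithfully learns the bias baked into its training labels
```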

This should already be a red flag for our mental health listening machines
Diagnoses of mental health are already skewed with respect to race 26
People from black and minority ethnic backgrounds are disproportionately likely to be diagnosed
And the questions about why are still very much open and contested

But surely, proponents will say, one advantage of automation in general
Is to encode fairness and bypass the fickleness of human bias
To apply empirical and statistical knowledge directly
And cut through the subjective distortions of face-to-face prejudice

Certainly, as the general dangers of reproducing racism and sexism have become clear
There have been conscientious efforts from engineers in one corner of machine learning
To automate ways to de-bias datasets 27

But here's the rub
Even when you know there's the potential for discrimination
It's mathematically impossible to produce all-round fairness 28

If you're designing a parole algorithm to predict whether someone will reoffend
You can design it so that the accuracy for high-risk offenders is the same for white and black defendants
But if the overall base rates are different
There will be more false positives among black people, which can be considered a harm
Because more black people who would not go on to reoffend will be refused parole than white people 29
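
The arithmetic behind this impossibility is short. Fixing the precision and true positive rate to be equal across groups forces the false positive rate to track each group's base rate; a few lines with illustrative numbers (my own, not from the cited studies) make it concrete:

```python
def false_positive_rate(base_rate, tpr, ppv):
    """FPR implied by a fixed true positive rate (tpr) and precision (ppv).
    From PPV = TP / (TP + FP), with TP = base_rate * tpr
    and FP = (1 - base_rate) * fpr, solving for fpr."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

# identical 'accuracy' settings for both groups, different base rates
for group, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    print(group, false_positive_rate(base_rate, tpr=0.7, ppv=0.8))
# 0.075 vs 0.175: the higher-base-rate group suffers more false positives
```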

Machine learning's probabilistic predictions are the result of a mathematical fit
The parameters of which are selected to optimise on specific metrics
There are many mathematical ways to define fairness (perhaps twenty-one of them) 30
And you can't satisfy them all at the same time

Proponents might argue that with machinic reasoning
We should be able to disentangle the reasons for various predictions
So we can make policy choices
About the various trade-offs

But there's a problem with neural networks
Which is that their reasoning is opaque 31
Obscured by the multiplicity of connections across their layers
Where the weightings are derived from massively parallel calculations

If we apply this deep learning to reveal what lies behind voice samples
Taking different tremors as proxies for the contents of consciousness
The algorithm will be tongue-tied
If asked to explain its diagnosis

And we should ask who these methods will be most applied to
Since to apply machinic methods we need data
Data visibility is not evenly distributed across society
Institutions will have much more data about you if you are part of the welfare system
Than if you come from a comfortable middle-class family

What's apparent from the field of child protection
Where algorithms are also seen as promising objectivity and pervasive preemption
Is that the weight of harms from unsubstantiated interventions 32
Will fall disproportionately on the already disadvantaged
With the net effect of 'automating inequality' 33

If only we could rely on institutions to take a restrained and person-centred approach
But certainly, where the potential for financial savings is involved
The history of voice analysis is not promising
Local authorities in the UK were still applying Voice Stress Analysis to detect housing benefit cheats
Years after solid scientific evidence showed that its risk predictions were 'no better than horoscopes' 34

Machine learning is a leap in sophistication from such crude measures
But as we've seen it also brings new complexities
As well as an insatiable dependency on more and more data

Listening Machines

Getting mental health voice analysis off the ground faces the basic challenge of data
Most algorithms only perform well when there's a lot of it to train on
They need voice data labelled as being from people who are unwell and those who are not
So that the algorithm can learn the patterns that distinguish them

The uncanny success of Facebook's facial recognition algorithms
Came from having huge numbers of labelled faces at hand
Faces that we, the users, had kindly labelled for them
As belonging to us, or by tagging our friends
Without realising we were also training a machine
"if the product is free, you are the training data"

One approach to voice analysis is the kind of clever surveillance trick
Used by a paper investigating 'The Language of Social Support in Social Media
and Its Effect on Suicidal Ideation Risk' 35
Where they collected comments from Reddit users in mental health subreddits like
r/depression, r/mentalhealth, r/bipolarreddit, r/ptsd, r/psychoticreddit
And tracked how many could be identified as subsequently posting in
A prominent suicide support community on Reddit called r/SuicideWatch

Whether or not the training demands of voice algorithms
Are solved by the large scale collection of passive data
The strategies of the Silicon Valley startups make it clear
That the application of these apps will have to be pervasive
To fulfil the hopes for scaling and early identification

Society is already struggling to grapple
With the social effects of pervasive facial recognition
Whether mainstreamed in China's system of social credit
Or marketed to US police forces by Amazon 36
Where it has at least led to some resistance from employees themselves 37

The democratic discourse around voice analysis seems relatively hushed
And yet we are increasingly embedded in a listening environment
With Siri and Alexa and Google Assistant and Microsoft's Cortana
And Hello Barbie and My Friend Cayla 38 and our smart car
And apps and games on our smartphones that request microphone access

Where might our voices be analysed for signs of stress or depression
In a way that can be glossed as legitimate under the General Data Protection Regulation
On our work phone? our home assistant? while driving? when calling a helpline?

When will using an app like HealthRhythms, which 'wakes up when an audio stream is detected'
Become compulsory for people receiving any form of psychological care?
Let's not forget that in the UK we already have Community Treatment Orders for mental health 39

Surveillance is the inexorable logic of the data-diagnostic axis
Merging with the beneficent idea of Public Health Surveillance
With its agenda of epidemiology and health & safety
But never quite escaping the long history of sponsorship of speech recognition research
By the Defense Advanced Research Projects Agency (DARPA)

A history that Apple doesn't often speak of
That it acquired Siri from SRI International 40
Who'd developed it through a massive AI contract with DARPA

As the Samaritans example made clear
We should pause before embedding ideas like 'targeting' in social care
Targeting people for preemptive intervention is fraught with challenges
And foregrounds the core questions of consent and 'do no harm'

Before we imagine that "instead of waiting for traditional signs of dementia and getting tested by the doctor
The smart speakers in our homes could be monitoring changes in our speech as we ask for the news, weather and sports scores
And detecting the disease far earlier than is possible today" 41
We need to know how to defend against the creation of a therapeutic Stasi

Epistemic Injustice

It might seem far-fetched to say that snatches of chat with Alexa
Might be considered as significant as a screening interview with a psychiatrist or psychologist
But this is to underestimate the aura of scientific authority
That comes with contemporary machine learning 42

What algorithms offer is not just an outreach into daily life
But the allure of neutrality and objectivity
That by abstracting phenomena into numbers that can be statistically correlated
In ways that enable machines to imitate humans
Quantitative methods can be applied to areas that were previously the purview of human judgement

Big data seems to offer sensitive probes of signals beyond human perception
Of vocal traits that are "not discernable to the human ear" 44
Nor 'degraded' by relying on self-reporting 43
It doesn't seem to matter much that this use of voice
Pushes the possibility of mutual dialogue further away
Turning the patient's opinions into noise rather than signal

Machinic voice analysis of our mental states
Risks becoming an example of epistemic injustice 45
Where an authoritative voice comes to count more than our own
The algorithmic analysis of how someone speaks causing others to "give a deflated level of credibility to a speaker's word"

Of course we could appeal to the sensitivity and restraint of those applying the algorithms
Context is everything when looking at the actual impact of AI
Knowing whether it is being adopted in situations where existing relations of power
Might indicate the possibility of overreach or arbitrary application

The context of mental health certainly suggests caution
Given that the very definition of mental health is historically varying
The asymmetries of power are stark, because treatment can be compulsory and detention is not uncommon
And the life consequences of being in treatment or missing out on treatment can be severe

Mental health problems can be hugely challenging for everyone involved
And in the darkest moments of psychosis or mania
People are probably not going to have much to say about how their care should be organised
But, in between episodes, who is better placed to help shape specific ideas for their care
Than the person who experiences the distress
They have the situated knowledge 46

The danger with all machine learning
Is the introduction of a drone-like distancing from messy subjectivities 47
With the danger that this will increase thoughtlessness
Through the outsourcing of elements of judgement to automated and automatising systems

The voice as analysed by machine learning
Will become a technology of the self in Foucault's terms 48
Producing new subjects of mental health diagnosis and intervention
Whose voice spectrum is definitive but whose words count for little

User Voice

The lack of user voice in mental health services has been a bone of contention since the civil rights struggles of the 1960s 49
With the emergence of user networks that put forward alternative views
Seeking to be heard over the stentorian tones of the psychiatric establishment
Forming alliances with mental health workers and other advocates
Groups like Survivors Speak Out, The Hearing Voices Network, the National Self-Harm Network and Mad Pride 50

Putting forward demands for new programmes and services
Proposing strategies such as 'harm minimisation' and 'coping with voices'
Making the case for consensual, non-medicalised ways to cope with their experiences
And forming collective structures such as Patients' Councils

While these developments have been supported by some professionals
And some user participation has been assimilated as the co-production of services
The validity of user voice, especially the collective voice, is still precarious within the mental health system
And is undermined by coercive legislation and reductionist biomedical models

The introduction of machinic listening
That dissects voices into quantifiable snippets
Will tip the balance of this wider apparatus
Towards further objectification and automaticity
Especially in this era of neoliberal austerity 51

And yet, ironically, it's only the individual and collective voices of users
That can rescue machine learning from talking itself into harmful contradictions
That can limit its hunger for ever more data in pursuit of its targets
And save classifications from overshadowing uniquely significant life experiences

Designing for justice and fairness, not just for optimised classifications
Means discourse and debate have to invade the spaces of data science
Each layer of the neural networks must be balanced by a layer of deliberation
Each datafication by caring human attentiveness

If we want the voices of the users to be heard over the hum of the data centres
They have to be there from the start
Putting the incommensurability of their experiences
Alongside the generalising abstractions of the algorithms

And asking how, if at all
The narrow intelligence of large-scale statistical data-processing machines
Could support more Open Dialogue, where speaking and listening aim for shared understanding 52
More Soteria type houses based on a social model of care 53
The development of progressive user-led community mental health services
And an end to the cuts 54

Computation and Care

As machine learning expands into real world situations
It turns out that interpretability is one of its biggest challenges
Even DARPA, the military funder of so much research in speech recognition and AI
Is panicking that targeting judgements will come without any way to interrogate the reasoning behind them 55

Experiments to figure out how AI image recognition actually works
Probed the contents of intermediary layers in the neural networks
By recursively applying the convolutional filters to their own outputs
Producing the hallucinatory images of 'Inceptionism' 56
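
A minimal sketch of the gradient-ascent idea behind such visualisations, assuming torchvision's pretrained GoogLeNet (the published method adds many refinements, such as starting from real photographs and processing at multiple scales):

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

# capture the activations of one intermediate inception layer
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(layer=out))

# gradient-ascend a random image so the layer responds as strongly as possible
img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    model(img)
    (-activations["layer"].norm()).backward()  # maximise the layer's response
    opt.step()
# 'img' now shows what the layer 'wants to see': proto-Inceptionism
```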

We are developing AI listening machines that can't explain themselves
That hear things of significance in their own layers
Which they can't articulate to the world but that they project outwards as truths
How would these AI systems fare if diagnosed against DSM-5 criteria?

And if objectivity, as some post-Relativity philosophers of science have proposed
Consists of invariance under transformation
What happens if we transform the perspective of our voice analysis
Looking outwards at the system rather than inwards at the person in distress

To ask what our machines might hear in the voices of the psychiatrists who are busy founding startups
Or in the voices of politicians justifying cuts in services because they paid off the banks
Or in the voice of the nurse who tells someone forcibly detained under the Mental Health Act
"This ain't a hotel, love" 57

It's possible that prediction is not a magic bullet for mental health
And can't replace places of care staffed by people with time to listen
In a society where precarity, insecurity and austerity don't fuel generalised distress
Where everyone's voice is not analysed but heard
In a context which is collective and democratic

The dramas of the human mind have not been scientifically explained
And the nature of consciousness still slips the net of neuroscience
Still less should we restructure the production of truths about the most vulnerable
On computational correlations

The real confusion behind the Confusion Matrix
That table of machine learning accuracy that includes percentages of false positives and negatives 58
Is that the impact of AI in society doesn't pivot on the risk of false positives
But on the redrawing of boundaries that we experience as universal fact
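
For reference, the matrix itself is trivial to compute; the politics lie in what its rows and columns are taken to mean. A minimal sketch with invented labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 'actual' diagnoses
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]   # model predictions
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"true pos {tp}  false pos {fp}  false neg {fn}  true neg {tn}")
# the table quantifies errors, but not what a 'positive' comes to mean
```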

The rush towards listening machines tells us a lot about AI
And the risk of believing it can transform intractable problems
By optimising dissonance out of the system

If human subjectivities are intractably co-constructed with the tools of their time 59
We should ask instead how our new forms of calculative cleverness
Can be stitched into an empathic technics
That breaks with machine learning as a mode of targeting
And wreathes computation with ways of caring.


  1. Vergyri, Dimitra, Bruce Knoth, and Mitchell McLaren. 'Speech-Based Assessment of PTSD in a Military Population Using Diverse Feature Classes | SRI International'. Dresden, Germany: N.p., 2015. Web. 13 Aug. 2018. https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes

  2. Robbins, Rebecca. 'The Sound of Your Voice May Diagnose Disease'. Scientific American. N.p., n.d. Web. 21 Aug. 2018. https://www.scientificamerican.com/article/the-sound-of-your-voice-may-diagnose-disease/

  3. Yan, Shi. 'Understanding LSTM and Its Diagrams'. Medium 13 Mar. 2016. Web. 13 Aug. 2018. https://medium.com/mlreview/understanding-lstm-and-its-diagrams-37e2f46f1714

  4. Mullin, Emily. 'Voice Analysis Tech Could Diagnose Disease'. MIT Technology Review. N.p., n.d. Web. 13 Aug. 2018. https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/

  5. Geitgey, Adam. 'Machine Learning Is Fun!' Adam Geitgey 5 May 2014. Web. 20 Sept. 2017. https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471

  6. Vergyri, Dimitra, Bruce Knoth, and Mitchell McLaren. 'Speech-Based Assessment of PTSD in a Military Population Using Diverse Feature Classes | SRI International'. Dresden, Germany: N.p., 2015. Web. 13 Aug. 2018. https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes

  7. 3Blue1Brown. What Is Backpropagation Really Doing? | Deep Learning, Chapter 3. N.p., 2017. Film. https://www.youtube.com/watch?v=Ilg3gGewQ5U 

  8. 'Study Paints Somber Picture of US Mental Health Status and Access to Care'. EurekAlert!. N.p., n.d. Web. 21 Aug. 2018. https://www.eurekalert.org/pub_releases/2017-04/nlmc-sp041117.php

  9. Singer, Natasha. 'How Companies Scour Our Digital Lives for Clues to Our Health'. The New York Times 25 Feb. 2018. NYTimes.com. Web. 21 Aug. 2018. https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html

  10. Regalado, Antonio. 'Q&A with Tom Insel on His Decision to Join Alphabet'. MIT Technology Review. N.p., n.d. Web. 21 Aug. 2018. https://www.technologyreview.com/s/541446/why-americas-top-mental-health-researcher-joined-alphabet/

  11. 'About Mindstrong'. Mindstrong Health. N.p., n.d. Web. 21 Aug. 2018. https://mindstronghealth.com/about/

  12. Biegler, Paul. 'Tech Support: How Our Phones Could Save Our Lives by Detecting Mood Shifts'. The Sydney Morning Herald. N.p., 10 Nov. 2017. Web. 21 Aug. 2018. https://www.smh.com.au/technology/tech-support-how-our-phones-could-save-our-lives-by-detecting-mood-shifts-20171106-gzfrg5.html

  13. 'HealthRhythms'. HealthRhythms. N.p., n.d. Web. 21 Aug. 2018. https://www.healthrhythms.com/ "HealthRhythms is creating the world's largest mental health API... Smartphone sensor data automatically measures patterns of behavior... Dynamic AI tailors measurement to each person" 

  14. Mullin, Emily. 'Voice Analysis Tech Could Diagnose Disease'. MIT Technology Review. N.p., n.d. Web. 13 Aug. 2018. https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/

  15. 'How Does the Sharecare App's Voice Analysis Work? | Stress'. Sharecare. N.p., n.d. Web. 21 July 2018. https://www.sharecare.com/health/stress-reduction/how-sharecare-app-voice-work

  16. Singer, Natasha. 'How Companies Scour Our Digital Lives for Clues to Our Health'. The New York Times 25 Feb. 2018. NYTimes.com. Web. 21 Aug. 2018. https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html

  17. Kushner, David. 'Facebook Philosophy: Move Fast and Break Things'. IEEE Spectrum: Technology, Engineering, and Science News. N.p., 1 June 2011. Web. 21 Aug. 2018. https://spectrum.ieee.org/at-work/innovation/facebook-philosophy-move-fast-and-break-things

  18. Simondon, Gilbert, Cecile Malaspina, and John Rogove. On the Mode of Existence of Technical Objects. Minneapolis, MN: University Of Minnesota Press, 2017. Print. Full pdf available (at the time of writing) from http://www.heavysideindustries.com/wp-content/uploads/2011/01/SimondonGilbert.OnTheModeOfExistence.pdf 

  19. Dredge, Stuart. 'Samaritans Radar Analyses Twitter to Identify Users at Risk for Suicide'. The Guardian 29 Oct. 2014. www.theguardian.com. Web. 21 Aug. 2018. https://www.theguardian.com/technology/2014/oct/29/samaritans-radar-twitter-suicide-risk

  20. Brown, Mark. 'The Uproar Over the New Samaritans Radar Twitter App'. World of Psychology. N.p., 30 Oct. 2014. Web. 21 Aug. 2018. https://psychcentral.com/blog/the-uproar-over-the-new-samaritans-radar-twitter-app/

  21. Short, Adrian. 'Samaritans Radar: Paved with Good Intentions'. N.p., 30 Oct. 2014. Web. 21 Aug. 2018. https://www.adrianshort.org/posts/2014/samaritans-radar/

  22. Mendel, Jon. 'Samaritans Radar: Questions from a Research Ethics Point of View (Post 1 of 2)'. jonmendel 1 Nov. 2014. Web. 21 Aug. 2018. https://jonmendel.wordpress.com/2014/11/01/samaritans-radar-questions-from-a-research-ethics-point-of-view-post-1-of-2/

  23. Bernal, Paul. 'Samaritans Radar: Misunderstanding Privacy and "Publicness"'. Paul Bernal's Blog 1 Nov. 2014. Web. 21 Aug. 2018. https://paulbernal.wordpress.com/2014/11/01/samaritans-radar-misunderstanding-privacy-and-publicness/

  24. Brayne, Sarah. 'Big Data Surveillance: The Case of Policing'. American Sociological Review 82.5 (2017): 977–1008. SAGE Journals. Web. 

  25. Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 'Semantics Derived Automatically from Language Corpora Contain Human-like Biases'. Science 356.6334 (2017): 183–186. science.sciencemag.org. Web. 

  26. O'Hara, Mary. 'Black and Minority Ethnic Mental Health Patients "marginalised" under Coalition'. The Guardian 17 Apr. 2012. www.theguardian.com. Web. 21 Aug. 2018. https://www.theguardian.com/society/2012/apr/17/bme-mental-health-patients-marginalised

  27. Feldman, Michael et al. 'Certifying and Removing Disparate Impact'. arXiv:1412.3756 [cs, stat] (2014): n. pag. arXiv.org. Web. 21 Aug. 2018. http://arxiv.org/abs/1412.3756

  28. Chouldechova, Alexandra. 'Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments'. arXiv:1703.00056 [cs, stat] (2017): n. pag. arXiv.org. Web. 21 Aug. 2018. http://arxiv.org/abs/1703.00056

  29. Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 'Inherent Trade-Offs in the Fair Determination of Risk Scores'. arXiv:1609.05807 [cs, stat] (2016): n. pag. arXiv.org. Web. 21 Aug. 2018. http://arxiv.org/abs/1609.05807

  30. Narayanan, Arvind. Tutorial: 21 Fairness Definitions and Their Politics https://www.youtube.com/watch?v=jIXIuYdnyyk 

  31. Knight, Will. 'There's a Big Problem with AI: Even Its Creators Can't Explain How It Works'. MIT Technology Review. N.p., n.d. Web. 21 Aug. 2018. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

  32. Keddell, Emily. 'Predictive Risk Modelling: On Rights, Data and Politics.' Re-Imagining Social Work in Aotearoa New Zealand 4 June 2015. Web. 21 Aug. 2018. http://www.reimaginingsocialwork.nz/2015/06/predictive-risk-modelling-on-rights-data-and-politics/

  33. Eubanks, Virginia. 'A Child Abuse Prediction Model Fails Poor Families'. Wired 15 Jan. 2018. www.wired.com. Web. 21 Aug. 2018. https://www.wired.com/story/excerpt-from-automating-inequality/

  34. Ramesh, Randeep. 'Councils Using Controversial Lie Detector Tests to Catch Benefit Fraudsters'. The Guardian 10 Mar. 2014. www.theguardian.com. Web. 21 Aug. 2018. https://www.theguardian.com/society/2014/mar/10/councils-use-lie-detector-tests-benefits-fraudsters

  35. Choudhury, Munmun De, and Emre Kiciman. 'The Language of Social Support in Social Media and Its Effect on Suicidal Ideation Risk'. Microsoft Research (2017): n. pag. www.microsoft.com. Web. 21 Aug. 2018. https://www.microsoft.com/en-us/research/publication/language-social-support-social-media-effect-suicidal-ideation-risk/

  36. Cagle, Matt, and Nicole Ozer. 'Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology'. American Civil Liberties Union. N.p., 22 May 2018. Web. 21 Aug. 2018. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazon-teams-government-deploy-dangerous-new

  37. Conger, Kate. 'Amazon Workers Demand Jeff Bezos Cancel Face Recognition Contracts With Law Enforcement'. Gizmodo. N.p., 21 June 2018. Web. 21 Aug. 2018. https://gizmodo.com/amazon-workers-demand-jeff-bezos-cancel-face-recognitio-1827037509

  38. 'German Parents Told to Destroy "spy" Dolls'. BBC News 17 Feb. 2017. www.bbc.co.uk. Web. 21 Aug. 2018. https://www.bbc.com/news/world-europe-39002142

  39. 'Community Treatment Order - Mental Health Law Online'. Mental Health Law Online. N.p., n.d. Web. 21 Aug. 2018. http://www.mentalhealthlaw.co.uk/Community_Treatment_Order

  40. 'Siri - SRI International'. N.p., n.d. Web. 21 Aug. 2018. https://www.sri.com/work/timeline-innovation/timeline.php?timeline=computing-digital&_escaped_fragment_=%26innovation=siri

  41. 'Five Ways AI Will Make Health Transformation Truly Global'. Web. 21 Aug. 2018. https://www.futurehealthindex.com/2018/04/03/five-ways-ai-will-make-health-transformation-truly-global/

  42. McQuillan, Dan. 'Data Science as Machinic Neoplatonism'. Philosophy & Technology (2017): 1–20. link.springer.com. Web. 

  43. Vergyri, Dimitra, Bruce Knoth, and Mitchell McLaren. 'Speech-Based Assessment of PTSD in a Military Population Using Diverse Feature Classes | SRI International'. Dresden, Germany: N.p., 2015. Web. 13 Aug. 2018. https://www.sri.com/work/publications/speech-based-assessment-ptsd-military-population-using-diverse-feature-classes

  44. Mullin, Emily. 'Voice Analysis Tech Could Diagnose Disease'. MIT Technology Review. N.p., n.d. Web. 13 Aug. 2018. https://www.technologyreview.com/s/603200/voice-analysis-tech-could-diagnose-disease/

  45. Fricker, Miranda. Epistemic Justice and the Medical Expert - Professor Miranda Fricker - Medical Humanities Sheffield. N.p. Film. 

  46. Haraway, Donna. 'Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective'. Feminist Studies 14.3 (1988): 575–599. JSTOR. Web. 

  47. McQuillan, Dan. 'People's Councils for Ethical Machine Learning'. Social Media + Society 4.2 (2018): 2056305118768303. SAGE Journals. Web. 

  48. Foucault, Michel. 'Technologies of the Self - Lectures at University of Vermont in October 1982'. Michel Foucault, Info. N.p., 1982. Web. 21 Aug. 2018. https://foucault.info/

  49. Rogers, Anne, and David Pilgrim. '"Pulling down Churches": Accounting for the British Mental Health Users' Movement'. Sociology of Health & Illness 13.2 (1991): 129–148. Wiley Online Library. Web. 

  50. 'Psychiatric Survivors Movement'. Wikipedia 8 June 2018. Wikipedia. Web. 21 Aug. 2018. https://en.wikipedia.org/w/index.php?title=Psychiatric_survivors_movement&oldid=844920335

  51. Croft, Hazel. 'The Politics of Mental Health'. rs21 29 July 2017. Web. 21 Aug. 2018. https://www.rs21.org.uk/2017/07/29/the-politics-of-mental-health/

  52. 'Open Dialogue National Conference - an Interview with Dr Russell Razzaque'. NELFT NHS Foundation Trust. N.p., 15 Feb. 2018. Web. 21 Aug. 2018. https://www.nelft.nhs.uk/news-events/open-dialogue-national-conference-an-interview-with-dr-russell-razzaque-2818/

  53. 'Soteria (Psychiatric Treatment)'. Wikipedia 18 June 2018. Wikipedia. Web. 21 Aug. 2018. https://en.wikipedia.org/w/index.php?title=Soteria_(psychiatric_treatment)&oldid=846452259

  54. Mills, China. 'Suicides Linked to Austerity: From a Psychocentric to a Psychopolitical Autopsy'. Discover Society. N.p., 1 Feb. 2017. Web. 21 Aug. 2018. https://discoversociety.org/2017/02/01/suicides-linked-to-austerity-from-a-psychocentric-to-a-psychopolitical-autopsy/

  55. 'Explainable Artificial Intelligence (XAI) - DARPA Information Innovation Office'. Federal Grants. N.p., 10 Aug. 2016. Web. 21 Aug. 2018. http://www.federalgrants.com/Explainable-Artificial-Intelligence-XAI-61138.html

  56. Mordvintsev, Alexander, Christopher Olah, and Mike Tyka. 'Inceptionism: Going Deeper into Neural Networks'. Google AI Blog 17 June 2015. Web. 21 Aug. 2018. http://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

  57. Brown, Mark. 'What Is It like to Be Sectioned?' ShortList. N.p., 30 July 2018. Web. 21 Aug. 2018. https://www.shortlist.com/news/what-is-it-like-to-be-sectioned/367733

  58. Markham, Kevin. 'Simple Guide to Confusion Matrix Terminology'. Data School. N.p., 26 Mar. 2014. Web. 22 Aug. 2018. https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/

  59. Haraway, Donna J. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. First Edition. Routledge, 1997. Print.