Artificial Intelligence In Mental Healthcare

Artificial intelligence (AI) is the scientific study and engineering of intelligent machines.

(graphic illustration provided by T2 Telehealth and Technology)


AI technology can be designed to accomplish specialized intelligent tasks, such as speech or facial recognition, or to emulate complex human-like intelligent behavior such as reasoning and language processing. AI systems that are capable of interacting with their environment and taking autonomous actions within it are called artificially intelligent agents.
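For readers curious what "interacting with and taking autonomous actions within an environment" looks like in practice, here is a minimal sketch of the classic perceive-decide-act agent loop. Every class and method name below (Agent, perceive, decide, environment.observe, environment.apply) is invented for illustration and does not refer to any real system.

```python
# A minimal, illustrative perceive-decide-act loop for an intelligent agent.
from abc import ABC, abstractmethod


class Agent(ABC):
    """An agent observes its environment, updates its state, and chooses actions."""

    @abstractmethod
    def perceive(self, observation):
        """Update internal state from a new observation."""

    @abstractmethod
    def decide(self):
        """Select the next action based on internal state."""


def run(agent: Agent, environment, steps: int = 10):
    """Drive the agent-environment interaction loop for a fixed number of steps."""
    for _ in range(steps):
        observation = environment.observe()   # environment -> agent
        agent.perceive(observation)           # agent updates its beliefs
        action = agent.decide()               # agent chooses an action
        environment.apply(action)             # agent -> environment
```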

An emerging application of AI technology in the mental healthcare field is the use of artificially intelligent agents to provide training, consultation, and treatment services. Researchers at USC’s Institute for Creative Technologies, for example, are currently developing virtual mental health patients that converse with human trainees.

One application is the design of “virtual veterans” with depression and suicidal thoughts, which can be used to help train military clinicians and other personnel to detect suicide risk.

The continual advance of AI technologies and their application in mental healthcare leads to a concept that I call the “Super Clinician.” The “Super Clinician” is an artificially intelligent agent system that could take the form of either a virtual reality simulation or a humanoid robot.

The system design entails the integration of several advanced technologies and capabilities, including natural language processing, computer vision, facial recognition, olfactory sensors, and even thermal imaging to detect temperature changes in patients that signal shifts in arousal states such as anger or anxiety.

The system would also have access to patient medical records and all available digitized medical knowledge. The “Super Clinician” concept is not outside the realm of possibility: the Defense Advanced Research Projects Agency (DARPA), for example, is developing a system that can detect psychological states, and IBM is developing a version of “Watson” (the AI system that won on Jeopardy! in 2011) that has learned the medical literature.
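To make the integration idea concrete, here is a hypothetical sketch of how such a system might fuse several of the modalities described above into a single assessment. Every class, field, threshold, and function name below is invented for illustration; none refers to a real clinical product, sensor API, or validated decision rule.

```python
# Hypothetical fusion of multimodal session signals for a "Super Clinician" agent.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    transcript: str            # output of a speech-recognition component
    facial_affect: str         # label from facial-expression analysis, e.g. "anxious"
    skin_temp_delta_c: float   # temperature change detected by thermal imaging
    record_summary: str        # summary pulled from the patient's medical record


def assess_arousal(signals: SessionSignals) -> str:
    """Toy fusion rule: flag elevated arousal only when thermal and facial cues agree."""
    if signals.skin_temp_delta_c > 0.5 and signals.facial_affect == "anxious":
        return "elevated arousal: consider probing for anxiety or agitation"
    return "no elevated arousal detected from available signals"


# Example usage with made-up values:
signals = SessionSignals(
    transcript="I haven't been sleeping much lately.",
    facial_affect="anxious",
    skin_temp_delta_c=0.8,
    record_summary="History of depressive episodes.",
)
print(assess_arousal(signals))
```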

As both a clinical psychologist and technologist, I am particularly interested in the interaction between humans and artificially intelligent beings.

In the context of mental health care, one question that comes to mind is whether caring and empathetic connections between humans and artificially intelligent care providers are possible.

I’m also interested in whether there may be issues with trust or suspicion regarding the motives of advanced AI systems, such as in the case of a “Super Clinician.” What do you think are the important issues regarding artificially intelligent agents that provide psychological, counseling, or other medical care services?

Guest blog written by David D. Luxton, PhD.
Dr. Luxton is a Research Psychologist and Program Manager at the DoD’s National Center for Telehealth & Technology (T2).

———-

Disclaimer: The appearance of hyperlinks does not constitute endorsement by the Department of Defense of this website or the information, products or services contained therein. For other than authorized activities such as military exchanges and Morale, Welfare and Recreation sites, the Department of Defense does not exercise any editorial control over the information you may find at these locations. Such links are provided consistent with the stated purpose of this DoD website.
