Depression-Detection-Through-Multi-Modal-Data
Conventionally, depression detection has been done through extensive clinical interviews, in which a psychologist studies the subject's responses to determine his or her mental state. Our model emulates this approach by fusing three modalities — word context, audio, and video — to predict the mental health of the patient. The output is a binary yes/no label denoting whether the patient shows symptoms of depression. We've built a deep learning model that fuses these three modalities, assigns each an appropriate weight, and produces this prediction.
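The weighted fusion described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the feature vectors, fusion weights, and classifier parameters below are all hypothetical placeholders (in the real model they would be learned end to end), and the logistic head stands in for whatever classification layer the network uses.

```python
import numpy as np

# Hypothetical per-modality feature vectors (dimensions and values are
# illustrative; the actual model learns these embeddings from data).
text_feat = np.array([0.2, 0.7, 0.1])
audio_feat = np.array([0.5, 0.1, 0.4])
video_feat = np.array([0.3, 0.3, 0.3])

def fuse(feats, weights):
    """Weighted sum of modality features; weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feats))

def predict(fused, w_clf, b):
    """Logistic head mapping the fused vector to a binary yes/no label."""
    score = 1.0 / (1.0 + np.exp(-(fused @ w_clf + b)))
    return int(score >= 0.5), score

# Illustrative fusion weights for (text, audio, video); in the real model
# these would be learned parameters.
fused = fuse([text_feat, audio_feat, video_feat], weights=[0.5, 0.3, 0.2])
label, prob = predict(fused, w_clf=np.array([1.0, 1.0, 1.0]), b=-1.0)
print(label, round(prob, 3))
```

Normalizing the fusion weights keeps the fused representation on the same scale as the individual modality features, regardless of how the weights are initialized.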