Development and Implementation of a Functional Module for the Classification of Facial Expressions of Pain Using Supervised Artificial Intelligence Models
Keywords:
Facial expressions, pain, artificial intelligence, convolutional neural networks, automatic classification

Abstract
Introduction:
The objective evaluation of pain from facial expressions remains a significant challenge at the intersection of health care and technology, as pain perception is subjective and its clinical quantification is complex. Facial expressions carry key information about the pain experience, and their automatic analysis can support decision-making in clinical monitoring, rehabilitation, and human-machine interfaces. Convolutional neural networks (CNNs) have shown strong performance in image classification, including the recognition of facial patterns associated with emotions and pain.
Objective:
To develop and implement a functional module based on CNNs for the automatic classification of facial expressions of pain, integrating normalization, data augmentation, and advanced performance evaluation.
Methodology:
The OSF Facial Expression of Pain dataset was used, consisting of 1,200 images from 60 participants labeled as pain or no pain. Images were loaded in MATLAB, resized to 224 × 224 × 3 (RGB), and split per label into a training set (70%) and a validation set (the remaining 30%). Data augmentation included random rotation (−10° to 10°), X and Y translation (−5 to 5 px), and horizontal reflection to improve model robustness. The architecture was a CNN with three convolutional blocks of 8, 16, and 32 filters, respectively, each followed by batch normalization and ReLU activation, with max pooling after the first two blocks, and a fully connected layer with two neurons for binary classification, followed by softmax and classification layers. Training used the stochastic gradient descent with momentum (SGDM) optimizer with an initial learning rate of 0.01, for 18 epochs with validation every 30 iterations. Accuracy, recall, and F1-score were calculated for each class. The entire process ran on a single CPU (AMD Ryzen™ Z1 Extreme) in approximately 2 minutes and 29 seconds.
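The following MATLAB sketch (Deep Learning Toolbox) illustrates a pipeline consistent with this description. The dataset path, the folder-per-label layout, the 3 × 3 convolution kernel size, and the 2 × 2 pooling are assumptions not stated in the abstract; treat this as a minimal sketch under those assumptions, not the authors' exact implementation.

% Data: one subfolder per label (e.g., "pain", "no_pain"); path is hypothetical.
imds = imageDatastore('OSF_pain_dataset', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.70, 'randomized');  % 70/30 split per label

inputSize = [224 224 3];
augmenter = imageDataAugmenter( ...
    'RandRotation',     [-10 10], ...   % degrees
    'RandXTranslation', [-5 5], ...     % pixels
    'RandYTranslation', [-5 5], ...
    'RandXReflection',  true);          % horizontal flips
augTrain = augmentedImageDatastore(inputSize, imdsTrain, 'DataAugmentation', augmenter);
augVal   = augmentedImageDatastore(inputSize, imdsVal);  % resize only, no augmentation

layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(3, 8, 'Padding', 'same')    % block 1: 8 filters (3x3 assumed)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 16, 'Padding', 'same')   % block 2: 16 filters
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')   % block 3: 32 filters, no pooling
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(2)                         % two classes: pain / no pain
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', ...
    'InitialLearnRate',    0.01, ...
    'MaxEpochs',           18, ...
    'ValidationData',      augVal, ...
    'ValidationFrequency', 30, ...
    'ExecutionEnvironment','cpu', ...
    'Plots',               'training-progress');

net = trainNetwork(augTrain, layers, options);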
Results:
The model achieved a global validation accuracy of 97.81%, with a stable upward accuracy trend during training and convergence of the loss toward zero. Average processing time was approximately 0.115 seconds per image, supporting feasibility for real-time applications. For the Pain class, recall was 99.10% and F1-score 96.49%; for the No Pain class, recall was 96.58% and F1-score 99.12%.
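As a sketch of how the per-class metrics and per-image latency can be derived from the trained network (variable names continue the hypothetical pipeline above; none of this code appears in the original):

trueLabels = imdsVal.Labels;
tic;
predLabels = classify(net, augVal);          % batch inference on the validation set
timePerImage = toc / numel(trueLabels);      % approximate per-image latency

accuracy = mean(predLabels == trueLabels);   % global validation accuracy

classNames = categories(trueLabels);         % class order used by confusionmat
C = confusionmat(trueLabels, predLabels);    % rows: true class, columns: predicted
for k = 1:numel(classNames)
    tp        = C(k, k);
    precision = tp / sum(C(:, k));           % of those predicted as class k
    recall    = tp / sum(C(k, :));           % of those actually class k
    f1        = 2 * precision * recall / (precision + recall);
    fprintf('%s: recall %.2f%%, F1 %.2f%%\n', classNames{k}, 100*recall, 100*f1);
end

Note that timing measured this way includes datastore I/O; a tighter per-image figure would time classify on a single preloaded image.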
Conclusions:
The developed functional module demonstrates adequate performance in the automatic classification of facial expressions of pain using supervised AI, integrating data augmentation and advanced evaluation metrics. This development shows potential for integration into clinical monitoring systems, rehabilitation, and assistive devices, contributing to technological innovation in digital health for objective pain assessment.
License
Copyright (c) 2025 Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra

This work is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits reproduction and modification of the content provided appropriate credit is given to the original source.

