Research

Making AI Transparent & Trustworthy

PhD work in explainable AI, bridging machine learning accuracy with real-world interpretability in sports and health.

My ongoing PhD research bridges two critical gaps: between machine learning accuracy and real-world interpretability, and between technical innovation and human responsibility. Through doctoral research at Universidad San Jorge, I'm developing methodologies for making AI transparent and trustworthy — applied in both healthcare and sports science.

PhD Research

Explainable AI for
Biomechanical Analysis

In Progress · Health Sciences — Explainable AI
UNLOC Research Group · Universidad San Jorge, Zaragoza

XAI for Clinical Pre-diagnosis

Gait analysis + interpretable ML for biomechanical assessment

Focus Areas

Research Domains

01

Explainable AI

Making ML models transparent for clinical professionals. SHAP for quantifying feature contributions, LIME for local interpretability, Grad-CAM for visualizing important regions in medical images.
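To make the attribution idea concrete, here is a minimal sketch of exact Shapley values (the principle underlying SHAP), computed by enumerating all feature coalitions. The gait features, weights, and baseline are illustrative assumptions, not real clinical values, and the exhaustive enumeration is only viable for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a model f, enumerating all feature
    coalitions; features outside a coalition take their baseline value.
    Exponential in the number of features, so only viable for small n."""
    n = len(x)

    def eval_with(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (eval_with(set(S) | {i}) - eval_with(set(S)))
        phi.append(total)
    return phi

# Hypothetical linear gait-risk model over [cadence, stride length, asymmetry]
weights = [0.4, 0.3, -0.8]                 # made-up coefficients
model = lambda z: sum(w * v for w, v in zip(weights, z))
x = [1.2, 0.9, 0.5]                        # one patient, standardized features
base = [0.0, 0.0, 0.0]                     # population baseline
phi = shapley_values(model, x, base)       # ≈ [0.48, 0.27, -0.40]
```

A useful sanity property: the attributions sum to the difference between the patient's prediction and the baseline prediction, which is what lets a clinician read them as "how each feature pushed this score".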

02

Biomechanical Analysis

Analyzing human movement patterns to detect pathology and assess clinical risk. Gait analysis, kinematic pattern recognition, feature extraction from motion capture data, anomaly detection.
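As a toy illustration of the anomaly-detection step, a z-score filter can flag gait cycles whose stride time deviates strongly from the rest of a recording. The stride times below are made up for the example.

```python
from statistics import mean, stdev

def flag_anomalous_cycles(stride_times, z_threshold=2.0):
    """Flag gait cycles whose stride time deviates from the mean by
    more than z_threshold sample standard deviations."""
    mu, sigma = mean(stride_times), stdev(stride_times)
    return [i for i, t in enumerate(stride_times)
            if abs(t - mu) / sigma > z_threshold]

# Made-up stride times in seconds; cycle 4 stands out
strides = [1.02, 0.98, 1.01, 1.00, 1.45, 0.99, 1.03, 1.01]
print(flag_anomalous_cycles(strides))  # → [4]
```

In practice the same pattern applies per feature (stride length, cadence, joint angles) and per cohort, with thresholds chosen against clinical reference data rather than a fixed z-score.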

03

Clinical Decision Support

Developing AI that enhances rather than replaces clinical judgment. Understanding clinician workflows, designing interfaces that support clinical expertise, validating against clinical standards.

04

Healthcare AI Ethics

Responsible development in sensitive contexts. Fairness and bias detection, regulatory compliance, patient privacy and data governance, accountability frameworks.

Process

Research Methodology

01

Problem Definition

Identify clinical challenges where explainability is critical. Partner with healthcare professionals to understand decision-making processes.

02

Data Acquisition

Collect biomechanical data (gait analysis, motion capture, medical imaging) from diverse patient cohorts. Run exploratory analysis to identify relevant features.

03

Model Development

Build ML models that favor interpretability-friendly architectures, complemented by post-hoc explanation techniques.
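One family of interpretability-friendly models is the points-based clinical score, where the tally itself is the explanation. The rules and thresholds below are illustrative assumptions, not clinically validated values.

```python
def risk_score(features, rules):
    """Transparent points-based score: each rule is (name, predicate, points).
    Returns the total and the list of rules that fired, so a clinician can
    audit exactly why a patient received a given score."""
    fired = [(name, pts) for name, pred, pts in rules if pred(features)]
    return sum(p for _, p in fired), fired

rules = [  # illustrative thresholds, not clinically validated
    ("low cadence",    lambda f: f["cadence"] < 100,  2),
    ("short stride",   lambda f: f["stride_m"] < 1.0, 1),
    ("high asymmetry", lambda f: f["asym_pct"] > 5.0, 3),
]
patient = {"cadence": 92, "stride_m": 1.1, "asym_pct": 7.2}
print(risk_score(patient, rules))  # → (5, [('low cadence', 2), ('high asymmetry', 3)])
```

The design trade-off is the usual one: such models sacrifice some raw accuracy for a decision process clinicians can inspect line by line, which is why the methodology pairs them with post-hoc explanations for the less transparent components.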

04

XAI Implementation

Apply SHAP, LIME, and Grad-CAM to make model decisions transparent. Validate that explanations are clinically meaningful and actionable.
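One simple way to sanity-check an explanation's faithfulness is a deletion test: remove features in the order the explanation ranks them and watch how the prediction drops. The linear gait model and weights here are hypothetical, chosen so the expected drops are easy to verify by hand.

```python
def deletion_check(f, x, baseline, ranked):
    """Replace features with their baseline value in the order of claimed
    importance; a faithful ranking should move the prediction the most
    for the features it ranks highest."""
    z = list(x)
    start = f(z)
    drops = []
    for i in ranked:
        z[i] = baseline[i]
        drops.append(start - f(z))
    return drops

# Hypothetical linear model; features ranked by |weight|: 0.5, -0.2, 0.1
weights = [0.5, -0.2, 0.1]
f = lambda z: sum(w * v for w, v in zip(weights, z))
drops = deletion_check(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0], [0, 1, 2])
# drops ≈ [0.5, 0.3, 0.4]: removing the negative-weight feature
# raises the prediction again, which the check makes visible
```

This kind of automated check complements, rather than replaces, the clinical review: a numerically faithful explanation can still be clinically meaningless.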

05

Clinical Validation

Assess performance against clinical standards. Run user studies with clinicians to evaluate whether explanations improve their confidence.

06

Dissemination

Share findings with research community and practitioners. Contribute to standards and best practices for healthcare AI.

Insights

Key Findings

01

Explainability Increases Adoption

Healthcare professionals are significantly more likely to act on AI recommendations when they understand the reasoning behind them.

02

One Technique Isn't Enough

Different stakeholders benefit from different explanation types. Clinicians, patients, and regulators each need tailored approaches.

03

Context Matters Enormously

The same model can be trustworthy in one clinical context and problematic in another. Explanations must fit the workflow they support.

04

XAI Reveals Hidden Biases

Explainability techniques help identify when models rely on clinically irrelevant features or show systematic demographic biases.

Current Focus

AI in Sports
& Performance

Building on my research in explainability and healthcare AI, my current focus applies these principles to sports performance and athlete development. This involves biomechanical analysis for injury prevention, data-driven training optimization, multidisciplinary athlete monitoring systems, and interpretable AI that coaches and medical staff actually trust.

The goal is technology that empowers the people behind high performance — coaches, sports scientists, physiotherapists, and the athletes themselves — with AI they can understand and act on.

Collaboration

Research
partnerships?

Open to collaborating on AI, explainability, sports technology, and applied research.

Email Me