Operationalizing Responsible AI: From Principles to Practice

Mission
This research explores how ethical principles can be effectively integrated into real AI development environments. While Responsible AI is widely discussed through policies, frameworks, and guidelines, there is limited empirical evidence showing how these principles influence the way AI systems are actually designed, built, and deployed. The mission of this research is to move Responsible AI from theory to practice by generating measurable evidence on how operationalizing ethics affects AI system development and performance.
Outcomes
The research aims to produce empirical insights into how integrating Responsible AI practices affects development speed, system quality, and engineering decision-making. By evaluating realistic development scenarios, the study will provide organizations with practical guidance on implementing ethical AI without slowing innovation. The findings will support stronger governance frameworks, improved engineering practices, and more evidence-based approaches to Responsible AI deployment.
Research Overview
This doctoral research, conducted through Golden Gate University with institutional support from Aiforgood Asia, investigates how ethical considerations influence the design, development, and deployment of AI systems within realistic organizational environments. As artificial intelligence becomes embedded in digital services, decision systems, and automated workflows, organizations face growing pressure to ensure that these technologies are deployed responsibly while maintaining performance.

RESEARCH COLLABORATORS

Golden Gate University
