ORBii.Academy
Module M6 · AI Ethics & Augmented Decision-Making · 2026
Module 6 · ½ day
AI Ethics &
Augmented Decision-Making
Bias · Right to explanation · Human responsibility · Concrete dilemmas
AI makes decisions that affect real people — access to credit, risk assessment, candidate selection. These decisions can be biased, opaque, and non-contestable. This module trains banking professionals to exercise their ethical responsibility in the face of AI: recognizing biases, demanding explanations, and maintaining human judgment where it is irreplaceable.
Learning Objectives
01 · Understand how algorithmic biases form and how to detect them
02 · Know the right to explanation (EU AI Act Art. 13-14) and its operational implications
03 · Distinguish decisions where AI can assist from those requiring pure human judgment
04 · Apply an ethical decision framework to concrete banking dilemmas
C-Suite
Compliance & Risk
HR Department
Data Owners
Prerequisites: M3 + M4
Pejman Gohari · CDO · Chief AI Officer · ORBii
DataLab SG (credit scoring, churn, fraud) · Bpifrance 15+ ML products · Agentic AI governance BPCE SI · Author DUNOD · Professor IESEG
academy.orbii.tech
ORBii.Academy · M6 · AI Ethics & Augmented Decision-Making · Confidential · 2026 · 01
Section 1
AI ethics is not philosophical — it is operational
"When I deployed the first credit scoring model at the DataLab of Societe Generale, we discovered after the fact that the model was overweighting postal codes correlated with the social origin of clients. There was no discriminatory intent. But the effect was real and measurable. That is what an algorithmic bias is."
— Pejman Gohari · DataLab SG 2012-2018 · CDO · Chief AI Officer · ORBii
Why banking is on the front line
Banking institutions use AI to make or prepare decisions that have direct consequences on individuals: access to credit, insurance risk assessment, recruitment, transaction monitoring. These decisions are subject to specific legal obligations that the EU AI Act reinforces.
4
categories of AI systems
classified "high risk" in banking by the EU AI Act Annex III
15 M€
or 3% of global revenue
maximum EU AI Act fine for high-risk system non-compliance (Art. 99)
100%
of individual decisions
must be explainable to the affected person (Art. 86)
August 2026
EU AI Act fully applicable
for high-risk systems in production
⚖️
The question is not "Is AI ethical?" but "How do I ensure that its use within my scope respects the rights of individuals and the obligations of the organization?" This is a matter of concrete professional responsibility.
What AI ethics means in practice — 4 dimensions
1 · Fairness — Do not discriminate
An AI system must not produce systematically unfavorable results for a group of people based on protected characteristics (origin, gender, age, disability). This excludes both direct discrimination and indirect discrimination through proxy variables, such as a postal code correlated with social origin.
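One common screening test for indirect discrimination is the "four-fifths rule": if the favorable-outcome rate for one group falls below 80% of the rate for the best-treated group, the system warrants investigation. The sketch below is a minimal, hypothetical illustration; the function name, data, and threshold are not part of any specific regulation's text.

```python
# Minimal sketch of an indirect-discrimination check (four-fifths rule).
# All names and data below are hypothetical, for illustration only.

def disparate_impact_ratio(outcomes, groups, favored="approved"):
    """Ratio of favorable-outcome rates: worst-treated group / best-treated group.

    outcomes: list of decisions ("approved" / "rejected")
    groups:   list of group labels, parallel to outcomes
    """
    rates = {}
    for g in sorted(set(groups)):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(o == favored for o in decisions) / len(decisions)
    best = max(rates.values())
    worst = min(rates.values())
    return worst / best

# Hypothetical scoring outcomes, split by a proxy-derived group label
outcomes = ["approved", "rejected", "approved", "approved",
            "rejected", "rejected", "approved", "rejected"]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # a value below 0.80 flags potential bias
```

Here group A is approved 75% of the time and group B only 25%, giving a ratio of 0.33, well under the 0.80 alert threshold.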
2 · Transparency — Be able to explain
Any person affected by an AI decision has the right to understand on what basis the decision was made, which variables were decisive, and how to challenge it if they consider the decision unfair.
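For a simple linear scoring model, "which variables were decisive" can be answered directly: each variable's contribution is its weight times its value, and ranking contributions by absolute size surfaces the decisive ones. The sketch below assumes hypothetical feature names, weights, and threshold; real models require dedicated explainability tooling.

```python
# Minimal sketch of a per-decision explanation for a linear scoring model.
# Feature names, weights, and the approval threshold are hypothetical.

WEIGHTS = {"income_ratio": 2.0, "years_at_job": 0.5, "late_payments": -1.5}
THRESHOLD = 3.0  # score needed for approval

def explain_decision(applicant):
    """Return the decision, the score, and each variable's signed contribution."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Sort by absolute impact so the decisive variables come first
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, score, ranked

decision, score, ranked = explain_decision(
    {"income_ratio": 1.2, "years_at_job": 4, "late_payments": 2})
print(decision, round(score, 2))        # rejected 1.4
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

This is the level of detail an affected person is entitled to: not the model internals, but which factors drove the outcome and in which direction, so the decision can be contested on its actual basis.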
3 · Accountability — Maintain the human
AI does not make decisions; it proposes them. Responsibility for the final decision always remains with an authorized human. This protects both the organization and the client.
4 · Proportionality — Adapt the level of oversight
The higher the potential impact of a decision, the stronger the level of human oversight and audit must be. A high-volume credit scoring model requires continuous monitoring; an internal translation tool can operate with minimal oversight.
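The proportionality rule above can be sketched as a simple decision table mapping two coarse impact criteria to an oversight tier. The criteria, tier names, and examples are illustrative assumptions, not a prescribed classification.

```python
# Hypothetical sketch of a proportionality rule: oversight scales with the
# potential impact of the AI-assisted decision on individuals.

OVERSIGHT = {  # impact level -> required oversight (illustrative tiers)
    "high":   "human approval of each decision + continuous monitoring + audit trail",
    "medium": "human review by sampling + periodic monitoring",
    "low":    "minimal oversight + incident reporting",
}

def required_oversight(affects_individuals: bool, legal_or_financial_effect: bool) -> str:
    """Map two coarse impact criteria to an oversight tier."""
    if affects_individuals and legal_or_financial_effect:
        level = "high"       # e.g. high-volume credit scoring
    elif affects_individuals:
        level = "medium"     # e.g. client segmentation
    else:
        level = "low"        # e.g. internal translation tool
    return OVERSIGHT[level]

print(required_oversight(affects_individuals=True, legal_or_financial_effect=True))
```

The point of such a table is not the specific tiers but the discipline: oversight requirements are decided from the decision's impact, not from the sophistication of the model.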