Module D3 · Augmented Decision-Making & AI Ethics · Executive Leaders Track · Half-day · 2026

Augmented Decision-Making & AI Ethics

Where to keep control — and how to avoid exposure

AI does not decide — it recommends. But when an executive signs a document based on an AI recommendation without questioning it, they have decided. This module draws the precise boundary between what can be delegated to AI and what the executive must retain, and provides the ethical framework to stay beyond reproach.

Executive objectives
01 · Distinguish augmented decision-making from delegated decision-making to AI
02 · Master the 8-question ethical checklist before any sensitive deployment
03 · Identify decisions that executives can never delegate to AI
04 · Navigate ethical dilemmas with no obvious answer
Executive Committee · All C-Suite executives · EU AI Act Art. 13-14
Pejman Gohari · CDO · Chief AI Officer · ORBii
25 years in the field · DataLab SG · Data Factory Bpifrance · BPCE SI · DUNOD Author · IESEG Professor
academy.orbii.tech
ORBii.Academy · D3 · Augmented Decision-Making & AI Ethics · Executive Leaders Track · Confidential · 2026
Section 1

Decision matrix — Human vs AI by type of stakes

The question is not "can AI decide?" but "in this context, is an AI decision acceptable from ethical, regulatory, and strategic standpoints?" This matrix guides the arbitration.

| Decision type | AI can recommend | AI can decide alone | Human required | Regulatory reference |
|---|---|---|---|---|
| Standard credit scoring | ✅ Yes | ⚠️ Partially | When denial or high amount | EU AI Act Art. 13 · GDPR Art. 22 |
| Fraud detection | ✅ Yes | ✅ Preventive blocking | Validation before final action | EU AI Act · DORA |
| Recruitment · CV screening | ✅ Initial filtering | ❌ No | All final decisions | EU AI Act high risk · GDPR |
| Dismissal or HR sanctions | ⚠️ Factual elements only | ❌ Never | Always · criminal liability | Labor law · EU AI Act |
| Crisis communications | ⚠️ First draft only | ❌ Never | CEO approval and signature | Executive liability |
| Personalized pricing | ✅ Yes | ✅ Within defined limits | When potential discrimination | GDPR · Right to explanation |
| Strategic M&A arbitration | ✅ Analysis, scenarios | ❌ Never | Always · commits the company | Executive liability · Board |
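The matrix above can be sketched as a simple policy lookup. A minimal, illustrative Python encoding (the `DecisionPolicy` structure and decision-type keys are hypothetical names, not part of any regulation; the conservative default is that unknown decision types always escalate to a human):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionPolicy:
    ai_may_recommend: bool
    ai_may_decide_alone: bool
    human_required_when: str  # condition that triggers mandatory human review

# Illustrative encoding of a few matrix rows (not exhaustive, not legal advice)
POLICIES = {
    "credit_scoring":  DecisionPolicy(True,  True,  "denial or high amount"),
    "fraud_detection": DecisionPolicy(True,  True,  "before any final action"),
    "cv_screening":    DecisionPolicy(True,  False, "all final decisions"),
    "hr_sanction":     DecisionPolicy(False, False, "always"),
    "crisis_comms":    DecisionPolicy(False, False, "always"),
}

def requires_human(decision_type: str) -> bool:
    """Conservative default: unknown decision types escalate to a human."""
    policy = POLICIES.get(decision_type)
    return policy is None or not policy.ai_may_decide_alone
```

The booleans deliberately flatten the "⚠️ Partially" nuance; in practice the `human_required_when` condition would be evaluated case by case.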
"Augmented decision-making is not delegated decision-making. It is a better-informed human decision. An executive who signs an action based on an AI recommendation without questioning it has decided — and bears full responsibility."
— Pejman Gohari · CDO · Chief AI Officer · ORBii
Diagram · The augmented decision flow — 3 zones
  • Data & context: structured inputs · business rules · client history analysis
  • AI engine: analysis and modeling · scoring and prediction · recommendation generation → recommends
  • Human oversight: reviews the recommendation · challenges the variables · decides and records → decides
  • Final decision: validated and recorded · named accountability · EU AI Act Art. 14 compliant
  • Escalation path: escalate if doubt, incomplete data, or high-risk case

What "human oversight" concretely means (EU AI Act Art.14)

  • An identified human can understand and challenge the AI recommendation
  • They have access to input variables and the decision explanation
  • They can disable or override the recommendation without technical constraints
  • Their final decision is recorded and attributed to them by name
  • They have received training on the limitations of the AI system involved
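In system terms, these five requirements imply that every AI-assisted decision leaves an attributable, overridable audit record. A minimal sketch of what such a record could look like (all names are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One audit-trail entry covering the oversight requirements above."""
    recommendation: str      # what the AI recommended
    explanation: str         # input variables and reasoning shown to the reviewer
    reviewer: str            # identified, trained human (named accountability)
    final_decision: str = "" # may differ from the recommendation (override)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def decide(self, decision: str) -> "DecisionRecord":
        """Record the human's final decision; overriding the AI is always allowed."""
        self.final_decision = decision
        return self

# Usage: the reviewer overrides the AI, and the override is attributed by name
record = DecisionRecord(
    recommendation="deny credit",
    explanation="score 412 below threshold 500; variables: income, history",
    reviewer="jane.doe@example.com",
).decide("approve credit")
```

The key design point is that the override path carries no technical friction: the final decision field is independent of the recommendation, and both are kept side by side for audit.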

The 4 situations where executives must always retain control

  • Irreversible individual impact · dismissal, credit denial, access to care
  • Potential criminal liability · any act committing the legal entity
  • Crisis or emergency situation · AI fluency can mask serious errors
  • Unprecedented decision · AI cannot reason beyond its training data