ORBii.Academy
Module M5 · Responsible GenAI · Prompting, Risks & Best Practices · 2026
Module 5 · 1 day
Using GenAI
Responsibly
Structured prompting · Real risks · Shadow AI · Banking AI policy
Generative AI is already in your organizations — often without anyone knowing. This module provides concrete tools to use it effectively and securely: how to build a prompt that works, how to avoid regulatory pitfalls, and how to contribute to a responsible AI policy rather than circumventing it.
Learning Objectives
01 · Master the CARE structured prompting method for reliable outputs
02 · Identify and neutralize the 5 critical GenAI risks in a banking context
03 · Understand Shadow AI — why it develops and how to prevent it
04 · Know the 6 components of a responsible AI policy in a financial institution
Business Managers
Compliance & Risk
Internal AI Champions
C-Suite
Prerequisites: M3 + M4
Pejman Gohari · CDO · Chief AI Officer · ORBii
GenAI Roadmap BPCE SI 2026-2030 (with BCG) · 6 GenAI Executive Committee pilots · Copilot KPMG · Data Factory Bpifrance · Author DUNOD
academy.orbii.tech
ORBii.Academy · M5 · Responsible GenAI · Prompting, Risks & Best Practices · Confidential · 2026 · 01
Section 1
Why "responsible" — What is already happening in your organization
"During every engagement, I systematically discover employees using ChatGPT, Claude, or Gemini with client data to work faster. This is not malicious intent — it is unmanaged pragmatism. And that is the exact definition of Shadow AI."
— Pejman Gohari · CDO · Chief AI Officer · ORBii · Engagements BPCE SI, KPMG, Bpifrance
The GenAI paradox in banking — 2026
Banking institutions are investing massively in structured GenAI programs — roadmaps validated by the Executive Committee, supervised pilots, dedicated teams. In parallel, hundreds of their employees are already using unapproved GenAI tools in their daily work.
REAL SITUATION OBSERVED ON ENGAGEMENT
Compliance Department of a major banking group, 2025: 12 out of 18 employees were using ChatGPT to summarize regulatory notes. None of them knew that prompts sent to ChatGPT.com were, under the terms of use at the time, potentially used for model training. Confidential internal data was exposed.
REAL SITUATION OBSERVED ON ENGAGEMENT
Commercial Department, retail network, 2024: Advisors were using a public LLM to draft client follow-up emails, including the client's name, account balance, and real estate project. A direct violation of GDPR Art. 5 (data minimization) and banking secrecy.
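The minimization principle the advisors violated can be applied mechanically: strip direct identifiers from a draft before any text leaves the workstation. The sketch below is illustrative only — the function name, regex patterns, and placeholder tags are assumptions for this example, not part of any approved banking toolchain, and pattern-based redaction is a baseline, not a complete DLP solution.

```python
import re

# Hypothetical pre-prompt redaction filter (illustrative sketch).
# Matches IBAN-like strings and euro amounts; client names are
# passed in explicitly because they cannot be guessed by pattern.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
AMOUNT_RE = re.compile(r"\b\d{1,3}(?:[ .,]\d{3})*(?:[.,]\d{2})?\s?(?:€|EUR\b)")

def redact(text: str, client_names: list[str]) -> str:
    """Replace direct identifiers with neutral placeholders."""
    out = IBAN_RE.sub("[IBAN]", text)
    out = AMOUNT_RE.sub("[AMOUNT]", out)
    for name in client_names:
        out = out.replace(name, "[CLIENT]")
    return out

draft = "Follow up with Marie Dupont: balance 12 500,00 €, real estate project."
safe = redact(draft, ["Marie Dupont"])
```

Only `safe` would ever be pasted into a prompt; the mapping back to the real client stays inside the institution's systems.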
💡
Responsible GenAI is not an additional restriction. It is the condition for AI to be durably useful — without exposing the institution to a CNIL sanction, without violating banking secrecy, without creating a regulatory precedent that would block all future deployment.
The 4 reasons why Shadow AI develops
1
The official tool does not exist yet
The organization's AI program is being deployed. In the meantime, employees find their own solutions. This is rational — and predictable.
2
The official tool is less capable than the public LLM
If the approved internal copilot produces results inferior to GPT-4, employees will switch to the better tool — even if it is prohibited. The official tool must earn adoption through quality.
3
No one explained the risks to them
This module exists precisely for this reason. Without awareness, employees cannot weigh the risks they are taking — for themselves and for the organization.
4
The AI policy is not communicated or accessible
If the AI policy exists but no one knows about it or can easily access it, it serves no purpose. Governance must be visible and understandable.
Definition
Shadow AI
Use of generative AI tools outside the framework approved by the organization — without IT validation, without a data processing agreement, without compliance with the AI policy. Shadow AI is not necessarily malicious: it is often a symptom of a legitimate need poorly addressed by the organization.