Event Q&A
Who Is Accountable When AI Advises the Board? Governing Agentic AI in Corporate Decision-Making
As AI evolves from analytical tools to systems that recommend strategic decisions, corporate accountability becomes more complex. Boards must retain ultimate responsibility while managing risks tied to opaque models, vendor dependence, and emerging regulatory frameworks. Effective governance will require stronger oversight, human judgment, continuous monitoring, and integration of AI-specific risks into enterprise risk management.
Mar 17, 2026
Gerry Kounadis, Head of Group Data Privacy, Technology & ESG Compliance Advisory, National Bank of Greece
Tags: Vendor and Third Party Risk
The views and opinions expressed in this content are those of the thought leader as an individual and are not attributed to CeFPro or any other organization
  • Boards retain ultimate accountability.
  • AI vendors may face product liability.
  • Single-vendor dependence creates risk.
  • AI outputs must be explainable and challengeable.
  • Governance must move to continuous oversight.
  • AI risks belong in enterprise risk management.

Ahead of Vendor and Third Party Risk Europe, we spoke with Gerry Kounadis about how, as AI evolves from analytical tools to systems that recommend strategic decisions, corporate accountability becomes more complex. Boards must retain ultimate responsibility while managing risks tied to opaque models, vendor dependence, and emerging regulatory frameworks. Effective governance will require stronger oversight, human judgment, continuous monitoring, and integration of AI-specific risks into enterprise risk management.

As agentic AI systems evolve from summarizing information to recommending strategic decisions, who should ultimately bear responsibility when those recommendations lead to harm—the company’s board, management, the AI vendor, or some combination?

Ultimate accountability must remain with the board and senior management, because fiduciary duties of oversight and judgment are non-delegable and cannot be transferred to an AI vendor. That does not eliminate vendor responsibility: under the revised EU Product Liability Directive, which Member States must transpose into national law by December 2026, AI systems are treated as products, so providers may face strict liability for defects, non-compliance, or security failures. The practical allocation tends to follow a natural division: the company bears primary responsibility for deployment decisions, while the vendor carries significant responsibility for the model's integrity and compliance, though these boundaries may overlap and will likely be tested as the framework matures.

If a company’s strategic planning relies heavily on a single vendor’s proprietary “black box” AI model, what governance structures or oversight mechanisms should boards implement to mitigate concentration and transparency risks?

Heavy reliance on a single proprietary AI model should be treated as a critical concentration and third-party dependency risk, especially in sectors such as financial services, where resilience expectations under DORA are already well established. Boards should therefore require independent model validation, robust contractual audit and information rights, decision logging for material outputs, and a credible, tested exit or substitution strategy. The governing principle should be simple: if an organisation cannot explain, challenge, or replace a model, it should not rely on it for strategic decision-making.

Where should regulators and courts draw the line between legitimate AI-based decision support and the illegal outsourcing of managerial judgment under EU corporate law?

The line should be drawn where AI stops informing managerial judgment and starts replacing it in practice. AI-based decision support remains legitimate when directors and executives can understand the basis of an output in practical terms, test it against alternatives, and exercise a genuine right of override. When humans merely endorse opaque recommendations that they cannot meaningfully interrogate or defend, the company has moved from decision support to an impermissible abdication of fiduciary responsibility under EU corporate law.

As autonomous AI systems become more integrated into executive decision-making, what new regulatory or governance models might emerge to ensure accountability while still enabling innovation?

As agentic AI becomes more integrated into executive decision-making, governance is likely to shift from periodic review to continuous oversight, because autonomous systems operate in real time, and governance must keep pace. This points toward dedicated AI governance structures, clearer accountability across business, risk, technology, and compliance functions, and stronger monitoring and escalation mechanisms. The broader regulatory direction is equally clear: the AI Act, together with sector-specific frameworks such as DORA, signals that high-impact AI will increasingly be governed more like critical infrastructure than ordinary software.

How might organizations evolve their risk management frameworks to proactively address emerging AI governance challenges—such as advanced hallucinations, systemic bias, or large-scale data leakage—in high-stakes corporate decision systems?

Organisations should embed AI-specific risks such as hallucinations, model drift, systemic bias, and data leakage directly into enterprise risk management rather than treating them as isolated technology issues. In practice, that means structured pre-deployment testing, continuous post-deployment monitoring, defined points for human intervention, and containment measures that can suspend a high-stakes system when it moves outside approved parameters — reflecting the AI Act's explicit requirements for high-risk systems to include human oversight and the ability to override, stop, or revert. For financial institutions, this must also be integrated with ICT and third-party risk governance, because under supervisory frameworks such as DORA, AI risk is inseparable from operational resilience and outsourcing oversight.

Gerry Kounadis Bio

Gerry is the Head of the Group Data Privacy, Technology & ESG Compliance Advisory Division at National Bank of Greece (NBG). He has a strong track record in conduct and digital regulation, with a focus on designing and implementing compliance and risk management frameworks across the financial services sector. His expertise includes third-party risk management, vendor governance, and complex outsourcing arrangements under evolving regulatory frameworks such as DORA and the EU AI Act. Gerry previously served as a senior manager in Deloitte’s risk advisory practice and holds an LL.M. in Finance (Goethe University Frankfurt) and an M.Sc. in Capital Markets, Regulation and Compliance (Henley Business School).
