Artificial Intelligence is no longer a pilot or lab experiment. In many organizations, it is already embedded in customer service workflows, finance operations, HR processes, software development pipelines, and (ironically) security itself. The productivity gains are real, but so is the pressure to move fast. That combination is exactly where risk starts to increase – look no further than the high profile professional services firms getting caught with high value projects citing non-existent references.
Decisions about how, why, and where to use AI are not IT decisions. They are business decisions with operational, legal, risk management, and financial consequences. Systems that read sensitive documents, generate code, move money, or interact directly with customers become part of the organization’s critical infrastructure and need to be included in the risk assessment process. Approved or not, organizational data may be already being used with AI. For that reason alone, AI must be included in Business Impact Analysis, vendor risk management, and executive risk discussions.
What’s Different About Generative and Agentic AI
Generative AI creates outputs like text, code, or summaries. It generates the information based on probabilistic models trained on massive data sets (Large Language Models or LLMs). A good analogy is like asking the least impressive intern to help with a big assignment. They may put forth the effort, but it could still be incorrect.
Agentic AI goes a step further. These systems are designed to take actions: sending emails, updating records, triggering workflows, interacting with generative AI, or executing code. The risk profile changes materially when AI is no longer advising humans but acting on their behalf.
Key Cybersecurity Risks Executives Need to Address
Data Privacy and Data Control
AI systems routinely process sensitive information: contracts, financial data, internal communications, and source code. Logs, prompts, and training artifacts can quickly become shadow data repositories that fall outside existing retention, classification, and monitoring controls.
Executives should be asking straightforward questions: What data is allowed to flow into these tools? Where is it stored? Is it used for model training? Can the vendor prove segregation and compliance, or are we relying on marketing assurances? In practice, this often comes down to whether the vendor can provide independent evidence such as a SOC 2 report and whether leadership is willing to accept the residual risk if they cannot (just like standard vendor risk management processes).
Integrity of AI-Driven Decisions
AI systems can and do “hallucinate” generating answers that sound authoritative but are factually wrong. In a recent effort, we had AI generating fictional service efforts and experience in response to an RFP – based on nothing but a slightly off target prompt. That is inconvenient in low-risk use cases and unacceptable in areas like finance, legal review, regulatory compliance, medical decisions, or contractual commitments.
For high-impact decisions, we recommend “human in the loop” as an AI security requirement. Organizations need clear rules about where AI can inform decisions and where a human must approve – AIs are not currently capable of considering certain kinds of context or the ethics of certain decisions. Validation, reasonableness checks, and documented accountability still matter. If something goes wrong will the board or your executives accept “but the AI told me”?
Agentic AI and Privilege Risk
Agentic Ai systems can initiate actions including intelligent interfaces between solutions, or a series of steps to help with automation. When they are over-privileged or poorly constrained, the result is functionally no different than giving an employee too many permissions. Even if they (AI or person) do not intend harm, they can cause chaos with unrestricted access to critical systems.
Actions like injecting data into financial systems, modifying customer records, making medical decisions, or triggering external communications should always require human-in-the-loop oversight. We all love automation, but it needs to designed and implemented around an intelligent, risk-based, approach.
The control model here is familiar: least privilege, tiered approvals, separation of duties, and logging that supports real accountability. The same principles used for human access management apply to your AI model, whether the “user” is breathing electrons or oxygen.
Expanded Attack Surface
AI platforms introduce new attack paths. Data poisoning, prompt injection, and model manipulation are potential security threats to AI solutions. At the same time, adversaries are increasingly using AI to accelerate reconnaissance, phishing, and social engineering.
From a cybersecurity perspective, AI systems should be treated like any other externally exposed application. They belong in threat models. They require input validation, integrity checks, and testing. If an organization would not deploy a traditional application without these controls, there is no defensible reason to treat AI differently.
Governance and Regulatory Expectations
Regulators, insurers, and auditors are converging on a consistent expectation: organizations must demonstrate that AI risks are identified, assessed, and managed. Frameworks such as the NIST AI Risk Management Framework, EU AI Act, and some state level requirements differ in scope, but they emphasize similar themes: clear ownership, documented lifecycle management, data accountability, and traceability of decisions.
At a minimum, organizations should maintain an AI risk register, defined governance policies, continuous monitoring, and executive-level reporting. If leadership cannot explain where AI is used, the data accessed by the AI, and how risk is managed, that gap will eventually be exposed. Unfortunately, the exposure will often be during an incident or regulatory inquiry.
The Bottom Line
AI is neither inherently safe nor inherently dangerous. What creates risk is informal adoption without governance and discipline. Executives do not need to ask “Should we use AI?” but “Where does AI create real value, and what controls make that use acceptable given our risk tolerance?”
Organizations that approach AI with the same rigor they apply to other enterprise systems will be able to scale its use responsibly. Those that do not will repeat history: high cost, high effort, evolving threats, enterprise solutions that fall well short of expectations. Focus on governance, discipline, use cases that help business operations, and risk to the level acceptable to the organization.
RubinBrown Cyber Security and AI Services teams are dedicated to helping organizations identify risks, strengthen defenses, and build lasting cybersecurity resilience through proactive strategy, education, and technical expertise.
Published: 01/08/2026
Readers should not act upon information presented without individual professional consultation.
Any federal tax advice contained in this communication (including any attachments): (i) is intended for your use only; (ii) is based on the accuracy and completeness of the facts you have provided us; and (iii) may not be relied upon to avoid penalties.