Kieran Norton a principal (partner) at Deloitte & Touche LLP, is the US Cyber AI & Automation Leader for Deloitte. With over 25 years of extensive experience and a solid technology background, Kieran excels in addressing emerging risks, providing clients with strategic and pragmatic insights into cybersecurity and technology risk management.
Within Deloitte, Kieran leads the AI transformation efforts for the US Cyber practice. He oversees the design, development, and market deployment of AI and automation solutions, helping clients enhance their cyber capabilities and adopt AI/Gen AI technologies while effectively managing the associated risks.
Externally, Kieran helps clients in evolving their traditional security strategies to support digital transformation, modernize supply chains, accelerate time to market, reduce costs, and achieve other critical business objectives.
With AI agents becoming increasingly autonomous, what new categories of cybersecurity threats are emerging that businesses may not yet fully understand?
The risks associated with using new AI related technologies to design, build, deploy and manage agents may be understood—operationalized is a different matter.
AI agent agency and autonomy – the ability for agents to perceive, decide, act and operate independent of humans –can create challenges with maintaining visibility and control over relationships and interactions that models/agents have with users, data and other agents. As agents continue to multiply within the enterprise, connecting multiple platforms and services with increasing autonomy and decision rights, this will become increasingly more difficult. The threats associated with poorly protected, excessive or shadow AI agency/autonomy are numerous. This can include data leakage, agent manipulation (via prompt injection, etc.) and agent-to-agent attack chains. Not all of these threats are here-and-now, but enterprises should consider how they will manage these threats as they adopt and mature AI driven capabilities.
AI Identity management is another risk that should be thoughtfully considered. Identifying, establishing and managing the machine identities of AI agents will become more complex as more agents are deployed and used across enterprises. The ephemeral nature of AI models / model components that are spun up and torn down repeatedly under varying circumstances, will result in challenges in maintaining these model IDs. Model identities are needed to monitor the activity and behavior of agents from both a security and trust perspective. If not implemented and monitored properly, detecting potential issues (performance, security, etc.) will be very challenging.
How concerned should we be about data poisoning attacks in AI training pipelines, and what are the best prevention strategies?
Data poisoning represents one of several ways to influence / manipulate AI models within the model development lifecycle. Poisoning typically occurs when a bad actor injects harmful data into the training set. However, it’s important to note that beyond explicit adversarial actors, data poisoning can occur due to mistakes or systemic issues in data generation. As organizations become more data hungry and look for useable data in more places (e.g., outsourced manual annotation, purchased or generated synthetic data sets, etc.), the possibility of unintentionally poisoning training data grows, and may not always be easily diagnosed.
Targeting training pipelines is a primary attack vector used by adversaries for both subtle and overt influence. Manipulation of AI models can lead to outcomes that include false positives, false negatives, and other more subtle covert influences that can alter AI predictions.
Prevention strategies range from implementing solutions that are technical, procedural and architectural. Procedural strategies include data validation / sanitization and trust assessments; technical strategies include using security enhancements with AI techniques like federated learning; architectural strategies include implementing zero-trust pipelines and implementing robust monitoring / alerting that can facilitate anomaly detection. These models are only as good as their data, even if an organization is using the latest and greatest tools, so data poisoning can become an Achilles heel for the unprepared.
In what ways can malicious actors manipulate AI models post-deployment, and how can enterprises detect tampering early?
Access to AI models post-deployment is typically achieved through accessing an Application Programming Interface (API), an application via an embedded system, and/or via a port-protocol to an edge device. Early detection requires early work in the Software Development Lifecycle (SDLC), understanding the relevant model manipulation techniques as well as prioritized threat vectors to devise methods for detection and protection. Some model manipulation involves API hijacking, manipulation of memory spaces (runtime), and slow / gradual poisoning via model drift. Given these methods of manipulation, some early detection strategies may include using end point telemetry / monitoring (via Endpoint Detection and Response and Extended Detection and Response), implementing secure inference pipelines (e.g., confidential computing and Zero Trust principles), and enabling model watermarking / model signing.
Prompt injection is a family of model attacks that occur post-deployment and can be used for various purposes, including extracting data in unintended ways, revealing system prompts not meant for normal users, and inducing model responses that may cast an organization in a negative light. There are variety of guardrail tools in the market to help mitigate the risk of prompt injection, but as with the rest of cyber, this is an arms race where attack techniques and defensive counter measures are constantly being updated.
How do traditional cybersecurity frameworks fall short in addressing the unique risks of AI systems?
We typically associate ‘cybersecurity framework’ with guidance and standards – e.g. NIST, ISO, MITRE, etc. Some of the organizations behind these have published updated guidance specific to protecting AI systems which can be very helpful.
AI does not render these frameworks ineffective – you still need to address all the traditional domains of cybersecurity — what you may need is to update your processes and programs (e.g. your SDLC) to address the nuances associated with AI workloads. Embedding and automating (where possible) controls to protect against the nuanced threats described above is the most efficient and effective way forward.
At a tactical level, it is worth mentioning that the full range of possible inputs and outputs is often vastly larger than non-AI applications, which creates a problem of scale for traditional penetration testing and rules-based detections, hence the focus on automation.
What key elements should be included in a cybersecurity strategy specifically designed for organizations deploying generative AI or large language models?
When developing a cybersecurity strategy for deploying GenAI or large language models (LLMs), there is no one-size-fits-all approach. Much depends on the organization’s overall business objectives, IT strategy, industry focus, regulatory footprint, risk tolerance, etc. as well as the specific AI use cases under consideration. An internal use only chatbot carries a very different risk profile than an agent that could impact health outcomes for patients for example.
That said, there are fundamentals that every organization should address:
- Conduct a readiness assessment—this establishes a baseline of current capabilities as well as identifies potential gaps considering prioritized AI use cases. Organizations should identify where there are existing controls that can be extended to address the nuanced risks associated with GenAI and the need to implement new technologies or enhance current processes.
- Establish an AI governance process—this may be net new within an organization or a modification to current risk management programs. This should include defining enterprise-wide AI enablement functions and pulling in stakeholders from across the business, IT, product, risk, cybersecurity, etc. as part of the governance structure. Additionally, defining/updating relevant policies (acceptable use policies, cloud security policies, third-party technology risk management, etc.) as well as establishing L&D requirements to support AI literacy and AI security/safety throughout the organization should be included.
- Establish a trusted AI architecture—with the stand-up of AI / GenAI platforms and experimentation sandboxes, existing technology as well as new solutions (e.g. AI firewalls/runtime security, guardrails, model lifecycle management, enhanced IAM capabilities, etc.) will need to be integrated into development and deployment environments in a repeatable, scalable fashion.
- Enhance the SDLC—organizations should build tight integrations between AI developers and the risk management teams working to protect, secure and build trust into AI solutions. This includes establishing a uniform/standard set of secure software development practices and control requirements, in partnership with the broader AI development and adoption teams.
Can you explain the concept of an “AI firewall” in simple terms? How does it differ from traditional network firewalls?
An AI firewall is a security layer designed to monitor and control the inputs and outputs of AI systems—especially large language models—to prevent misuse, protect sensitive data, and ensure responsible AI behavior. Unlike traditional firewalls that protect networks by filtering traffic based on IP addresses, ports, and known threats, AI firewalls focus on understanding and managing natural language interactions. They block things like toxic content, data leakage, prompt injection, and unethical use of AI by applying policies, context-aware filters, and model-specific guardrails. In essence, while a traditional firewall protects your network, an AI firewall protects your AI models and their outputs.
Are there any current industry standards or emerging protocols that govern the use of AI-specific firewalls or guardrails?
Model communication protocol (MCP) is not a universal standard but is gaining traction across the industry to help address the growing configuration burden on enterprises that have a need to manage AI-GenAI solution diversity. MCP governs how AI models exchange information (including learning) inclusive of integrity and verification. We can think of MCP as the transmission control protocol (TCP)/internet protocol (IP) stack for AI models which is particularly useful in both centralized, federated, or distributed use cases. MCP is presently a conceptual framework that is realized through various tools, research, and projects.
The space is moving quickly and we can expect it will shift quite a bit over the next few years.
How is AI transforming the field of threat detection and response today compared to just five years ago?
We have seen the commercial security operations center (SOC) platforms modernizing to different degrees, using massive high-quality data sets along with advanced AI/ML models to improve detection and classification of threats. Additionally, they are leveraging automation, workflow and auto-remediation capabilities to reduce the time from detection to mitigation. Lastly, some have introduced copilot capabilities to further support triage and response.
Additionally, agents are being developed to fulfill select roles within the SOC. As a practical example, we have built a ‘Digital Analyst’ agent for deployment in our own managed services offering. The agent serves as a level one analyst, triaging inbound alerts, adding context from threat intel and other sources, and recommending response steps (based on extensive case history) for our human analysts who then review, modify if needed and take action.
How do you see the relationship between AI and cybersecurity evolving over the next 3–5 years—will AI be more of a risk or a solution?
As AI evolves over the next 3-5 years, it can help cybersecurity but at the same time, it can also introduce risks. AI will expand the attack surface and create new challenges from a defensive perspective. Additionally, adversarial AI is going to increase the viability, speed and scale of attacks which will create further challenges. On the flip side, leveraging AI in the business of cybersecurity presents significant opportunities to improve effectiveness, efficiency, agility and speed of cyber operations across most domains—ultimately creating a ‘fight fire with fire’ scenario.
Thank you for the great interview, readers may also wish to visit Deloitte.