The AI Exchange: Innovators in Payment Security with Pedro Fortuna, CTO and Co-Founder of Jscrambler
Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) within their organizations.
In this edition of The AI Exchange, Jscrambler’s CTO and Co-Founder, Pedro Fortuna, offers insight into how his company is using AI, and how this rapidly growing technology is shaping the future of payment security.
How have you most recently incorporated artificial intelligence within your organization?
At Jscrambler, we’ve recently embedded AI directly into our client-side security platform – particularly to support merchant compliance with PCI DSS requirements 6.4.3 and 11.6.1. We’ve launched the first AI Assistant purpose-built for script inventory and authorization, a daily operational challenge that many merchants struggle with due to the volume of scripts, how frequently they change, and the limited resources available to analyze them.
This assistant doesn’t just classify JavaScript; it understands script behavior, origin, and context – and explains its reasoning. It’s especially valuable in bridging the knowledge gap that often exists when those tasked with approving scripts are not seasoned security analysts. With the assistant, they gain the insight and context they’re missing, enabling them to make smarter, faster, more confident decisions and to focus human effort where it matters most.
We’ve also adopted AI internally across our engineering and product teams, leveraging code assistants for secure software development, LLMs for reverse-engineering analysis, and multi-agent flows to support our research pipeline and content generation. But our biggest leap has been putting AI into our customers’ hands, in production, with full human accountability.
What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent?
There’s been a clear shift from tool thinking to system thinking. Instead of viewing AI as an assistant for one-off tasks, we’re now designing workflows that expect AI to be in the loop – from internal engineering pipelines to customer-facing interfaces.
One of the biggest changes is how AI has enabled us to accelerate innovation, particularly in shaping the customer experience of our products. Tasks that used to require deep technical expertise – like reviewing and authorizing third-party scripts – can now be guided by an AI assistant that simplifies complexity without sacrificing accuracy. This is fundamentally changing how our users interact with security processes: they feel more empowered, more confident, and more in control.
That said, we’re also deeply aware of AI’s limitations. That’s why we’ve built guardrails and ensured humans remain in the loop, especially when decisions have security implications. This human-AI collaboration helps prevent hallucinations from creating issues and ensures we maintain trust and accountability as AI becomes more embedded in our systems.
How do you see AI evolving or impacting payment security in the future?
The browser is becoming a critical front in the future of payment security – not just because it’s where humans interact, but because it’s where AI agents will increasingly transact on our behalf. As agentic AI systems mature, we’ll see a shift: browsers will no longer be just the “last mile” of the user experience – they’ll become the primary interface for autonomous buyers, capable of handling product discovery, cart management, and checkout entirely on their own.
Yes, many payments will move to more secure backend APIs, embedded wallets, and mobile-native flows – and those will benefit from stronger inherent protections. But the world doesn’t change overnight. The web remains vast, fragmented, and long-tailed. For years to come, millions of websites will continue to behave as they always have, expecting a human to drive the shopping experience. In reality, it will often be an AI agent or “AI browser” navigating those flows – clicking, scrolling, and making purchasing decisions in human-like ways.
This shift introduces both new opportunities and new risks. Skimming attacks will evolve, becoming more intelligent and evasive – not only avoiding traditional detections, but also tricking AI agents through subtle DOM manipulation or behavioral deception. Just as attackers today exploit human psychology, tomorrow they’ll exploit the behavioral assumptions of automated agents – and yes, plenty of prompt injection will come along for the ride.
On the defensive side, AI will go beyond fraud detection and anomaly spotting, playing a bigger role in identifying intent and orchestration – especially in client-side threats like web skimming, clickjacking, and iframe tampering. These attacks are highly dynamic and often appear benign at a glance. Human review doesn’t scale. AI fills that gap – not just by recognizing known patterns, but by detecting orchestrated changes in script behavior, origin, or context that point to compromise.
Ultimately, AI in payment security is not just about blocking threats – it’s about building systems that can adapt, explain, and defend in real time, in an environment where both users and attackers are increasingly non-human. The browser – once considered the weakest link – may soon become the most actively defended and dynamically instrumented layer in the payment stack. And as browsers themselves become AI-assisted, this evolution becomes not just likely, but inevitable.
What potential risks should organizations consider as AI becomes more integrated into payment security?
As AI becomes more integrated into payment security, several key risks emerge:
- Hallucinations and misclassifications can lead to incorrect decisions, especially when AI is used to analyze scripts, detect fraud, or classify behaviors.
- Prompt injection and other adversarial inputs can be used to manipulate AI behavior in unpredictable ways.
- Information leakage beyond prompt injection. AI systems that summarize logs, analyze code, or retain conversation context may unintentionally expose sensitive internal data – such as checkout configuration, telemetry, fraud scoring thresholds, or even API keys – through poorly scoped responses, caching, or inference. Multi-agent systems and plugins can also increase the risk of cross-context data exposure (a minimal output-scoping guard is sketched after this list).
- Model drift over time may reduce the accuracy and reliability of AI systems if not continuously monitored and updated.
- Expanded attack surfaces, including the AI supply chain, prompts, plugins, APIs, and third-party integrations, introduce new vectors for compromise.
- Lack of explainability can hinder trust and make it difficult to validate or audit decisions made by AI systems.
- Over-reliance on automation may reduce human oversight in critical workflows, increasing the risk of undetected failure.
- Data governance and compliance challenges, especially as emerging regulations such as the EU AI Act and existing frameworks such as GDPR require organizations to document how AI systems operate, make decisions, and protect personal data.
What advice would you provide for an organization just starting their journey into using AI?
Start by focusing on problems, not models. Identify concrete pain points where your team is struggling with scale, ambiguity, or repetitive decisions – and explore how AI might assist, not replace, the humans in that loop.
Resist the temptation to automate everything from day one. Instead, begin with narrow, well-scoped use cases that can be tested, measured, and improved incrementally. Think of AI as a co-pilot, not a black box.
From the outset, plan for:
- Human oversight in workflows that involve risk, compliance, or customer impact (a minimal gating pattern is sketched after this list).
- Governance and auditability – especially around data inputs, model behavior, and decision logic.
- Failure modes – not just accuracy, but what happens when the model is wrong, outdated, or manipulated.
Understand that AI-enabled systems are not static products; they require maintenance, drift detection, and feedback loops. That’s why it’s crucial to embed AI into your existing operational and security processes – not bolt it on as a side project.
And finally: document everything. If you can’t explain how the system works, how it was trained, and how it’s monitored, it will be difficult to defend its decisions – in the boardroom, or in front of an assessor or a regulator.
What AI trend (not limited to payments) are you most excited about?
I’m most excited about the rise of agentic AI systems – not just models that generate responses, but AI that can plan, reason, take action, and adapt across complex workflows. This has massive implications for how we build and operate security systems, especially in security-critical environments like payments.
As someone who’s been developing security products for over 15 years, I’ve seen firsthand how overwhelming security operations can become for organizations. The sheer volume of alerts and micro-decisions that teams are expected to handle is staggering – and far too often, it’s the small, seemingly low-risk choices that compound into real-world security failures.
That’s why I’ve always advocated for security by default – designing systems that remove decision-making friction wherever it’s safe to do so. But of course, you can only hardcode defaults for the decisions you fully understand.
This is where AI changes the equation. With the right use of agentic AI, we can confidently offload more of that decision burden, especially in noisy or repetitive workflows – while still preserving and even enhancing human oversight where it truly matters.
Beyond payments, I’m also encouraged by the growing emphasis on explainability and governance. The future isn’t just about smarter AI – it’s about AI we can trust, verify, and control. And in security, that’s non-negotiable.