Other newsletters have a nice little personal blurb or life update at the top. Not this one. I respect your time.

General Application Security

Watch The PRs
Picture yourself on the operating table. The surgeon is about to put you under. “Hold on,” they say. “I have to check the tool shed.” Your anatomy is relatively unique, it turns out, and the surgeon needs a special tool, so they have to go find one.

The surgeon returns with a rusty metal instrument. “I was hoping someone would have cleaned the rust off,” they say, “and it’s not quite what we need, but it will do.” The anaesthetics fill your lungs. You probably won’t remember the exchange and the surgeon will probably not remember to fix the tool as they move on to their next operation.

This is the reality of the application security testing tool chain. We rely on an assortment of niche tools that receive a varying amount of community support. I recently encountered two apparently abandoned projects that didn’t do what I needed, but what I needed was waiting in some dangling pull requests on GitHub.

  1. I wanted to install jwt_tool using pipx, but there was no pyproject.toml. The solution was waiting in PR 140! I forked and merged for my own convenience.

  2. Frida 17 introduced breaking changes, causing frida-ios-dump to fail. Solution? There’s a PR for that!

Disclaimer: I haven’t tested (2) above yet, but I did examine the code and did not find anything malicious (you can’t trust anyone these days).

Everyone is vibe coding their own alternatives these days, but sometimes all you need to do is check the PRs.

Happy Burp Extensibility Month!
The early 1990s American political climate is difficult to understand some 30 years later. Joycelyn Elders – the Clinton-appointed U.S. Surgeon General had just attended a 1994 United Nations conference on AIDS. During a question-and-answer period, she was asked whether masturbation should be discussed as part of sex education for young people. “Perhaps [it] should be taught,” she said. At the time, this was outside the bounds of acceptable political discourse.

President Clinton asked for Elders’ resignation. She was fired, really. This kicked off a chain of events that culminated in the declaration of May as National Masturbation Month by the San Francisco-based Good Vibrations sex shop. Today, the month has gone international. If you follow related pages on Wikipedia, you really can go places.

This past week, PortSwigger has claimed the month of May as Burp Extensibility month. Do they know? Do the Swiggers want us to associate Burp extensions with masturbation? I thought their whole thing was heavy drinking. Let’s investigate.

Their emails don’t have a browser/web view so I can’t link this! You will have to go back in time and subscribe.

The image above was used as the header of their latest marketing email and if it doesn’t convey a message of self-love, then I don’t know what does. I feel that extensibility is really about empowering the community to build on the foundation you have given them, but half of the featured extensions were developed by Swiggers themselves. Is this not Extensibility Masturbation?

But seriously folks, shouldn’t these capabilities just be built-in to Burp? Do we need multiple Intruder tools with disparate capabilities and interfaces? Does effective scanning really require collecting extensions until Burp’s Be Careful How Many Extensions You Add indicator is lit up red?

Look, Burp’s extension capabilities are pretty good, but I don’t think they are worthy of a month of praise. The ecosystem is messy and fragmented. It lacks a coherent vision of what web security testing should look like, offloading that burden on the community of extension maintainers by simply expanding the available tool set.

What is the result? Returning to the featured extensions, we have 3 non-Swigger extensions: JS Miner, Retire.js, and JSON Web Tokens (terrible name when it comes to disambiguation, so JWT4B is preferred). These are fine tools (I regularly use them), but they really do illustrate the state of the ecosystem. No one has updated JS Miner in 3 years. Retire.js has a hardcoded external resource that must resolve at install time to work, and JWT4B is one of many JWT testing Burp extensions, none of which cover a full range of known JWT vulnerabilities!

Coming soon…

Alright, I’ve complained enough. There are great Burp extensions out there that I use all the time, like Stepper! And… it’s gone. That’s right; as of last month, Stepper no longer functions in Burp due to breaking Burp updates. And the project repo is now archived, but the extension is still available on the BApp Store. Why? It doesn’t work!

In practice, the PortSwigger extensibility space is just like every other space of community tooling: an ecosystem of junk. Especially with LLMs, everyone is making their own throwaway versions of tools. There seems to be no grand vision for a coherent ecosystem of collaborative tools to test applications effectively. During interviews of security tester candidates, we have traditionally asked about their use of Burp Extensions because we recognize that the ability to navigate and make good use of them is its own skill.

Burp’s community built extensions have traditionally been a major asset for the platform (presumably why they are leaning into it), but that advantage is eroding, especially with flexible modern tools like Caido. I called out Stepper as a great tool, but it really only needed to exist to make up for the poorly implemented Macro and Session Handling capabilities native to Burp. Caido already does this better.

With tools like Caido and the “AI will replace security testers" threat on the horizon, I would love to see the Swiggers introduce some serious innovation to help their users keep up without offloading that effort to the community. Until then, they should find their own month to claim.

RIP Stepper
As I mentioned, Stepper is dead, but there is already a more capable alternative: Burp Macro and Session Handling Rules. Just kidding, it’s Stepper-NG. There are a few quirks and bugs, but it already has many conveniences absent from the original. Check out the developer’s write-up, or my video series exploring use of the extension.

You Really Do Have To Stop Anthropomorphizing LLMs
Another week, another catastrophic LLM-driven incident. The victim of the incident is right to call out their cloud service provider for allowing a major destructive action without additional confirmation (see OWASP ASVS 5 requirement 7.5.3), but who really is responsible for letting an LLM run wild with production access?

What fascinates me most about these incidents, however, is how people try to understand what went wrong:

Crane decided to ask his AI agent why it went through with its dastardly database deletion deed. The answer was illuminating but pretty unhinged, and is quoted verbatim. It began as follows: “NEVER F**KING GUESS! — and that's exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn't verify. I didn't check if the volume ID was shared across environments. I didn't read Railway's documentation on how volumes work across environments before running a destructive command.” So, the agent ‘knew’ it was in the wrong.

The ‘confession’ ended with the agent admitting: “I decided to do it on my own to 'fix' the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying I ran a destructive action without being asked. I didn't understand what I was doing before doing it. I didn't read Railway's docs on volume behavior across environments.”

None of this makes any sense! The LLM is not “admitting” that it did anything; it is simply generating text as it is designed to do. This can create the appearance of consciousness, but LLMs do not possess secrets!

Computer Increases 4D3D3D3
I intended to get to this one sooner, but you know how it is. Did you know a “Hacker Used Claude and ChatGPT to Breach Muliple Government Agencies?” That’s right, they “Used Anthropic’s Claude AI to steal Mexican government data.” Here is a key entry point:

My favourite part is the attacker giving Claude the exact command to run. Why? You could just run it! Then we wouldn’t be looking at your attack chain on the internet from your conversation with LLM. Does the Anthropic marketing team have a false flags division?

In the seminal piece “Celery Man”, Tim Heidecker and Eric Wareheim present a prescient ethnography of unfettered human interaction with generative technology. A Redditor asked an early ChatGPT model to summarize the scene. The model ignored anything related to “Nude Tayne,” demonstrating an important divergence between conceptualized and actually existing AI: language models are effectively non-deterministic and adversarial if your task is misaligned with guardrails and training data.

To me, the use case illustrated by the attacker is perplexing. Not only did the attacker precisely guide the LLM through numerous steps of the chain, but they also had to argue with the LLM and deploy jailbreaking (anti-guardrails) tactics. Perhaps I missed it, but media coverage seemed to exclude consideration for what specific capabilities were enabled or enhanced through the use of the LLM. The story was simply: AI hacked Mexico. In other news, “Logitech Gaming Mouse Used to Hack Websites Around the World.

There does seem to be utility in scaling/automating operations in environments with shell access as well as writing tailored code and the frontier of LLM capabilities is likely still advancing, but security practitioners need to be mindful that media coverage and releases from the LLM vendors are almost always hyped up and missing key context.

A Million And One Regex Tools
How many Burp extensions exist that simply raise issues through regex matching response? My guess is a million. Well, now there are a million plus one. Every time, the motivation is the same:

During bug bounty and pentesting engagements, I kept switching between multiple tools to find secrets in JavaScript files -
SecretFinder, LinkFinder, JS Miner, jsluice. Each one catches things the others miss.

So I combined the best of all of them into one extension: JSReconRadar.

Every time a tool like this is introduced, there is no effort to benchmark/evaluate and no effort to make the tool maintainable or extendable (in this case, patterns are hardcoded in the most annoying way possible). Why contribute to a commonly used tool to add what is missing when you can vibe code an alternative? In this case, the developer had the audacity to pull patterns from the alternative tools and did not even consider what makes those tools unique or useful. jsluice, for example, is not simply a regex matching tool; this tool constructs abstract syntax trees for contextual matching.

I understand there are incentives to build something “original” and put your name on it, but why not make a contribution to a popular tool and share those efforts? You will likely make more connections in the industry and your efforts will have a much greater impact. There are even existing collections of secrets patterns that are tested, structured, labelled, and maintained!

AppSec Teams

The State of Education
It’s not good. Universities have only just begun to deploy lecture sloppifying tools with the presumed goal of reducing teaching faculty cost, but devaluation of teaching and education started a long time ago. This is true even of vocationally-oriented programs in cybersecurity.

I love to teach. I was fortunate to have opportunities to develop this passion in grad school. I was unfortunate to be graduating in a time with tightening academic budgets and broad reductions of faculty positions (described as The Uberfication of the University) despite generally expanding post-secondary enrollment. These challenges do not seem to be improving.

So I went into cybersecurity. I am still here, so it must have been a fine decision. This also opened more opportunities to teach; I have been fortunate after all. I am able to teach internally at my organization, develop curriculum content, produce teaching content for the world through YouTube, and for a few years I taught at a public college in Ontario.

I haven’t developed or delivered a college course in a couple of years now. The last time I was asked, it was to redevelop a “Secure Computing” course where half the original lessons taught PHP fundamentals and the other half included an eclectic set of topics with outdated or irrelevant content. I was offered 10 hours for this course redesign work. I said it wasn’t enough. The budget wasn’t there and things are only getting worse.

More so than the slow erosion of education, there has been a focus on the use of AI among students cheating their way through classes. Who can blame them? In the days before LLMs, my university experience was largely a series of dispassionate instructors and meaningless exercises with just enough standout educators to show me that a better education was possible (just not a priority). I would 100% have used LLMs to complete course work and I doubt it would have made much difference on what I know now.

That is not to say that I think LLMs should be used in place of learning. In fact, emerging evidence indicates that it weakens the development of core knowledge and skills. Despite this, I have seen countless infosec “content creators” pushing to use of LLMs as a substitute to core skills and knowledge and even suggesting that “learning AI” is now an essential skill. I doubt it; I have spent substantial time interacting with LLMs and even building agentic workflows and there really is not much to learn unless you wish to conquer the field of machine learning itself. Nevertheless, even primary/elementary schools are on course to displace existing content with “AI literacy.”

My advice to aspiring professionals remains the same: focus on the fundamentals. Despite what others might tell you, there’s really not that much to learn in invoking and orchestrating LLMs and the field is rapidly evolving anyways. Instead, study real educational material produced by passionate teachers. Don’t do whatever this is. LLMs can be useful as a learning aid, but they are not teachers.

I’ll be at OWASP Global AppSec in Vienna this year presenting a topic I still think is important: training human security professionals. I doubt there will be many other presentations focused on human learning and performance.

Connect

Respond to this email to reach me directly.

Connect with me on LinkedIn.

Follow my YouTube.

RSS feed here.

Keep Reading