Rigorous AppSec - April 29, 2025
The only non-LLM AppSec newsletter with mostly LLM content.
General Application Security
Tabnabbing is Still Possible in Modern Browsers
The OWASP resource for Reverse Tabnabbing presents an unambiguous message regarding the state of this vector: “Update 2023 - this is fixed in modern, evergreen, browsers”. This is actually true only of the <a>, <area>, and <form> elements, which now implement an implicit rel="noopener". If cross-origin links are opened via window.open() without specifying noopener, the opened page will be placed in the same Browsing Context Group (BCG) and will have shared access to a Window object, permitting modification of the opener’s window.location. This appears to be blocked by default in Firefox, but the attack can still occur in Chromium-based browsers. This is likely a pretty rare and insignificant issue in practice, but it does make a further case for implementing Cross-Origin-Opener-Policy, which can be used to prevent documents from sharing a BCG. This will be represented by requirement 3.4.8 in the coming ASVS V5. The web is complex, and the best defense often seems to be as many defenses as you can reasonably implement.
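As a rough sketch of the remaining vector (the URLs here are hypothetical), a page that opens a cross-origin link via window.open() without noopener hands the new tab a usable reference to its Window, which the new tab can use to redirect the original tab:

// Opener page (vulnerable pattern): no "noopener" in the feature string,
// so the opened tab receives a reference to this window via window.opener.
document.getElementById("partner-link")?.addEventListener("click", () => {
  window.open("https://partner.example/offers", "_blank"); // hypothetical URL
});

// Attacker-controlled page at partner.example: swap the original tab out
// for a phishing page while the user is focused on the new tab.
if (window.opener) {
  window.opener.location = "https://login.evil.example/"; // hypothetical URL
}

// Defenses: pass noopener explicitly when opening...
window.open("https://partner.example/offers", "_blank", "noopener,noreferrer");
// ...and/or serve a Cross-Origin-Opener-Policy header (e.g. "same-origin")
// from the opener so cross-origin popups land in a separate Browsing Context Group.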
Combining Burp Suite, AI, and Private Equity
PortSwigger has gone all in on AI capabilities. All in. They claim to not be “cashing in on the hype”, but it’s fairly apparent that they are cashing in. Rather than building a flexible platform that lets organizations use internally hosted models, PortSwigger has used the opportunity to introduce an additional revenue stream (presumably) by requiring all of your sensitive testing data to pass through their LLM service while charging you credits for it.

What’s the profit margin on this? GPT-4o tokens are available through Azure at $2.50 per 1M input tokens and $10.00 per 1M output tokens, but we cannot be sure that Credits = Tokens.
This might make sense if the newly integrated LLM capabilities consisted of more than passing prompts to GPT-4o or something. So how does this new functionality work? You pass prompts to GPT-4o. A sponsored demonstration by John Hammond illustrates just how quickly you can burn through tokens without accomplishing much of anything.
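For a rough sense of the raw cost floor (assuming credits map directly to tokens, which is not confirmed, and using the Azure list prices above), here is the arithmetic for a single hypothetical request/response analysis:

// Azure list prices quoted above, in dollars per one million tokens.
const INPUT_PER_M = 2.5;
const OUTPUT_PER_M = 10.0;

// Hypothetical single analysis: a large request/response pair plus a
// system prompt in, a short narrative finding out.
const inputTokens = 4_000;
const outputTokens = 800;

const cost =
  (inputTokens / 1_000_000) * INPUT_PER_M +
  (outputTokens / 1_000_000) * OUTPUT_PER_M;

console.log(cost.toFixed(4)); // roughly $0.018 per analysis at list price

Whatever is charged per credit on top of a baseline like that is the margin in question.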
Though it is claimed to be “a year in the making”, I have a hard time believing that this approach was chosen to optimize value and usability for customers working in highly sensitive environments, but perhaps there is more to come.

Currently, this option is unavailable.
Thanks to Burp’s extensibility, however, it is quite easy to implement alternative approaches to LLM integration. I detail such an approach in my Piper series on YouTube, which demonstrates how you can implement alternatives to extensions like AI HTTP Analyzer that instead call out to your chosen LLM. It’s worth pointing out that the example use case I show and the AI HTTP Analyzer extension both seem to provide absolutely no value for experienced app testers (just look at the example used to promote the extension).
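As a rough illustration of the kind of alternative I mean (not the exact setup from the Piper series), a small script can accept an HTTP message on stdin, roughly how Piper hands messages to external commands, and send it to a locally hosted model. The Ollama-style endpoint and model name below are assumptions you would swap for your own environment:

// Reads an HTTP request/response from stdin and asks a locally hosted
// model about it, keeping the data off third-party services.
import { stdin } from "node:process";

async function readStdin(): Promise<string> {
  const chunks: Buffer[] = [];
  for await (const chunk of stdin) chunks.push(chunk as Buffer);
  return Buffer.concat(chunks).toString("utf8");
}

async function main() {
  const httpMessage = await readStdin();

  // Assumed local endpoint and model (Ollama-style API); adjust as needed.
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      prompt: "Summarize anything security-relevant in this HTTP message:\n\n" + httpMessage,
      stream: false,
    }),
  });

  const data = (await res.json()) as { response?: string };
  console.log(data.response ?? "(no response)");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});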
If PortSwigger really aims to drive revenue through token purchases, they have a real dilemma: to offer more than user-built extensions can, they ultimately have to maintain a gap between what extensions are allowed to do and what native functionality can do.
So, who will win? Will it be Burp’s users, or their investors? Regrettably, I have lost the link, but I recently came across a claim that the reason users are moving away from Caido (and presumably to Burp) is its weak extension ecosystem. It will be interesting to see if PortSwigger is willing to hinder its major market advantage in pursuit of an additional (presumed) revenue stream. This sounds like the classic story of private equity.
MCP Driven Reverse Engineering
This is starting to look like an actually useful integration of LLMs into security testing pipelines. It used to be impossible to be a master of all app technologies. These days, it seems impossible to even be aware of them all, but LLMs certainly possess the context, having gobbled up the entirety of the web. I do think that this form of semi-automated analysis could help save time in reverse engineering workflows by removing tedious low-level analysis burdens.
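For readers who have not looked at MCP yet, here is a minimal sketch of what such an integration can look like: an MCP server exposing a single reverse-engineering-flavoured tool that an LLM client can call on demand. It uses the TypeScript MCP SDK; the tool name and the placeholder behind it are illustrative, not any particular project’s API:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A toy MCP server: the LLM client can request pseudocode for a function
// instead of the analyst manually copying decompiler output back and forth.
const server = new McpServer({ name: "re-helper", version: "0.1.0" });

server.tool(
  "get_function_pseudocode",
  { functionName: z.string() },
  async ({ functionName }) => {
    // Placeholder: a real server would query Ghidra, Binary Ninja, etc.
    const pseudocode = `// pseudocode for ${functionName} would go here`;
    return { content: [{ type: "text", text: pseudocode }] };
  }
);

// Serve over stdio so an MCP-capable client can launch and drive it.
await server.connect(new StdioServerTransport());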
Standards
A Better Vision for Cyber Professionalization
My province (Ontario) has an unfortunate history with self-trained cosmetic surgeons. You would expect that a cosmetic surgeon must be a surgeon, but any Ontario physician can perform cosmetic surgery as long as they don’t call themselves surgeons. The regulators hoped that patients would learn to carefully evaluate titles to avoid getting surgery from someone who distinguishes themselves as a “cosmetic doctor”. Of course, in cyber security, we know that this is a perfect control because everyone is diligent and no one is ever deceived.
In spite of the various issues within the medical profession today, it is substantially more mature than cyber security. If you look backwards in time, however, you will see a medical profession that looked much more like cyber does today:
In the early days, no schools or training requirements existed.
As medical education evolved, schools were created to capitalize on the demand, resulting in a wide range of quality with no real standards (in some cases, schools resisted written evaluations because some candidates were illiterate).
Eventually, there was sufficient public and in-profession pressure that professional associations were formed to raise the standards of medical education.
Without regulation, however, there was no control to stop low-quality, for-profit schools from graduating unskilled practitioners who could call themselves doctors.
Licensing and regulatory models were eventually adopted near universally to protect the public and the profession (from the profession’s point of view, no one wants to complete rigorous medical training only to reach the same outcome as someone graduating from a diploma mill).
Unfortunately, all of the historical issues in medicine are roughly the present reality in application security testing today:
There is no standardized training/curriculum.
The existing pathway into the field is largely driven by for-profit organizations that deliver low-quality or very limited training and certifications.
Anyone can call themselves a “penetration tester” (secondary problem: there is no recognition of specialization, such as expertise in application security).
There is no in-profession effort to standardize training and practices.
In a profession that, like medicine, requires technical skills and knowledge to mitigate risk in high-risk environments, this leads to many issues:
There are no simple and effective ways to evaluate testers and organizations. In addition to the challenge of finding competent vendors, this has secondary impacts on the structure of the market, where business relationships and reputation are primary drivers of sales, making it challenging for newcomers.
As a result of the above, outcomes are going to be substantially varied (I have seen many app test reports of such low quality that you might not believe it).
Skilled professionals have to compete against anyone, really.
There is no straightforward path to enter the field. HR/recruiters rely on a mix of certifications that have brand recognition. Aspiring professionals pay in excess for certs that teach them nothing and get them nowhere.
Existing training is largely composed of a set of for-profit organizations that compete on brand recognition, not performance against some standardized evaluation or curriculum (even not-for-profit orgs like CREST have terrible appsec certs).
Security testers themselves (the actual professionals here) do not have input into how the profession develops and what the adopted standards are.
There is a lack of standardization for processes, methodology, and reporting. Standardization isn’t just good for customers of security tests; it’s good for testers as well. Oh, you want me to add CVSS to your reports in a completely incorrect way? Sorry, my professional standards won’t let me be negligent in such a way.
I could go on. Ultimately, we won’t solve these problems unless we organize and collaborate as a profession. OWASP is a great organization to make connections, but I don’t think it is fully suited to this project. The medical profession developed regional associations to influence regional standards. Perhaps we ought to do the same.
AppSec Teams
Ongoing Training and Education
The following is roughly true in application security testing:
Technology and the current state-of-the-art is constantly evolving.
There is a substantial amount of technical information relevant to the craft.
More knowledgeable security testers are more effective (this is perhaps the least proven/provable in the list, but I would think it is self-evidently true to an extent).
I have already written numerous times about how the certification industrial complex is failing to provide comprehensive and up-to-date application security training, especially for security testers. So how do testers continue their learning journey?
Here are some recommendations:
Newsletters, podcasts, technical blogs, and social media. Staying in tune with others in the industry is obviously important, but it can be difficult to find the signal in the noise, especially when many resources shift away from appsec, into areas like LLMs, fitness, or whatever.
Watching development and discussion around application security standards. Even if you don’t participate, consider watching issues on projects like the OWASP ASVS. You will learn much more from the debates and opinions shared than the final items that are included in the standard.
CTFs... maybe? I am fairly skeptical of CTF exercises. I think they are a fun way to stay engaged (perhaps as a team-wide activity), but (in my experience) they are often of very low value in terms of skills/knowledge you can take away to become a more effective tester. I see them mostly as novelty exercises where you explore something you would not normally encounter, but there are exceptions.
Ultimately, however, I think we lack a level of formalized ongoing learning and development. This is another area where we can stand to learn from more mature professions (such as medicine). While our industry has a number of conferences and workshops, these often lack the specificity and the depth to be highly valuable for professional practice. Further, there is a lack of organized effort for ongoing training. Instead, we have a mix of private organizations competing on entry-level training, which I presume is the biggest market (appealing to people trying to break into offensive security).
There is no obvious or easy path for us to mature, but if you’re interested in this challenge, please reach out. I am presently working on a curriculum project that will eventually make its way into the open source world.
Science and Security
Cite Your Damn Sources
In high school, I was compelled by the threat of bad grades to date my work and cite my sources. Apparently, people writing in InfoSec did not attend high school? Have we evolved past the college-dropout-to-B2B-SaaS pipeline?
This problem is systemic, so it almost feels unfair to target a specific article, but I just encountered this one from Check Point:
Over the past three months, on average, one out of every 16 organizations were impacted weekly by Server-Side Template Injection (SSTI) attacks.
I am not particularly interested in the prevalence of SSTI, but I am interested in empirical claims like this because the data almost always does not support the claim. Often, I am curious and motivated enough to find out where the claim came from and why it’s wrong. Lately, I am not.
Cite your sources. While you are at it, please also date your articles. In a field that moves quickly, I need to know the temporal relevance of technical info. Also, the date is a useful input into my “was this generated by an LLM?” mental algorithm.
Connect
Respond to this email to reach me directly.
Connect with me on LinkedIn.
Follow my YouTube.