Rigorous AppSec - November 28, 2025
No one reads anymore
Burp Suite Documentation Enshittified
I wrote previously about the disappearance of PortSwigger’s support threads. These would have been nice to have as I try to navigate new Burp features with broken documentation. It turns out that PortSwigger documents features that they deploy exclusively to Early Adopter versions without marking the documentation with any type of Early Adopter flag. So not only is the support forum gone, but the documentation does not reflect the actual standard version of the product. As a result, PortSwigger sent me on a search for a non-existent toggle.

Simply use the toggle.
For reference, I am on the latest Burp Suite Pro Stable (2025.10.6) and the version that contains the documented feature is Early Adopter 2025.11.1.
The Wrong Way Towards Securing Canadian Code
Tanya Janca is perhaps best described as an AppSec influencer. That’s not intended to be derogatory; who wouldn’t want to achieve the highest level of celebrity status in a niche field? That is also not to say that she does not possess a relevant background; she evidently has had a long, successful career in the AppSec space.
The challenges with being an influencer arise from the incentives and priorities. While I am sure most influencers in the AppSec space began their journeys off the momentum of their excellence in the field, many seem to end up distanced from the actual technical work and real concerns of practitioners. In the content creation space, many have shifted to producing low-value slop for the algorithms. Providing accurate and valuable technical content becomes secondary because this does not pay the bills.
Tanya is not in the slop content space, but produces content that does not reflect that of an expert practitioner. As an example, she very widely promotes her newsletter with the offer of a free security headers cheat sheet. Besides the fact that this content was already better presented by other sources (and perhaps would do best within an open source standard), the cheat sheet itself contains a number of errors (can you find them?). When the document was first published through Semgrep (no longer available), I offered a number of corrections. They do not appear to have been made, even though the document was republished again long after.
This is a rather petty grievance, you might think, but it’s in my nature. It is not what motivated this newsletter, however. Tanya is presently on a mission to promote secure coding standards for the Canadian government. This is laudable on the surface (and is something I have written about), but her approach is - I think - deserving of condemnation.
I consider it a great privilege to have worked among the contributors to OWASP’s ASVS project. I learned much about AppSec in the process of working on this project, but I learned much more about standards. Identifying the qualities of secure code is really only the first step of standards like the ASVS. Beyond these qualities, you must deeply understand the purpose of the document, its intended audience, its intended use cases, how it should be structured, what issues may arise from its use, and so on. The granularity of considerations would have us arguing over word choices that might seem inconsequential to an outsider.
This level of consideration is not present in Tanya’s Secure Coding Policy, but that hasn’t stopped the effort to promote its adoption by our government with a petition. The effort was even accepted for a talk at DEF CON 33.
So. Let’s examine the policy. There is no stated author (presumably Tanya), no versioning, no date, no structure, no intended audience or use cases, just . . . a list of items. What are these items? Are they requirements? Are they considerations or recommendations? What is their scope? Who should be reading and applying them? How is compliance documented or satisfied? I don’t think I need to state that this is perhaps the worst attempt at a policy/standard I have seen, but I did anyway.
I would love to dissect every item in the list; each is concerning in its own way. Did any of the professionals who signed the petition read the proposed policy? No one really reads anymore, I guess. Regardless, we will explore a few items.
2. If possible and reasonable, apply the following concepts when designing systems and/or access controls to your systems; least privilege, zero trust, defense in depth, security by obscurity, attack surface reduction. Apply even more secure design principals if possible and applicable.
Who decides if something is possible or reasonable and how? How are these concepts defined? Is security by obscurity really a “concept” that warrants a design effort? To what extent? How do we define and identify attack surface? What are the additional unmentioned secure design principles?
6. When designing security features, prioritize usability to ensure better compliance, user satisfaction, and competitive advantage. Test them for usability, just as you would any other feature.
Consideration of UX/UI and human psychology is important to design secure systems, but this item provides no usable guidance. What makes something a security feature? Why are user satisfaction and competitive advantage considered in a secure coding policy? Why is there a specific focus on testing these features from a usability perspective?
9. If your organization follows any security frameworks, ensure you follow those as well. If your organization does not, you may want to consider adopting one, such as the guidance outlined in this document or the book ‘Alice and Bob Learn Secure Coding’.
Wouldn’t it be great if the nationally adopted software security standard could be used to promote a book?
12. Threat model all important systems. Ideally, threat model all systems. Mitigate or eliminate all threats found to be disconcerting.
I like this one because we skip the baggage of risk/threat quantification, scoring, and prioritization. What should you mitigate? Whatever is disconcerting!
15. Follow a secure system development life cycle (S-SDLC). Ideally your organization has one set out for you already. If not, follow advice in section 3 of Alice and Bob Learn Secure Coding to create your own. It is likely that your security team will not object to you adding additional security activities to your SDLC, assuming they are not potentially damaging (specifically using DAST, a web proxy, or a fuzzer against production systems or arranging a penetration test without their approval). Most other security activities will be allowed, generally without supervision or approval. If you do not have a security team, talk to the head of IT or your boss. Cheat sheet available here: https://newsletter.shehackspurple.ca/c/SSDLC
Here is another book pitch. Bonus: sign up to the newsletter. Open source and community-driven SSDLC standards? Never heard of them. Also, let’s add in an external domain to make sure we have some dependency risk in our secure coding standard.
20. Perform output encoding on all output to the screen, and ideally also for data you are sending to other systems. This helps to avoid cross site scripting (XSS).
To its credit, this item includes the reason for its existence: XSS. Unfortunately, it’s still bad because you do not want to be encoding ALL output to “the screen and ideally also for data you are sending to other systems”. Consider all the relevant discussions from the OWASP ASVS.
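The ASVS discussions alluded to here hinge on context: output encoding is only meaningful relative to where the output lands, and blanket-encoding everything corrupts data bound for non-HTML consumers. A minimal sketch of that distinction, using only Python’s standard library:

```python
import html
import urllib.parse

user_input = '<script>alert(1)</script> & "quotes"'

# Encoding is context-specific: what neutralizes XSS in an HTML body
# differs from what a URL query parameter needs.
html_context = html.escape(user_input, quote=True)
url_context = urllib.parse.quote(user_input)

print(html_context)  # &lt;script&gt;... safe inside an HTML element
print(url_context)   # %3Cscript%3E... safe inside a URL component

# Blindly HTML-encoding data sent to another system corrupts it:
# a downstream JSON or database consumer now sees '&lt;' where the
# original data had '<'.
```

The point is not that encoding is bad, but that "encode all output" without naming the output context is not actionable guidance.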
24. User passwords should be salted, then hashed, before storing. Peppering is an optional, additional layer of security you can add, depending on the level of security assurance your organization needs. Performing hashing more than once (work factor) can decrease your chances of a rainbow table attack and is generally recommended. Salts are not secret but they are unique per user, while peppers are secret but all your system’s users will use the same pepper value. Store your pepper in your secret management tool, store your salts in the database.
For comparison to a robust standard, review V11.4 of the OWASP ASVS.
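For contrast with the policy’s wording, here is a hedged sketch of per-user salt plus pepper using only Python’s standard library. The pepper source and scrypt cost parameters are illustrative assumptions, not recommendations; note also that a KDF’s work factor is a cost parameter, not "hashing more than once", and it is the salt, not the work factor, that defeats rainbow tables.

```python
import hashlib
import hmac
import os

# Illustrative pepper: in practice this comes from a secrets manager,
# never from the codebase or the database.
PEPPER = os.urandom(32)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). The salt is unique per user and stored
    alongside the digest; the pepper is shared and stored elsewhere."""
    salt = os.urandom(16)
    # Apply the pepper via HMAC before the memory-hard hash. scrypt's
    # cost parameters (n, r, p) are the actual "work factor".
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    digest = hashlib.scrypt(peppered, salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    peppered = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    candidate = hashlib.scrypt(peppered, salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```

Even this small sketch raises the questions a real standard has to answer (which KDF, which parameters, where secrets live), which is exactly what ASVS V11.4 does and the policy item does not.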
27. Use all applicable security headers. For more details on which ones to use, and how to configure them, download a cheat sheet here: https://newsletter.shehackspurple.ca/c/securityheaders
There is that cheat sheet again! Developers are going to love having to subscribe to a newsletter to get it.
33. If possible, always use memory-safe languages. If you must use a language that is not memory-safe, to avoid buffer, stack, string, and integer overflows, do the following: perform careful input validation, avoid using functions that are known to have bounds/overflow issues, perform security testing that focuses on these areas of your code, and the rest of the advice in chapter 3 of Alice and Bob Learn Secure Coding on this topic.
This is functionally a duplicate of 18. I am sure there is other overlap across the list, but I did not inspect thoroughly. One of the benefits of having a well-structured standard is that it is easier to avoid duplicates/conflicts. There is another memory management item at 76.
53. Use all available preventative measures to protect your work and home accounts which have elevated privileges, including turning on multi-factor authentication (MFA), using a long, unique, complex password, as well using a password manager. Register with haveibeenpwned.com as well for good measure. Reset passwords immediately if you suspect a breach.
All available preventative measures?
56. Add layers of abstraction and/or multiple defences whenever possible into your systems. Add more layers for more sensitive systems.
Layers upon layers upon layers upon layers…
70. Remember that insecure deserialization can happen in any language that allows serialization, not just Java. Do not accept serialized objects from untrusted sources, and if you must send one yourself across a network, encrypt it first with a private key, then decrypt it on the other end to ensure its integrity before deserializing it.
OK, I didn’t want to cover more duplicates, but this is effectively a duplicate of 34. Note also that both items implore you not to accept serialized objects from “untrusted” sources. What exactly is “untrusted”? The very first policy item states “Assume all other systems and data that your system must interact with are insecure.” So why are we mentioning “untrusted” at all? By the policy’s own content, every source should be “untrusted”, no?
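As an aside, "encrypt it first with a private key . . . to ensure its integrity" conflates encryption with authentication: integrity is conventionally provided by a MAC or a signature, not by encryption alone. A minimal sketch of the usual approach, assuming a shared secret and JSON payloads (the key handling here is illustrative only):

```python
import hashlib
import hmac
import json
import os

# Illustrative shared secret; in practice both ends load it from a
# secrets manager rather than generating it inline.
KEY = os.urandom(32)

def serialize_signed(obj) -> bytes:
    """Serialize to JSON and append an HMAC-SHA256 tag for integrity."""
    payload = json.dumps(obj).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + tag

def deserialize_verified(blob: bytes):
    """Verify the HMAC before parsing; reject tampered input."""
    payload, _, tag = blob.rpartition(b".")
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return json.loads(payload)

blob = serialize_signed({"user": "alice"})
assert deserialize_verified(blob) == {"user": "alice"}
```

Using JSON rather than native object serialization also sidesteps the deserialization gadget problem the policy item is gesturing at.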
74. If your language has a ‘strict’ mode, turn it on.
This could have major implications not just for security but the entire development process and code base. Good luck to anyone with legacy code.
77. Always initialize your variables. Do not leave it to chance.
78. Treat compiler warnings like errors and fix them.
Many of the requirements are not strictly secure coding practices, but code quality practices that may have security-related outcomes. Are these appropriate here? Almost certainly not, especially when security already creates friction.
Ultimately, developing standards and policies for highly technical content is not easy. I would never fault someone for attempting and falling short. This is different. This is a dogshit policy that is being promoted by a prominent authority in the field. It is being given attention by other industry authorities and being given space at major conferences.
Even worse, this is being promoted when alternative standards exist that are:
- Community-driven, international efforts
- Already adopted by major organizations, including government entities
- Mature collections of refined security requirements
- Well-scoped with clear purpose and use cases
Of course, I am thinking specifically of the OWASP ASVS. Why can’t the Canadian government adopt an existing, international standard? That would be too easy. And it wouldn’t sell any books.
Connect
Respond to this email to reach me directly.
Connect with me on LinkedIn.
Follow my YouTube.