Rigorous AppSec - October 9, 2024
Don't use mouse models in security research.
General Application Security
See What HTTP Cookies Are Up To
You would think that today’s applications would expose a single, global configuration for the security attributes of cookies used for user sessions (or other purposes). This is not the case. It is still common to see the same cookie set with distinct attributes (HttpOnly, Secure, SameSite, and so on) depending on the path (or other condition) under which it is set. Exploitability is generally limited to extreme edge cases, but these inconsistencies are worth investigating, if only because they tend to indicate poor configuration management or unnecessary complexity. Unfortunately, testing tools typically assume a single, global configuration, or flag these issues in a way that is difficult to evaluate and report.
My team member Nadia Chigmaroff recently published a relevant Burp Suite extension: Cookie Attribute Table. The extension provides an improved interface to keep track of cookie security settings across a web application. Check it out.
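The core check is simple enough to sketch in plain Python. This is a minimal, hypothetical sketch, not the extension's implementation: it assumes you have already collected Set-Cookie header values (from a proxy log, crawler output, and so on), and the example headers below are made up.

```python
# Detect a cookie that is set with inconsistent security attributes
# across different responses. Header values here are hypothetical.

from http.cookies import SimpleCookie

def observed_attributes(set_cookie_headers):
    """Map cookie name -> set of observed (HttpOnly, Secure, SameSite) tuples."""
    observed = {}
    for header in set_cookie_headers:
        cookie = SimpleCookie()
        cookie.load(header)
        for name, morsel in cookie.items():
            attrs = (
                bool(morsel["httponly"]),
                bool(morsel["secure"]),
                morsel["samesite"].lower(),
            )
            observed.setdefault(name, set()).add(attrs)
    return observed

def inconsistent_cookies(set_cookie_headers):
    """Names of cookies seen with more than one attribute combination."""
    return sorted(
        name
        for name, variants in observed_attributes(set_cookie_headers).items()
        if len(variants) > 1
    )

headers = [
    "session=abc; HttpOnly; Secure; SameSite=Lax",  # e.g. set on /login
    "session=abc; Secure",                          # same cookie, weaker attributes
    "csrftoken=xyz; Secure; SameSite=Strict",
]
print(inconsistent_cookies(headers))  # -> ['session']
```

A real pass over an application would also want to record which path produced each variant, which is exactly the kind of bookkeeping the extension handles for you.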
Standards and Discussions
BOPLA
I have been critical of aspects of the OWASP API Security Top 10, but I think they were the first to nail some proper terminology for a class of issues: Broken Object Property Level Authorization (BOPLA). The fundamental issue is simple. Consider the following HTTP request:
POST /user/12/update HTTP/1.1
...
{
  "email": "user@example.com",
  "isAdmin": false
}
In the example above, the obvious capability of the method is to set a user’s email and whether they are an administrative user. Developers may want to expose this functionality to underprivileged users to permit them to update their own email address. A vulnerability arises when those users can also modify the isAdmin property in order to escalate their privileges within the application. This is a BOPLA (rather than its cousin BOLA) because the user should be authorized to update the email associated with their user object, but they should not be permitted to update the isAdmin property.
This is traditionally referred to as Mass Assignment, but I don’t think this is correct. Any source that you might consider authoritative on the subject clearly indicates that mass assignment is a specific mechanism relating to the binding of HTTP parameters to object properties. Yet this is not the only mechanism that can let an attacker update a property they should not be authorized to update. More general terminology is therefore needed, and I think BOPLA does the job just fine, though it doesn’t appear to have caught on across the industry yet (the exception is probably every vendor in the “API Security” space).
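The usual fix, regardless of the underlying mechanism, is an explicit allow-list of the properties each class of caller may set. Here is a minimal sketch using the field names from the example request; everything else (the function names, the dict-based user model) is hypothetical, and allow-listing is one common mitigation, not the only one.

```python
# Property-level authorization sketch for an update endpoint like the
# one above. Field names come from the example request; the rest is
# hypothetical.

class AuthorizationError(Exception):
    pass

# Properties a user may set on their own account. isAdmin is
# deliberately absent: only an admin-level caller may change it.
SELF_UPDATABLE = {"email"}
ADMIN_UPDATABLE = {"email", "isAdmin"}

def apply_user_update(user, payload, caller_is_admin=False):
    """Apply payload to user, rejecting properties the caller may not set."""
    allowed = ADMIN_UPDATABLE if caller_is_admin else SELF_UPDATABLE
    forbidden = set(payload) - allowed
    if forbidden:
        raise AuthorizationError(f"not authorized to set: {sorted(forbidden)}")
    user.update(payload)
    return user

user = {"id": 12, "email": "old@example.com", "isAdmin": False}
apply_user_update(user, {"email": "new@example.com"})  # permitted
try:
    apply_user_update(user, {"email": "x@example.com", "isAdmin": True})
except AuthorizationError as exc:
    print(exc)  # -> not authorized to set: ['isAdmin']
```

The point of the sketch is that the check operates per property, not per object: the caller is authorized to touch the user object, just not every field on it.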
AppSec Teams
Performance Review Season
I don’t enjoy performance reviews, but not for reasons you might assume. If I could give flawless and perfectly useful feedback, I would be much happier. I know this problem isn’t unique to the security testing space, but it feels like we are uniquely bad at it as an industry (or perhaps you have perfected it but don’t care to share?). This is an area I addressed to some extent in my last OWASP talk in Lisbon, but many of my future plans remain theoretical (as plans do).
You can pretty quickly identify gaps from first principles. As a security testing consultancy, we are a business, of course. Therefore, the most important factor for performance is the extent to which team members drive revenue. OK, so can we simply tie individual performance to revenue earned from a team member’s engagements over the fiscal year? No, obviously not.
The first issue is that testers do not control sale price, which varies according to a number of factors. Testers also have practically no control over the engagements they are assigned. OK, but how about we use time to complete? After all, application engagements are scoped to determine the appropriate amount of testing time. To some extent, there is going to be an uneven distribution of engagements in terms of difficulty, disruptions/delays, and so on.
Further, when optimizing for time, major issues arise from incentives and quality control. What would happen to the quality of our work if we optimized only for time? Yet time matters. So we aim to strike a balance between time and quality. But what is quality? Let’s consider only the quality of final reports. We might assess:
- The proportion of risks/findings identified against all that could be identified given the type of engagement/methodology
- The accuracy of risk/severity/priority assessment against findings reported
- The accuracy of finding classification and appropriateness of recommendations
- The clarity of writing and number of non-technical errors (grammar, style, and so on)
- The correct adherence to scope
- The proper documentation of limitations and restrictions on testing
This list could go on, of course, but even a complete list of the attributes of reporting output alone fails to capture the many ways that team members contribute to create an effective consulting environment. Beyond determining what qualities are important, the more substantial challenge is figuring out exactly how to measure them (if that is even possible at all).
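To make the aggregation side of the problem concrete, here is a purely illustrative sketch of rolling attributes like those above into a single report-quality score. The attribute names and weights are hypothetical, and the genuinely hard part, reliably measuring each attribute in the first place, is exactly what this sketch does not solve.

```python
# Illustrative only: weighted aggregation of per-report quality
# attributes. Names and weights are hypothetical.

WEIGHTS = {
    "finding_coverage": 0.30,        # findings identified vs. identifiable
    "risk_accuracy": 0.20,           # severity/priority assessment
    "classification_quality": 0.15,  # classification and recommendations
    "writing_quality": 0.15,         # clarity, grammar, style
    "scope_adherence": 0.10,
    "limitations_documented": 0.10,
}

def report_quality(scores):
    """Weighted average of per-attribute scores, each in [0, 1]."""
    missing = WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored attributes: {sorted(missing)}")
    return sum(weight * scores[name] for name, weight in WEIGHTS.items())
```

Even granting the weights, every input here is a judgment call by a reviewer, which is where the real measurement problem lives.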
For this coming year, I hope to focus on better ways of measuring security tester performance and providing useful feedback. I will share my insights as I go, of course.
Science and Security
You Should Study Scientific Methodology
When I talk to academics in security, I am always surprised by the very small scope of what they consider to be in the domain of cybersecurity research. In my experience (ironic disclaimer that I lack data), there is interest in crypto systems and research to find flaws in the design and implementations of systems that aim to offer security assurances. With a few exceptions, there seems to be little interest in better understanding how systems produce secure (or insecure) software, especially where human elements play a role. Instead, this is left to vendors to solve on our beloved market.
Sadly, there is an attitude in vendor space that they don’t need to provide evidence for their tool’s efficacy. I have encountered this especially with vendors selling vulnerability scanning tools, for example, who act like I am speaking a different language when I ask them for evidence or metrics of performance. This is simply not how the cyber market works. It is built on trust, not evidence. Unfortunately, I think this is unlikely to change until the security world adopts an evidence-based mindset, but the problem is not exclusive to vendors.
There is a classic claim in software security that goes “the cost of removing an application security vulnerability during the design phase ranges from 30–60 times less than if removed during production” (or some variation). Look, it is even included in Andrew Hoffman’s book Web Application Security. I immediately suspected it was bullshit based on the imprecise and unscientific framing (and for more reasons I won’t get into here), but Laurent Bossavit put in the work to identify the origin and therefore the lack of foundation for such claims. So where do we go from here?
The first step is learning what evidence is bad, what evidence is good, and how to gather it. This is why I recommend developing at least a basic understanding of commonly employed scientific methods and their flaws. I promise you do not need a PhD to become an effective scientific critic. While, yes, I did get a PhD (I would not recommend this except in extreme cases), I actually learned substantially more about science through my passion projects.
I was fortunate to be recently given the opportunity to design a research methods course for a computer science program (I have always wanted to teach science partially because my own educational experience was so poor in this area). As part of this process, I am reading through Research Methods for Cyber Security, which appears to be generally well written and structured so far. I will leave a review when I am finished, but this may be a good place to start learning if you are new to science. It is also reasonably priced for a “textbook.” Another book written by an actual methodologist that I always recommend is Snake Oil Science. The book is somewhat narrowly focused, but I think it provides great insight into sources of bias in human trials.
If you’re not sold on learning science yet, let me broaden the pitch. Your entire life and the society around you constantly intersect with science. I would be surprised if you did not make personal decisions based on your understanding of the scientific evidence, even if that is not how you frame it. I frequently encounter people who make decisions that are not only not backed by strong scientific evidence, but refuted by it. I even subscribe to security newsletters that for some reason share “interesting” findings from mouse models without vetting the original study for the obvious flaws that I expect anyone with a passing knowledge of scientific methods could spot (look, here is a random Redditor who gets it).
Side note: you can never trust press releases from academic institutions publishing about their research and you can almost never trust reporting on science, but what the hell even is this website that was linked in the newsletter?
In another life, I would be a methodologist, but I blew my opportunity. At the very least, I try to bring a scientific approach to security, and I hope the same for you.
Connect
Respond to this email to reach me directly.
Connect with me on LinkedIn.
Follow my YouTube.