Hello. I’m trying a new format where I will share readings I would normally just share with my team at Digital Boundary Group.

Q: Won’t this become like every other AppSec newsletter?

A: No, because:

  1. Every AppSec newsletter I have ever subscribed to has eventually drifted into other domains (fitness, self-help, AI, and so on).

  2. I read what I share.

General Application Security

Cross-App Security: YouTube Series
I have been putting together a new series on web security with a focus on cross-app attacks. Check out the playlist here.

I Gave OpenClaw a Gun and It Shot My Dog
On the one hand, all the most annoying people are forecasting the imminent end of human security testing. On the other hand, people who should know better continue to rediscover all the ways that LLMs are not reliable. Do I need to link anything in this paragraph, or have we all seen this?

Do I have a concern about LLMs replacing humans with shoddier and less reliable results? Yes, I do, but only insofar as we exist in a society that does not care about quality and reliability. With the seeming widespread adoption of LLMs in advertising, marketing, and support (the latter I am sure everyone agrees is an annoyance at best), this might unfortunately be our reality.

It is true that the latest frontier models have demonstrated increased capabilities in identifying various classes of vulnerabilities almost autonomously, but I have a hard time understanding how systems that produce notoriously poor writing, and even make up content when performing translations, might provide reliable and consistently high-quality security testing. With all the hype focused on what is possible, there is limited emphasis on what is practical: what is the human effort required to review, validate, and triage tool output? What is the real cost (worth mentioning the rising costs of the LLMs themselves, which are still subsidized)? Will security teams be inundated with low-value (or non-existent) findings as they have been from traditional tools?

The challenge we will face as security practitioners is resisting competition from fully automated solutions on the grounds of quality. Why? In part, we have let our industry develop without rigorous standards or measures of efficacy. More importantly, however, quality may not triumph over speed and cost. Think about it this way: do you love the software you have to use day-to-day? Are you excited to boot up Microsoft Windows every day? Or are companies pushing sub-par software because they aren’t optimizing for quality and reliability?

The way forward has always been the same: the professionalization and standardization of security testing. As is, we have no industry standards. No baseline. No voice. In contrast, medical associations are advocating against the potential harms of LLM-derived health information. Because they are organized, they can protect the interests of their professional members while also advocating for public safety. Who will advocate for us and the work we do if we are replaced?

Android 17 Bringing Localhost Protections
More here via new permissions.

“stop locking bank networks thinking that you will monetize something, learn to monetize”
New monetization is available to developers of Smart TV apps: turn your TV into a residential proxy. I wouldn't worry though, since "Users don't actually see or don't feel anything."

LLMs Improve the Efficiency of Deanonymization
"We show that LLM agents can figure out who you are from your anonymous online posts." Something to consider in the context of otherwise minor data leaks from applications.

Toxic Combinations
Cloudflare details how they identify potential attacks against web applications using a combination of measures.

iOS Emulation
I have not had a chance to dive in, but there appears to be work towards an iOS emulator. It is not clear to me whether this will have any value for app testing. More here.

Standards

Ubuntu Agrees: Low Value Hiding Password Length
With sudo in Ubuntu 26.04, typing your password will show masked characters instead of nothing. I personally would prefer that attackers see how long my passwords are; they won't waste their time on them.

Methodology

Sorry, We Do Not Accept Submissions For Leaked Google API Keys
A new supply chain risk (perhaps not new, but seemingly not common): an API vendor changing the nature of its API keys. Per Google API Keys Weren't Secrets. But Then Gemini Changed the Rules, Google API keys should now also be checked against https://generativelanguage.googleapis.com/v1beta/files?key=<KEY>. The tool gmapsapiscanner has a PR open for this check.
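A minimal sketch of what that check might look like. The helper names and the status-code interpretation are my own assumptions (the article only gives the endpoint); I'm guessing that a 200 from the files endpoint means the key is accepted by the Gemini API, while a 400/403 means it is rejected.

```python
# Hypothetical sketch of probing a leaked Google API key against the
# Gemini files endpoint mentioned in the article. Function names and the
# status-code mapping are assumptions, not from the original post.
import urllib.parse

GEMINI_FILES_ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/files"

def gemini_check_url(key: str) -> str:
    """Build the probe URL from the article, URL-encoding the key."""
    return f"{GEMINI_FILES_ENDPOINT}?key={urllib.parse.quote(key)}"

def interpret_status(status: int) -> str:
    """Rough interpretation of the HTTP status code (my assumption:
    200 means the key is accepted; 400/403 means rejected)."""
    if status == 200:
        return "key accepted by Gemini API"
    if status in (400, 403):
        return "key rejected"
    return "inconclusive"
```

In practice you would GET `gemini_check_url(key)` with any HTTP client and pass the response status to `interpret_status`; a key that was "only" a Maps key yesterday may now grant Gemini access.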

Connect

Respond to this email to reach me directly.

Connect with me on LinkedIn.

Follow my YouTube.
