NoFluffSec Weekly

Issue #4

Welcome to another edition of NoFluffSec Weekly, the newsletter that cuts straight to the point—no hype, no fluff, just the cybersecurity insights you need. Whether you're a seasoned pro or new to the game, we’re here to help you stay ahead of threats and keep your clients, products, and services secure.

We don’t just serve up the facts; we dish out our unfiltered take on what’s happening in the industry. No sugar-coating, no scare tactics—just actionable advice that actually matters. What you do with it? That’s on you. But if you're here, you already know the stakes.

Before you enjoy this week’s dose of clarity, make sure to click that subscribe button if you haven’t already. You won’t want to miss our next issue!

Feature Story: AI Data Harvesting – A Global Privacy Tug-of-War

Social media platforms like Meta and LinkedIn are under scrutiny for harvesting publicly shared data to train AI models without direct user consent. In the U.S., the FTC has flagged how freely companies repurpose data from user posts, citing a lack of meaningful user control. In the UK, LinkedIn paused its AI data processing under mounting legal pressure, while Meta continues to press ahead, intensifying debates over data ownership, AI ethics, and the transparency of big tech’s data usage.

Meta’s push to train AI models on public data, particularly in the UK, has sparked a critical conversation about the ethical boundaries of data usage. Public posts on social media have become fertile training ground for AI models, but users often remain unaware that their digital footprint is feeding these algorithms. While the data is technically "public," the ambiguity surrounding consent shows how outdated our notions of privacy and control are in the digital age. There is no clear line between data shared voluntarily and data used for purposes the user never intended.

The tension between AI innovation and personal privacy is no longer theoretical—it’s an urgent, global issue. Relying on government action, especially in a fragmented international landscape, is not enough. Users need to become aware of the costs of their digital habits and push for genuine transparency from platforms. As AI evolves, so too must our understanding of privacy, data rights, and how our own actions feed into a larger system that thrives on the erosion of both.

NoFluff’s Take: The Illusion of Privacy in a Public World

The reality is stark: users are willingly sharing content in environments they don’t control, and tech giants are exploiting that. This isn’t just about individual privacy; it’s about the commodification of personal expression. Many point fingers at companies for a lack of transparency, but most users trade privacy for convenience daily. Governments have been slow to adapt, and even with regulation, can you truly “own” what you willingly put in public spaces? It’s naive to think regulation alone can tame the vast, intricate ecosystem of data collection. The real challenge is reshaping public awareness and rethinking how much personal information we surrender in exchange for fleeting digital rewards.

We need to rethink how we engage with digital platforms and, more importantly, who we trust with our data. Companies claim public data is fair game, but without user education and stricter boundaries, the balance will keep tilting toward those who extract value from it, at the cost of user privacy. Platforms are not simply offering free services in exchange for a few ads; they are building value systems on the data we give away, often without grasping its worth.

Breaking News

Disney to Ditch Slack After July Data Breach
In July 2024, Disney suffered a breach in which the hacktivist group NullBulge leaked more than 1TB of sensitive data, including usernames, passwords, and confidential project details, from its Slack channels. The breach touched nearly 10,000 channels and prompted Disney to begin moving from Slack to Microsoft Teams. Disney cited Slack’s lack of default end-to-end encryption for messages as one reason for the transition.

However, the primary issue appears to be less about encryption and more about how Disney managed its Slack environment. The breach likely stemmed from compromised access controls and poor data management practices, leading to the exposure of vast amounts of sensitive data over several years. While end-to-end encryption is a valid concern, it seems more like a post-breach justification, masking the deeper issue of internal security failures and the mishandling of sensitive information.
CNBC: Disney to Ditch Slack After July Data Breach

Dell Faces Employee Data Leak Following Hack
Dell is facing the fallout from a data breach in which hackers leaked sensitive employee information on the dark web. The breach highlights the vulnerability of internal data management and the consequences of insufficient safeguards around employee data. Companies need to reassess their internal security frameworks regularly to stay ahead of attackers who continue to exploit weak links in corporate security.
HackRead: Hacker Leaks Dell Employee Data Following Breach

Latest Research

Lumen’s Raptor Train Botnet Investigation
Black Lotus Labs has uncovered the "Raptor Train" botnet, a Chinese state-sponsored operation spanning more than 200,000 compromised IoT and SOHO devices. Its operators run sophisticated tooling, including a Node.js backend and Electron-based control applications, to manage exploitation at that scale. The botnet’s focus on critical sectors such as the military and IT, along with the potential for future DDoS attacks, makes this a critical read for cybersecurity professionals tracking state-backed botnets.
Lumen: Raptor Train Handbook

Vulnerability Management Series – Part 1
Asset discovery forms the foundation of any effective vulnerability management program, as emphasized in this research. Without accurate and complete asset inventories, security teams risk missing vulnerabilities, rendering remediation efforts incomplete. This first step is often underappreciated, but it is vital to any comprehensive security posture.
Pulse: Vulnerability Management Part 1: Assets
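
In practice, even a short inventory script surfaces assets that never made it onto the official list. Below is a minimal sketch of cloud-side asset discovery, assuming boto3 and default AWS credentials; the output fields are illustrative, and a real program would also cover on-prem and SaaS assets.

import boto3

def list_ec2_assets():
    # Enumerate every region visible to the account, then every instance in it.
    regions = [r["RegionName"]
               for r in boto3.client("ec2").describe_regions()["Regions"]]
    assets = []
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    assets.append({
                        "region": region,
                        "id": instance["InstanceId"],
                        "state": instance["State"]["Name"],
                        "private_ip": instance.get("PrivateIpAddress"),
                    })
    return assets

if __name__ == "__main__":
    for asset in list_ec2_assets():
        print(asset)

Feeding a sweep like this into the vulnerability scanner's scope is what keeps remediation from missing the machines nobody remembered owning.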

Tools

IAM Access Analyzer by AWS
AWS’s IAM Access Analyzer now offers powerful automation to identify and remove unused permissions. This is an important step toward enforcing the principle of least privilege, which is essential for reducing the attack surface in large cloud environments. However, as with any tool, the true value comes from integrating these recommendations into an ongoing, disciplined review process.
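
As a sketch of what that review loop can draw on: the snippet below pulls current findings from an existing unused-access analyzer via the ListFindingsV2 API so they can be triaged on a schedule rather than ad hoc. It assumes boto3 and an already-created analyzer; the ARN is a placeholder.

import boto3

# Placeholder ARN: point this at your own unused-access analyzer.
ANALYZER_ARN = "arn:aws:access-analyzer:us-east-1:123456789012:analyzer/example"

client = boto3.client("accessanalyzer")
token = None
while True:
    kwargs = {"analyzerArn": ANALYZER_ARN}
    if token:
        kwargs["nextToken"] = token
    page = client.list_findings_v2(**kwargs)
    for finding in page["findings"]:
        # Each finding names the flagged resource and why it was flagged,
        # e.g. an unused role, access key, or permission.
        print(finding["id"], finding.get("findingType"), finding.get("resource"))
    token = page.get("nextToken")
    if not token:
        break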

Learning ProTip

This tool highlights least privilege and access management, two core concepts in information security. To dive deeper, start with AWS’s own IAM security best-practices documentation; a small worked example follows below.
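
To make "least privilege" concrete, here is a hedged sketch of a policy that grants read-only access to a single S3 bucket and nothing else, created via boto3. The bucket and policy names are placeholders.

import json
import boto3

# Placeholder names: scope the policy to the one resource the role actually needs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)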

Undocumented AWS API Hunter by Datadog
Datadog’s new tool, Undocumented AWS API Hunter, empowers developers and security researchers to explore undocumented AWS APIs, providing insights into hidden capabilities. While this tool opens up exciting possibilities for deeper AWS integration, it also raises questions about transparency and security, as undocumented APIs may introduce unknown vulnerabilities if not properly monitored.
GitHub: Undocumented AWS API Hunter

If you’re not already one of our regulars, then that Subscribe button below has your name on it ;) See you next week!

All views and opinions expressed herein are solely the authors’ own and do not reflect those of any employer, past or present.
NoFluffSec is a Bitsavant LLC publication