Scams are flooding Meta’s advertising ecosystem
Across Meta’s platforms, fraudulent advertising has reached a scale that is difficult to ignore. Current research suggests that roughly one out of every three ads displayed on Meta services is likely connected to a scam, phishing operation, or malware campaign. Despite years of public commitments to user safety, the company’s largely reactive enforcement model appears structurally incapable of keeping pace with the speed and scale of organized online fraud.
Even Meta itself has acknowledged that scams and prohibited-goods advertising generate billions of dollars in revenue across its platforms. Earlier internal projections indicated that as much as 10% of Meta’s 2024 total revenue could be linked to scam-related advertising activity. These figures reinforce concerns that economic incentives may be misaligned with effective fraud prevention.
Scope of the problem according to independent research
A detailed independent study published in early February examined the prevalence of scam advertising across Meta-owned platforms, including Facebook, Instagram, Threads, WhatsApp, and Messenger. The researchers relied on the Meta Ad Transparency API, analyzing ads over a 23-day period in the European Union and the United Kingdom.
The dataset included 14.5 million advertisements, of which 30.99% were classified as scam-related, phishing-oriented, or associated with malware distribution. In absolute terms, this translates to approximately 4.51 million fraudulent ads, which collectively generated over 300 million impressions during the observation window.
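For context, data collection of this kind can be approximated against Meta's public ad transparency endpoint. The sketch below is illustrative only: it assumes the Graph API `ads_archive` endpoint and the parameter and field names publicly documented for the Ad Library API (`ad_reached_countries`, `ad_delivery_date_min`, `impressions`, and so on), and it is not the researchers' actual pipeline. The date window, country list, and search term are placeholders.

```python
import requests

# Illustrative only: endpoint, parameters, and field names are assumptions based on
# Meta's publicly documented Ad Library API, not the study's collection code.
API_URL = "https://graph.facebook.com/v19.0/ads_archive"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

params = {
    "access_token": ACCESS_TOKEN,
    "ad_reached_countries": '["GB","DE","FR"]',  # hypothetical EU/UK subset
    "search_terms": "investment",                # example keyword; a real crawl would query far more broadly
    "ad_delivery_date_min": "2025-01-01",        # hypothetical 23-day observation window
    "ad_delivery_date_max": "2025-01-23",
    "fields": "id,page_id,page_name,ad_creative_bodies,impressions",
    "limit": 250,                                # page size (illustrative)
}

ads = []
url, query = API_URL, params
while url:
    resp = requests.get(url, params=query)
    resp.raise_for_status()
    payload = resp.json()
    ads.extend(payload.get("data", []))
    # Follow cursor-based pagination until the result set is exhausted.
    url = payload.get("paging", {}).get("next")
    query = None  # the "next" URL already carries all query parameters

print(f"Collected {len(ads)} ads for downstream classification")
```

The collected creatives, landing pages, and advertiser metadata would then be fed into whatever classification step labels an ad as scam, phishing, or malware-related.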
One of the most striking findings was the level of concentration among advertisers. More than half of all identified scam ads (56%) were linked to just ten advertisers, indicating that large-scale, coordinated actors dominate much of the fraudulent advertising landscape. These operators frequently reused the same technical infrastructure, domains, landing pages, and near-identical ad copy across multiple campaigns, allowing them to scale rapidly with minimal variation.
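The concentration finding lends itself to a simple aggregation. The sketch below assumes a hypothetical labeled dataset (the columns `advertiser_id`, `is_scam`, and `impressions` are placeholders, not the study's schema) and shows how the headline figures, the flagged share, total impressions, and the portion attributable to the ten largest advertisers, could be reproduced with pandas.

```python
import pandas as pd

# Hypothetical schema: one row per ad, with a boolean fraud label produced by some
# upstream classifier. Column names are placeholders, not the study's schema.
ads = pd.read_csv("labeled_ads.csv")  # columns: ad_id, advertiser_id, is_scam, impressions

total_ads = len(ads)
scam_ads = ads[ads["is_scam"]]

# Prevalence: share of analyzed ads flagged as scam-, phishing-, or malware-related.
scam_share = len(scam_ads) / total_ads
print(f"Flagged ads: {len(scam_ads):,} of {total_ads:,} ({scam_share:.2%})")

# Reach: impressions accumulated by flagged ads during the observation window.
print(f"Impressions from flagged ads: {scam_ads['impressions'].sum():,}")

# Concentration: how much of the flagged volume the ten largest advertisers account for.
per_advertiser = scam_ads.groupby("advertiser_id").size().sort_values(ascending=False)
top10_share = per_advertiser.head(10).sum() / per_advertiser.sum()
print(f"Share of flagged ads from the top 10 advertisers: {top10_share:.0%}")
```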
Common scam advertising patterns and themes
Fraudulent ads on Meta platforms follow recurring patterns designed to exploit urgency, trust, and familiarity. The most prevalent categories include:
- Fake products and counterfeit goods, often presented as limited-time offers
- Fraudulent investment schemes, particularly those promising high or guaranteed returns
- Prohibited or unlicensed medical products, including miracle cures and weight-loss supplements
- Illegal online gambling and casino platforms
- Fake concert tickets and event access scams
- Cryptocurrency-related fraud, including wallet-draining schemes and fake exchanges
A growing trend involves impersonation-based scams, where attackers misuse the names, images, or fabricated endorsements of public figures, politicians, or celebrities. These campaigns increasingly incorporate AI-generated deepfake videos and images, significantly increasing their perceived credibility and click-through rates.
Malvertising as a dominant threat vector
Malvertising—malicious advertising designed to distribute malware or redirect users to harmful infrastructure—has become one of the most significant risks facing individual users online. Industry-wide data indicates that approximately 41% of cyberattacks targeting end users are linked to malvertising campaigns.
Meta’s platforms offer criminals a uniquely attractive environment: instant reach, granular targeting, behavioral profiling, and rapid campaign iteration. Attackers actively borrow legitimate digital marketing techniques, tracking trends, testing creatives, optimizing conversion funnels, and deploying psychologically persuasive messaging. Time pressure, fear of missing out, and fabricated social proof are standard elements of these campaigns.
The combination of precise targeting and massive scale makes social media advertising systems particularly efficient tools for cybercriminals, especially when enforcement mechanisms lag behind campaign deployment.
Reactive enforcement and structural limitations
Researchers and security professionals consistently describe Meta’s defensive posture as primarily reactive. Harmful ads are often removed only after user reports or external investigations, rather than being proactively blocked. Even when ads are flagged, takedown timelines can be slow, allowing campaigns to reach millions of users before intervention occurs.
The complexity of modern advertising supply chains further complicates enforcement. Ads pass through automated review systems optimized for scale and speed, not deep contextual analysis. As a result, malicious advertisers can frequently outpace moderation workflows, launching new variants faster than they are removed.
General studies of social media fraud estimate that around 30% of scam incidents observed on social platforms originate directly from paid advertising, making this one of the most common threats users encounter in feeds and sponsored placements.
Internal decisions that weakened defensive capacity
Meta’s internal priorities have also come under scrutiny. In 2023, the company reportedly downsized teams responsible for handling brand-rights abuse reports and limited the computational resources available to security teams. These reductions coincided with increased investment in AI and virtual reality development, effectively shifting internal capacity away from fraud mitigation.
According to expert analysis, the core issue is not simply imperfect moderation but a systemic imbalance. The advertising infrastructure enables attackers to operate at industrial scale, while countermeasures remain comparatively slow, fragmented, and constrained by business considerations.
Allegations of financial incentives and internal documents
Concerns about Meta’s financial incentives have been amplified by investigative reporting citing leaked internal documents covering the period between 2021 and 2025. These materials suggest that Meta employees were, at times, discouraged from suspending malicious advertiser accounts due to fears of negative revenue impact, particularly in relation to funding AI development initiatives.
According to the documents, some accounts linked to abusive activity remained active despite hundreds of warnings and user reports. Rather than disabling these advertisers, Meta allegedly applied higher advertising fees as a form of penalty, allowing campaigns to continue operating. In some cases, internal statements implied that Meta’s targeting systems may have actively optimized scam ads by delivering them to users statistically most likely to click.
Meta’s official response and disputed figures
Meta has rejected this interpretation of internal materials, stating that they present a selective and misleading picture of the company’s anti-scam efforts. Company representatives argue that the conclusions exaggerate the extent to which scam advertising contributes to overall revenue.
While Meta previously projected that approximately $16 billion, or about 10% of annual revenue, could be linked to scam-related ads, the company now characterizes this estimate as overstated. According to its public response, actual figures are “significantly lower,” though no revised numbers have been disclosed.
In February, Meta reportedly restricted the authority of teams responsible for advertiser vetting, preventing them from implementing measures that could reduce revenue by more than 0.15% of total income. Meta disputes this characterization, stating that the guidance was intended to minimize revenue loss caused by false positives, where legitimate advertisers might be incorrectly blocked.
A structural problem, not a moderation failure
The evidence increasingly suggests that scam advertising on Meta platforms is not an isolated moderation failure but a structural byproduct of large-scale, profit-driven advertising systems. High automation, economic incentives, and slow enforcement create an environment where organized fraud can flourish. Until systemic changes align safety mechanisms with the speed and scale of abuse, users are likely to remain exposed to a level of risk that reactive moderation alone cannot meaningfully reduce.