Facebook especially gives absolutely zero ducks about its ads being literal phishing scams, and it’s been like that for years. Reporting them doesn’t help at all.
In most similar situations you’d be charged as an accessory to the crime, but somehow the online ad business avoids that? How?
Comments
They have functionally infinite money to draw out court cases, and the US Justice Department is deeply incentivized to do nothing about it.
Other countries can sue them, but most will do nothing more than hand out a slap on the wrist: a fine they’ll gladly pay so they can keep doing nothing. Anything bigger would be litigated for years, if not decades.
Part of it seems to be “safe harbor” provisions like those in the DMCA. Because Google and Facebook have so many ad partners, they “can’t reasonably be expected to vet every single ad,” so they rely on AI and user reports to catch bad ads, and as long as they remove illegal ads once they’re made aware of them, they can’t be sued. I don’t know this for sure for the ad business, but I do know that’s how the DMCA works: YouTube can’t be sued directly for copyright infringement, since it just hosts the content, and because it receives so many uploads that human review of every single one would be impossible, the law gives it the benefit of the doubt (“it’s OK until you’re made aware; then, if you don’t comply, you’re getting sued”). That’s the only logical explanation I know of for why they can get away with it.
I’ve had a similar experience with Facebook ads from the other side of that coin too…
I run a small business and recently spent some money on advertising with the objective of receiving more leads through messages.
ALL of the messages I have received have been fake support pages trying to scam me that my page is going to be closed down. Every. Single. One.
I have literally paid Facebook to get people to try and scam me.
I’ve reported all of these pages, and most of them are still up.
I won’t be running ads again.
They rely on various laws that mean they don’t have to police every piece of information that gets put onto their platform. In particular, Section 230 of the Communications Decency Act, which shields platforms from liability for content their users post.
Facebook has several billion users and a few million advertisers. Anyone can serve ads, and the barrier to entry is quite low. They simply don’t have the people power to manually approve every single ad, so automated tools (some AI, some traditional) give each ad a quick check for anything problematic, like nudity, violence, or banned keywords and links, and then it gets published (something like the sketch below).
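Just to illustrate the shape of that kind of quick automated check, here’s a toy sketch in Python. The keyword list, the classifier stub, and the thresholds are all invented for illustration; this is nothing like Facebook’s real pipeline:

```python
# Hypothetical sketch of an automated ad pre-publish check. Everything
# here (keywords, classifier, thresholds) is made up for illustration.
BANNED_KEYWORDS = {"account suspended", "verify your password", "crypto giveaway"}

def classify_image(image_bytes: bytes) -> dict:
    """Stand-in for an ML model scoring an image for policy violations."""
    return {"nudity": 0.01, "violence": 0.02}  # dummy scores

def review_ad(text: str, image: bytes, link_domain: str,
              blocked_domains: set[str]) -> str:
    text_lower = text.lower()
    if any(kw in text_lower for kw in BANNED_KEYWORDS):
        return "rejected: banned keyword"
    if link_domain in blocked_domains:
        return "rejected: blocked link"
    scores = classify_image(image)
    if max(scores.values()) > 0.9:
        return "held for human review"
    return "published"  # anything the quick check misses ships, and only
                        # user reports can catch it after the fact

print(review_ad("Limited crypto giveaway!!", b"", "example-scam.com",
                {"example-scam.com"}))
```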
As long as they give the end user a way to report content and are seen to take at least some action on those reports, they’re not breaking the law: they can (and have) argued in court that they took reasonable steps to protect users from scams, and that they realistically cannot screen every single bit of content. Even with a team of 10,000 people reviewing around the clock, checking the roughly 1 BILLION posts that get made each day would mean each person clearing more than one post per second, with no breaks, ever.
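For a sense of scale, here’s the back-of-the-envelope math. The one-billion-posts and 10,000-reviewer figures are the assumptions from the paragraph above, not official numbers:

```python
POSTS_PER_DAY = 1_000_000_000   # assumed figure from the paragraph above
REVIEWERS = 10_000              # hypothetical moderation team size
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

posts_per_reviewer = POSTS_PER_DAY / REVIEWERS           # 100,000 posts each
seconds_per_post = SECONDS_PER_DAY / posts_per_reviewer  # ~0.86 s per post

print(f"{posts_per_reviewer:,.0f} posts per reviewer per day")
print(f"{seconds_per_post:.2f} seconds per post, 24/7 with no breaks")
```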
It basically boils down to big numbers. YouTube can’t watch all of the 500 hours of footage uploaded to the site every single minute (yes, 500 hours every minute!), so it gives everything a quick once-over with automated tools and lets end users and the various record labels and content providers flag anything that could be problematic.
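The same back-of-the-envelope math applies here, taking YouTube’s widely cited 500-hours-per-minute figure at face value:

```python
HOURS_UPLOADED_PER_MINUTE = 500  # YouTube's widely cited public figure
minutes_uploaded_per_minute = HOURS_UPLOADED_PER_MINUTE * 60  # 30,000

# Watching everything in real time takes one full-time viewer for every
# minute of video uploaded per minute:
print(f"{minutes_uploaded_per_minute:,} people watching nonstop to keep pace")

# With standard 8-hour shifts covering 24 hours, triple that:
print(f"{minutes_uploaded_per_minute * 3:,} reviewers on the payroll")
```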