Elaborate Social Engineering Scam on Discord

Over the past month, I have been the target of an elaborate social engineering scam that I thought was worth talking about, involving Discord, Community servers, and Roblox.

Essentially, scammers have been forging videos of me sending phishing links to other users in public Discord servers, sending that “evidence” to real moderators who then ban me, and then pretending to be a moderator and contacting me directly once the ban has occurred.

I would like to assure everyone that my account has not actually been hacked into. I have not received any fraudulent login attempts, 2FA notifications, or other warnings that would raise legitimate alarm.

The following Google Doc explains the scam in full and shows the forged evidence as well.
https://docs.google.com/document/d/1iNNtyBx9lVr-zZtxBn3W-sr457M2FnkN3Ode8y8JuuI/edit?usp=sharing

I’ll assume you’ve read the doc, or at least skimmed it. The following is about the nature of the scam:
I think this situation raises some very interesting questions for me.

How do you, as a moderator, verify that someone’s account is truly sending other users private messages which you cannot view?
And in addition, what systems should Discord, or any social media site for that matter, have in place to protect its users from social engineering scams of this nature?

Generally speaking, most moderators are unpaid volunteers in a typically thankless job. When dealing with bots, scammers, and spam, it’s much easier to shoot first and ask questions later. The unfortunate thing is that, for me, the appeal process for the Blender Discord server is essentially nonexistent. I am unable to contact them directly, and they have already refused a middleman once.
A video like the one that was sent appears pretty damning. I assume most people, myself included, would consider it enough. Only by knowing the truth, that I didn’t send those messages, do you start asking questions like: maybe the video was edited really well?

Another question comes up: why would the person reporting this go through so much effort to make the Discord app appear as real as possible, and to copy the user ID as well? It seems like too much evidence, unprompted, when most users would just leave a message or a screenshot behind. Screenshots are much easier to fake, of course, but I don’t think most reporters would try to make their proof so ironclad.

I think that if you are a moderator investigating a situation like this, a brand-new reporting account should immediately raise suspicion, especially if the reported account has been in the server for many years.
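(As an aside, a copied user ID alone tells a moderator how old an account is: Discord IDs are “snowflakes” whose upper bits encode a creation timestamp. This is a small sketch based on Discord’s documented snowflake format; the function name is my own.)

```python
from datetime import datetime, timezone

# Discord's snowflake epoch: 2015-01-01T00:00:00Z, in milliseconds.
DISCORD_EPOCH_MS = 1_420_070_400_000

def account_created(user_id: int) -> datetime:
    """Decode the creation time embedded in a Discord snowflake ID."""
    # The top 42 bits of a snowflake are milliseconds since the Discord epoch.
    ms_since_epoch = (user_id >> 22) + DISCORD_EPOCH_MS
    return datetime.fromtimestamp(ms_since_epoch / 1000, tz=timezone.utc)

# Example snowflake from Discord's API documentation:
print(account_created(175928847299117063))  # 2016-04-30 11:18:25.796000+00:00
```

So even without any server history, a reporter’s (or reported user’s) ID immediately shows whether the account was created last week or years ago.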

From the platform side, I wish Discord had a way to truly forward messages from other people, in a form that lets a moderator verify, on their own uncompromised client, that the message really came from that account. There is a forwarding feature, but it strips all personal identifiers. There might be a security concern here, but I don’t see how it’s any more of a concern than screenshots or video evidence, both of which can be used as proof but can also be faked. Maybe someone here with better security knowledge could explain why this is a bad idea, in a way that screenshots and screen recordings don’t already violate?

I know it’s not Discord’s policy to moderate disputes like these on community servers, but I still feel it’s their responsibility as a platform to provide the tools that allow communities to self-moderate. I don’t know how interested they are in that, though, especially since they got rid of PID numbers.

4 Likes

First, I’m really sorry you had to go through that. That’s a lot of extra hassle for anyone, let alone for communities that are directly tied to your livelihood.

These elaborate scams almost always come from Roblox. It’s a toxic community, and scammers there depend on elaborate schemes like these mostly to swindle kids. Hence the threat of “in-game ban and possibly bans related to partnered games”. There are whole scripts and detailed playbooks for these, and they cycle through them as soon as one wears out or becomes relatively well known.

Unfortunately, neither Discord nor Roblox has any kind of incentive to moderate. If anything, Roblox actively supports its abusive community because it’s valuable to them.

As for moderating phishing links, I think servers should be able to disable DMs that use the server as a “connection”. It’s a major gap in that Discord feature: it might be useful for small friend servers (how Discord started), but it does not work well for Community servers. I don’t think it’s a stretch to require that communication be initiated in a public space and only shifted to DMs by friending each other. That would at least provide some paper trail that a moderator can use to verify whether DMs were actually sent.

The other part is that there really should be a separation of domains here. Community moderators have no way to verify whether their server was used as the “connection” in a phishing DM, so they have no choice BUT to moderate people accused of sending phishing links through their server. As a result, they are forced to moderate areas that really should be handled by Discord moderation. They are community mods, not Discord mods. The feature mentioned above would give mods certainty that their server was abused for phishing DMs, so they can moderate accordingly. Otherwise, those DMs can be left to Discord moderation to handle securely, since they do have access to those PIDs.

3 Likes

I’m really sorry to hear that you’re going through this; it sounds really frustrating.

I agree with @generalred that Discord’s design around communities and direct messages is clearly lacking, which allows these kinds of scams to proliferate. In my opinion, this will likely continue to happen, since Discord seems more interested in enshittifying its service with so-called “Quests” and other features that don’t really benefit the user.

To answer one of your questions:

How do you, as a moderator, verify that someone’s account is truly sending other users private messages which you cannot view?

In my previous experience with moderation on Discord, the answer is: you don’t. At most, you can take a report at face value. Not only do these scams happen a lot; it’s also well known that Discord accounts in previously good standing tend to get hacked and repurposed to spread them. Fatigue eventually sets in, and you start taking these reports less seriously and giving them less time, especially for accounts that have never sent any messages in the server, since that is what happens most of the time.

I was personally unsatisfied with this for my own Discord server, so the rules there are different. To even get in, you need to pass an interview via Discord’s server application process (similar to t/suki, actually). This cuts out a lot of problems, since most scammers won’t bother, and I know who each person is and why they’re in the server, but I can understand why it’s unfeasible for larger communities.

3 Likes

It sucks that you’re going through that. I think the best tool against scamming is education: “no such thing as a free lunch,” “check that the email address from your bank is legit,” “look for glaring typos and syntax errors in professional correspondence,” etc.

I made a thing to use with our moderation bot YAGPDB.xyz that seeks to provide some education around scams and scamming:

Don’t Get Scammed!

PSA: If someone sends you a DM/friend request or asks you to download/scan something, consider the following:

  1. Do I trust this person?
    • Do I know them?
    • Have I talked to them before?
  2. Is this a fresh new account? (Click on a user and View Profile)
    • When did they join Discord?
    • When did they join the server?
  3. Are they active in our mutual server?
    • Can I search for their name? Steps:
      • Click on the Discord search (top right)
      • type from: <username> in search field
        • substitute the actual username for <username>
    • What’s their message history like?
      • From what I can tell, do they write like a human?
  4. Are they trying to get me to click/scan/download something? Including but not limited to:
    • Steam keys
    • Website/Discord Server Access
    • Links to beta test software/games
    • Alternate Discord sign-in via QR codes or anything other than the supported login methods
    • Anything free, especially for a limited time only

TL;DR

If anything at all seems suspicious, don’t click, scan, download or otherwise engage!

Instead:

Ghost → Report → Block

And remember:
✨ A compromised account is a banned account ✨

❤️

The r/Ableton Discord Moderation Team

Further reading:

QR Code Safety
Scam & Phishing
Reporting Abusive Behavior to Discord


This is what it looks like in practice:


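For anyone wanting to replicate this, a YAGPDB custom command along these lines can post an embed like the one above (a sketch using YAGPDB’s `cembed` template function; the description here is abridged from the PSA text, and the color value is arbitrary):

```
{{ sendMessage nil (cembed
  "title" "Don't Get Scammed!"
  "description" "PSA: If someone sends you a DM/friend request or asks you to download/scan something, stop and think before engaging. If anything at all seems suspicious: Ghost → Report → Block."
  "color" 15158332
) }}
```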
I moderate a server with a little over 11K users, and we get scam bots all the time. Our general rule, though, is that if it didn’t happen on the server, it didn’t happen.

For me, if someone is harassing another user or sending them scams via DM, there’s ample opportunity to block or ignore them. I have to take that into account when investigating reports: I can’t just trust a report if it didn’t happen in a space I actually moderate.

I don’t discredit the reporter or ask them to “prove” anything, but I also don’t go all in on their perspective as the prevailing one. If a report comes in about something private, I share instructions on how to handle it privately and then put the reported user on a “watch list,” so to speak.

If the behaviour is actually as bad as the reports say, it will reach the server eventually. That’s when I can act. Scam accounts are notorious for playing themselves, and it’s often best to just let them do that, imo.

If impersonation is happening on the server, I can easily check that and handle it.

All that to say: if a mod is acting on evidence that only exists in DMs, that mod might be a little too trigger-happy. I do know the Blender Community has had issues with rogue mods in the past.