Psychologist Glen Moriarty founded the emotional support platform 7 Cups in 2013 as a way to help people listen to each other's concerns, particularly when they had nowhere else to turn. Users are free to be as vulnerable as they wish, provided they obey the platform's community guidelines and terms of service.
The platform, which users can join at no cost, may seem like the perfect solution to both the loneliness epidemic and the broken American mental health care system, which is expensive and hard to access.
But for some users, 7 Cups comes with its own high cost: trolling and abusive behavior.
A months-long investigation into 7 Cups found that the platform sometimes struggles to contain and address problems with users who behave inappropriately or aggressively, or who threaten other users. In the past, such abuse has included discussion of sexual acts and fetishes as well as comments directing another user to kill themselves. Mashable found that teens may be targeted by predators.
This story is part of our investigation into the emotional support platform 7 Cups and the growing marketplace for apps and platforms that pair people with someone who is supposed to be a compassionate listener. The series explores a failed experiment between the state of California and 7 Cups, as well as the myriad risks of seeking emotional support online from strangers. These dangers can include the manipulation of vulnerable youth and targeted abuse and harassment. The series also includes an analysis of why it's so hard to stop online child exploitation, and looks at solutions to make platforms safer.
High-level current and former staff and volunteers, who spoke to Mashable anonymously because they didn't want to violate nondisclosure agreements they'd signed, say 7 Cups' approach to punishing those who violate the platform's rules can be surprising or confusing. Users, for example, have been encouraged by Moriarty himself to help rehabilitate "trolls" who behave poorly.
Moriarty denied that trolling was pervasive on the platform and noted that the company has taken steps over the last decade to improve user safety.
"We are constantly solving problems, getting stronger, and continue to hold true to our core mission of helping the community," Moriarty told Mashable in an email.
Trolls have existed on 7 Cups for years
Moriarty has long known about bad actors on 7 Cups, because he's personally been subject to their unwelcome behavior.
In June 2020, a few months into pandemic isolation, he dedicated a forum post on 7 Cups to the subject of "people who are trolling." He noted that his own experience on the platform "has included all types of trolling," including what he described as "sexual trolling," wherein "the person is trying to engage with you — sneakily — in a sexual manner."
Moriarty's advice to 7 Cups users about how to handle trolling was largely unconventional: He encouraged victims of abuse and harassment to attempt to persuade the other user to change their behavior.
His sample script included empathetic statements like, "I know that life has likely been challenging for you…I think that is partly why you are behaving towards me like you are right now."
The post elicited dozens of responses, including from users who'd been harassed.
One commenter challenged Moriarty's conviction that the platform's bad actors could be rehabilitated. They wrote: "[W]hat about the troll that keeps telling me to eat poison soup and go to the grave though?"
Another listener chimed in: "I'm lucky my troll finally decided to leave me alone. I say that because I don't feel that Cups did enough to protect me as a listener from the vile filth spewed forth in my inbox."
When asked about these remarks, Moriarty noted that other commenters reported "positive interventions they utilized in response to people trolling."
He told Mashable that "we take numerous steps to address and stop trolling behavior," including auto-detection of abusive activity and the use of blocking, muting, and reporting tools. He also said that the company has been developing a tool powered by artificial intelligence that can scan and identify messages that violate the platform's terms of service and guidelines in one-on-one and group chats.
"Our expectation is that this will make circumventing our existing safety processes and guidelines very, very difficult," Moriarty noted.
Whitney Phillips, assistant professor of digital platforms and ethics at the University of Oregon, reviewed a copy of Moriarty's 2020 post and the comments. She characterized Moriarty's approach to trolling behavior as harmful to users.
Phillips, author of This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture, said it's a common misconception that people always troll because they're wounded and act out for attention.
Instead, the behavior is often game-like: People who troll derive joy and pleasure from finding ways to make someone feel uncomfortable. They're not desperate for validation and often can't be deterred by appeals to a better self, Phillips said.
She also warned against asking triggered or traumatized users to rehabilitate their abuser, a request she described as "cruel." The responsibility of holding trolls accountable, and protecting victims, should rest with 7 Cups, Phillips said.
"To offer this advice, it's mismatched with the kinds of behaviors that are clearly chronicled in the comments," she added.
Multiple listeners secretly trolled other users
Moriarty's discussion of trolling also didn't reveal a discovery that 7 Cups staff said they made years ago: Some of the platform's highly rated listeners had alternate secret accounts they used to harass or bully other users. Moriarty denied this and said the behavior violated the platform's terms of service. Former staff said they stumbled across the problem when attempting to identify the platform's best listeners.
7 Cups had already deployed an algebraic formula to determine trust and reputation "scores" for listeners, which helped identify trolling accounts, as well as users demonstrating good behavior.
It wasn't long before staff noticed inappropriate, trolling, or bullying accounts registered to the same email addresses as highly rated listeners' accounts, or found other telling links between such accounts.
"You couldn't just say, 'This person's great and you can trust them all the time,'" a former staff member said.
According to multiple sources who worked for the platform over the last several years, the trolling problem grew severe enough that 7 Cups controlled its demo environment with hand-picked listeners who wouldn't sink the company's chances of landing a lucrative deal by engaging in offensive or abusive behavior. Moriarty also denied this claim.
Phillips said she's unsurprised that people engaging in trolling behavior have conflicting personas and accounts. Trolling actually requires good listening skills, according to Phillips' research. Such users must pay close attention to someone's vulnerabilities. But those who engage in trolling also possess the social skills to weaponize those vulnerabilities.
Phillips believes it's generally a mistake to simply observe people's online behavior and assume their actions are sincere, especially in a digital environment premised on helping others. Instead, there's the real possibility that people on emotional support platforms may be bored or even mean.
"People play in dark directions and light directions and lots of directions in between," she said. "They do all kinds of things for all kinds of reasons that don't fit into any clear-cut box, particularly one that takes sincerity as the default mode of human expression."
Dealing with trolling on 7 Cups
One infamous user has wreaked havoc on the platform by bullying, abusing, and threatening other users since at least 2019. They've been given opportunities to rehabilitate their behavior; Moriarty acknowledged that, years ago, the platform attempted to coach the user to behave in more "prosocial ways."
When they've violated those expectations and been banned, they've created burner accounts at a pace that 7 Cups staff has not been able to effectively counter. Moriarty said that when moderators recognize the user or their behavior, the account is banned in under a minute.
Recently, 7 Cups began requiring listeners to verify their phone number, which, if they use a real number, can more closely tie a user's identity to their behavior. Those who want to avoid detection can easily obtain a throwaway number from various online services. Moriarty said members would soon have to go through phone verification as well.
In order to deter abusive behavior and set expectations, 7 Cups uses a points system for listeners and members, but some of the punishments can seem surprisingly unclear or lenient. For example, an adult-teen listener who gives their contact information to a teen but doesn't initiate off-site contact isn't permanently banned from the platform; instead, they're given three behavioral points as a consequence. Initiating off-site contact with a teen is a four-point offense.
Both violations result in a warning, a break from the platform, and removal of the user's adult-teen listener badge and access to the teen community. Though the points system chart doesn't make this clear, Moriarty said that any adult-teen listener who behaves this way is put on a months-long break. They also lose their badge and cannot regain it in the future. Ten or more points leads to suspension from the platform, but points can also expire six months after they are accrued. Moriarty told Mashable that the points system is similar to how points on a driver's license work.
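Working only from the details above (points per offense, a ten-point suspension threshold, and six-month expiry), a driver's-license-style ledger might be modeled like this; the exact durations and triggers are assumptions.

```python
# Sketch of a points ledger per the article's description; the expiry
# window and threshold mechanics are assumptions, not 7 Cups' code.
from datetime import datetime, timedelta

EXPIRY = timedelta(days=182)  # roughly six months
SUSPENSION_THRESHOLD = 10

def active_points(ledger: list[tuple[datetime, int]], now: datetime) -> int:
    """Sum only the points accrued within the expiry window."""
    return sum(points for when, points in ledger if now - when < EXPIRY)

ledger = [
    (datetime(2023, 1, 5), 3),   # e.g., sharing contact info with a teen
    (datetime(2023, 8, 1), 4),   # e.g., initiating off-site contact
    (datetime(2023, 8, 20), 4),
]
now = datetime(2023, 9, 1)
total = active_points(ledger, now)
print(total, "suspended" if total >= SUSPENSION_THRESHOLD else "still active")
```

Note how expiry works in a repeat offender's favor: the January points have lapsed, so eleven lifetime points count as only eight active ones, below the suspension threshold.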
Violations that lead to immediate removal from 7 Cups, when users are caught, include spamming forums with inappropriate content or ads; posting inappropriate or graphic pictures; repeatedly sexting; and being underage or signing up for the wrong age community.
A former high-level volunteer who left the platform in 2020 said that its rules were unevenly applied. Newer members who committed more serious infractions were often bounced from the platform, but established listeners with a good trust score and a rapport with moderators might be given dispensation.
"If there is not consistent enforcement of the rules, it creates a permission structure for anything to happen," said Phillips.
Moriarty noted the difficulty of knowing exactly what happened in each situation that involved a violation of the rules.
"Not all cases are black and white," he told Mashable. "I imagine there have been uncertain issues, vague situations, or competing explanations where it could be interpreted as dispensation, but likely not significant."
Multiple sources who've worked or volunteered at 7 Cups stressed that they've tried to elevate safety issues and solutions over the years, with limited success. They felt that initiatives that would improve safety but cost money or potentially slow growth were ignored or rejected by senior management or Moriarty. He told Mashable that this characterization was inaccurate.
Though 7 Cups employs blocking and reporting tools, as well as the ability to ban users, those strategies are stretched when bad actors try repeatedly, and doggedly, to regain access to the platform by creating a new anonymous account. Currently, when a member is temporarily removed or banned from 7 Cups, it can be easy to make a new account using a quickly generated burner email address and a new fake persona.
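Mashable's sources didn't say whether 7 Cups screens registration emails, but one common, and easily evaded, defense against burner accounts is a denylist of disposable-email domains. A minimal sketch, with a deliberately tiny sample list:

```python
# Illustrative only; real denylists contain thousands of domains and
# still lag behind newly created disposable-email services.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def looks_disposable(email: str) -> bool:
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(looks_disposable("new.persona@mailinator.com"))  # True: likely a burner
print(looks_disposable("new.persona@example.com"))     # False: easy to slip past
```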
Security tools to stop trolls can be bypassed
Sources familiar with 7 Cups' security protocol say the site attempts to prevent bad actors from creating multiple burner accounts by tracking users' internet protocol (IP) addresses. Yet this tactic is rendered useless if someone accesses the internet through a virtual private network, which can conceal their digital identity.
Additionally, an IP address is an imprecise tracking tool as it can be assigned not to a user's device, but to the coffee shop they frequent or their dorm building. Banning a user based on that information might unintentionally ban dozens or hundreds of other people using that address.
A device fingerprint, a more specific set of data that can be tied to an individual device, can help narrow the search. Yet it's also an imperfect solution, given that sophisticated bad actors can use technology to mimic or hijack the identity of a different device.
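To make the contrast concrete, here is a toy comparison of an IP ban and a device fingerprint; the addresses and device attributes are invented, and real fingerprinting tools collect far more signals.

```python
# Toy contrast between an IP ban and a device fingerprint.
import hashlib

BANNED_IPS = {"203.0.113.7"}  # imagine a dorm's shared gateway address

def ip_banned(ip: str) -> bool:
    # Blocks the banned user AND every neighbor behind the same address.
    return ip in BANNED_IPS

def device_fingerprint(user_agent: str, screen: str, timezone: str) -> str:
    # Hash device attributes into an ID that survives IP changes, though
    # anyone who spoofs these attributes gets a fresh fingerprint.
    raw = "|".join([user_agent, screen, timezone])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

print(ip_banned("203.0.113.7"))  # True for the troll and the neighbor alike
troll = device_fingerprint("Firefox/120", "1920x1080", "UTC-5")
neighbor = device_fingerprint("Safari/17", "1440x900", "UTC-5")
print(troll != neighbor)  # True: fingerprints separate them where the IP can't
```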
As a result, sources say that for years the platform's moderators have played whack-a-mole trying to catch users who've been banned for various infractions but quickly return with a new account.
John Baird, cofounder and CEO of the identity verification company Vouched, told Mashable that while an IP address and device ID can help identify bad actors, they shouldn't be the sole way to verify an identity and block a user from accessing a platform. Vouched, for example, uses visual evidence, algorithmic evaluation, geo-location data, and device-related information, among other strategies, to verify identity and vet an individual's risk to an organization.
"Security is always multiple factors stacked on top of one another to be able to catch the bad guy," said Baird. "The challenge is, if it's a single factor, the bad guys will figure out a way around that single factor."
Moriarty told Mashable he was confident that new technology solutions, like the AI-powered speech detection tool, would be "more effective at scale than anything else has been to date."
He also acknowledged that 7 Cups may have fallen short despite its efforts: "We understand that we are far from perfect, but have worked hard and continue to work hard on this issue."
Still, the gaps in 7 Cups' evolving security have arguably taken a toll on its members and moderators.
Last summer, a user made multiple accounts on the platform and told people to kill or harm themselves. Mashable viewed evidence of the incident and its fallout.
Separately, the user who has frequently engaged in abusive behavior over the past several years was also creating new accounts to evade bans, exhausting the platform's moderators with their efforts to stay on the site.
People complained when the user began a new harassment campaign last year, including telling a listener to kill themselves, according to documentation shared with Mashable.
Of this incident, Moriarty said that censors blocked the language and that the user was removed: "The system worked as designed."
According to a source familiar with the problem, the user has continued to harass members and vex moderators since then.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.