How One Troll Took Over Kick With Bots, Doxxing and Harassment

What Happened on Kick: December 2025 Bot Attack Overview

In early December 2025, a single troll brought one of gaming’s fastest-growing streaming platforms to its knees. Over 50 streamers fell victim to a coordinated campaign of bot attacks, harassment, and doxxing that left some unable to broadcast for hours. The attacker, known as PBD (PeriodBloodDrinker), didn’t just disrupt streams. They exposed the personal information of streamers, their families, and in some cases, their children.

Key Facts: December 2025 Kick Attack

Victims: 50+ streamers affected
Attack Duration: 4+ hours on peak day
Accounts Created: 100,000+ bot accounts
Timeline: 8 months of harassment before escalation

The incident represents one of the most severe platform safety failures in recent memory. On December 3, 2025, the attacks reached their peak intensity. Streamers found themselves “front-paged” with artificial viewers while thousands of bot accounts flooded their chats with spam and threats. Some were forced into sub-only mode for over four hours, effectively cutting off interaction with their real audiences.

The fallout extended beyond inconvenience. Victims reported long-term psychological effects, declining viewership, and real-world safety concerns. Several have already filed complaints with the FBI and local law enforcement. What makes this case particularly troubling is how long it went unchecked. Reports suggest PBD had been operating for eight months before the December escalation brought widespread attention.

For the streaming industry, this incident raises uncomfortable questions about platform responsibility and the limits of content moderation.

How PBD Used Kick Bots and Harassment to Disrupt Streams

PBD’s operation relied on a sophisticated arsenal of automated tools and coordinated harassment tactics. Understanding the technical methods reveals just how vulnerable streaming platforms can be to determined bad actors.

Automated Chat Flooding

The foundation of PBD’s attack was scale. According to victim reports, the troll controlled access to over 100,000 accounts. These kick bots would flood live chat rooms simultaneously, drowning out legitimate viewers with spam, threats, and personal information. The sheer volume made manual moderation impossible. When moderators banned accounts, new ones appeared within seconds.

The bots weren’t simply spamming random text. They were programmed to post targeted harassment, including doxxed information about streamers and their families. This transformed chat rooms from community spaces into weapons.
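For a sense of how platforms try to counter flooding at this scale, consider a rate-based heuristic. The sketch below is purely illustrative and assumes nothing about Kick’s actual systems; the class, thresholds, and field names are all hypothetical. It flags a chat room when messages from very young accounts spike within a short window:

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical thresholds; a real platform would tune these empirically.
NEW_ACCOUNT_AGE_SECS = 7 * 24 * 3600   # "young" = created within the last week
WINDOW_SECS = 10                        # sliding window length in seconds
FLOOD_THRESHOLD = 50                    # young-account messages per window

@dataclass
class ChatMessage:
    account_created_at: float  # Unix timestamp of account creation
    sent_at: float             # Unix timestamp of the message

class FloodDetector:
    """Flags a chat room when freshly created accounts dominate the message rate."""

    def __init__(self) -> None:
        self.young_msg_times: deque = deque()

    def observe(self, msg: ChatMessage) -> bool:
        """Returns True when the room looks like it is being flooded."""
        # Evict window entries older than WINDOW_SECS.
        cutoff = msg.sent_at - WINDOW_SECS
        while self.young_msg_times and self.young_msg_times[0] < cutoff:
            self.young_msg_times.popleft()
        # Only messages from very young accounts count toward the flood score.
        if msg.sent_at - msg.account_created_at < NEW_ACCOUNT_AGE_SECS:
            self.young_msg_times.append(msg.sent_at)
        return len(self.young_msg_times) >= FLOOD_THRESHOLD
```

A single-signal detector like this is easy to evade by aging accounts in advance, which is notable given reports that PBD spent months building account infrastructure before the December escalation.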

View Count Manipulation

PBD also deployed Kick view bots to artificially inflate viewer numbers. This might seem counterintuitive, but the tactic served multiple purposes. Inflated counts pushed targeted streamers to prominent positions on the platform, attracting more attention to the harassment. It also made the attacks more visible, amplifying their psychological impact on victims.

Targeted Selection

The attacks weren’t random. PBD specifically targeted female streamers, following a pattern common in online harassment campaigns. Victims reported persistent stalking across multiple streams, with the attacker tracking when they went live and coordinating bot swarms accordingly. The sustained harassment created an environment where some streamers no longer felt safe continuing to broadcast.

Doxxing Is a Federal Crime

Publishing someone's private information (doxxing) can violate federal cyberstalking laws under 18 U.S.C. Section 2261A. Penalties include fines and up to 5 years imprisonment. When minors are targeted, additional charges may apply.

Escalation to Doxxing

The most serious element was the public exposure of personal information. PBD obtained and distributed home addresses, phone numbers, and photos of family members. In several documented cases, children’s information was posted in chat. This crossed the line from digital harassment to real-world safety threats, prompting victims to contact law enforcement.

The technical sophistication suggests either significant resources or access to existing bot farm infrastructure. Rapid account regeneration and coordinated timing point to automated scripts rather than manual operation.


Timeline: How the Kick Bot Attack Escalated (April - December 2025)

The December crisis didn’t emerge overnight. PBD had been building capabilities and testing platform defenses for months before the final escalation.

Attack Timeline Overview

April-October 2025: Initial operations and account building
November 28: Public X account created
December 1: Community complaints intensify
December 3: Peak attack day
December 4: FBI reports filed

Early to Mid-2025: Initial Operations

Kick streamer victims report harassment beginning as early as spring 2025. During this period, PBD established patterns of targeting, tested moderation evasion techniques, and built the account infrastructure that would enable larger attacks. Platform reports filed during this time went largely unaddressed.

November 28, 2025: Public Escalation

PBD established a public presence on X (formerly Twitter) under the handle @Kicks_PBD. This account was used to announce attacks, mock victims, and coordinate with other bad actors. The public profile marked a shift from covert harassment to open provocation.

December 1, 2025: Community Response

Complaints within the Kick community intensified as more streamers shared their experiences. Discussion threads identified PBD as a “common” problem that had persisted for months. Coverage at this point centered on growing frustration with platform inaction.

December 3, 2025: Peak Attack

The most severe day of attacks began. Multiple streamers were targeted simultaneously with bot swarms lasting over four hours. Doxxing reached new extremes, with personal information about children being distributed in chat. Sub-only mode proved ineffective as bots had been given subscription access. Several streamers ended broadcasts entirely.

December 4, 2025: Law Enforcement Involvement

Victims began filing formal complaints with the FBI, local police departments, and in some cases, counterterrorism divisions. The legal response marked a new phase in the incident, moving from platform moderation to criminal investigation.


Why Kick’s Moderation Failed to Stop the Attack

The PBD incident exposed significant gaps in Kick’s content moderation infrastructure. While no platform can prevent all abuse, the eight-month duration of this campaign raises serious questions about response capabilities.

Reactive Rather Than Proactive

Kick’s moderation approach relied heavily on user reports and manual review. Against an attacker generating thousands of accounts, this model couldn’t keep pace. By the time one wave of bots was banned, replacements were already active. Streamers reported filing dozens of reports with no visible effect.

Insufficient Anti-Bot Technology

Modern streaming platforms employ various automated defenses against bot activity. These include CAPTCHA challenges, behavioral analysis, and device fingerprinting. Kick’s implementation of these tools appears to have been inadequate against PBD’s techniques. The rapid account creation and coordinated activity should have triggered automated flags long before December.
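As an illustration of what an automated flag might look like, here is a minimal signup velocity check, sketched under the assumption that registrations can be keyed by IP address or device fingerprint. The limits and names are invented for the example and are not Kick’s implementation:

```python
import time
from collections import defaultdict, deque

# Assumed limits, for illustration only.
MAX_SIGNUPS_PER_WINDOW = 3
WINDOW_SECS = 3600  # one hour

class SignupVelocityCheck:
    """Blocks bursts of account creation sharing an IP or device fingerprint."""

    def __init__(self) -> None:
        self._signups = defaultdict(deque)

    def allow(self, key: str, now=None) -> bool:
        """`key` is an IP address or device fingerprint; False means block."""
        now = time.time() if now is None else now
        window = self._signups[key]
        # Evict signups that have fallen out of the window.
        while window and window[0] < now - WINDOW_SECS:
            window.popleft()
        if len(window) >= MAX_SIGNUPS_PER_WINDOW:
            return False
        window.append(now)
        return True
```

Attackers rotate proxies to defeat IP-based limits, which is why fingerprinting and behavioral analysis are layered on top. The scale reported in this incident, more than 100,000 accounts, suggests those layers either were missing or were bypassed.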

Platform Comparison

Twitch and YouTube have faced similar coordinated harassment campaigns. Both platforms have developed more robust automated detection systems, though neither is immune to abuse. Kick, as a newer platform, may lack the technical infrastructure and moderation staff that established competitors have built over years.

Community Demands

Affected streamers are calling for immediate improvements. Requests include stronger account verification requirements, improved detection of coordinated bot behavior, and faster response times for harassment reports. Some have demanded direct communication from Kick leadership about platform safety measures.

The streamer exodus has already begun. Several high-profile creators have reduced their Kick presence or moved exclusively to other platforms, citing safety concerns.


Legal Fallout: Federal Complaints, the CFAA, and Platform Liability

The December attacks crossed legal lines that extend well beyond platform terms of service. Victims are pursuing multiple avenues of legal recourse, though prosecution presents significant challenges.

Federal Complaints

Multiple streamers have filed complaints with the FBI’s Internet Crime Complaint Center. The doxxing of minors and persistent stalking behavior potentially invoke 18 U.S.C. Section 2261A, the federal cyberstalking statute. This law criminalizes using electronic communications to cause substantial emotional distress or reasonable fear of death or serious bodily injury.

Computer Fraud Claims

The automated bot attacks may also fall under the Computer Fraud and Abuse Act (CFAA). Mass account creation and coordinated platform manipulation could constitute unauthorized access or exceeding authorized access to computer systems. However, CFAA prosecution has historically proven complex in cases involving terms of service violations.

Local Law Enforcement

Some victims have filed reports with local police departments and, in cases involving threats, counterterrorism divisions. The doxxing of home addresses creates jurisdiction in victims’ localities, potentially enabling state-level prosecution for stalking or harassment.

Current Investigation Status

As of early December 2025, no arrests have been confirmed. Identifying anonymous online attackers requires substantial investigative resources. PBD's use of multiple accounts and potential technical obfuscation complicates attribution.

Platform Accountability

Legal experts have suggested Kick could pursue civil action through cease-and-desist orders. However, Section 230 of the Communications Decency Act generally protects platforms from liability for user-generated content, limiting legal pressure on Kick itself.


Frequently Asked Questions About the Kick Bot Attack

What is Kick streaming and why was it targeted?

Kick is a livestreaming platform that launched in 2022 as an alternative to Twitch. It gained popularity by offering streamers a higher revenue share and more permissive content policies. The platform’s rapid growth and newer moderation infrastructure may have made it an attractive target for attackers seeking platforms with fewer defensive capabilities.

How do Kick view bots work?

View bots are automated programs that simulate viewer activity on streams. They connect to broadcasts and register as viewers without any human involvement. More sophisticated versions can interact with chat, follow channels, and even subscribe. Operators typically run these bots from cloud servers or compromised computers to avoid detection.
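Detection typically works from the mismatch this creates: inflated viewer counts with engagement that doesn’t follow. The heuristic below is a hedged sketch with made-up thresholds, not any platform’s real detector:

```python
def looks_view_botted(viewers: int, unique_chatters: int,
                      baseline_viewers: float) -> bool:
    """Crude heuristic: a sudden viewer spike with flat chat engagement.

    `baseline_viewers` might be the channel's trailing 30-day median.
    All thresholds are illustrative assumptions.
    """
    spiked = viewers > 5 * max(baseline_viewers, 1.0)
    engagement = unique_chatters / max(viewers, 1)
    return spiked and engagement < 0.01  # under 1% of "viewers" ever chat

# A channel that normally draws ~300 viewers suddenly reports 20,000,
# yet only 40 distinct accounts are chatting.
print(looks_view_botted(20_000, 40, 300.0))  # True
```

Notably, PBD’s bots also flooded chat, which would defeat this particular ratio check, one reason platforms combine many signals (connection patterns, watch-session behavior, account age) rather than relying on any single heuristic.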

Can streamers protect themselves from harassment on Kick?

Streamers can take several protective measures, though none are foolproof. These include enabling sub-only chat mode, using keyword filters, appointing trusted moderators, and avoiding sharing personal information. However, determined attackers with significant resources can circumvent most streamer-level protections. Platform-level intervention remains essential.
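To make the keyword-filter idea concrete, here is a minimal sketch of a streamer-side filter bot. The blocklist entries and normalization rules are placeholders; real chat filters, including Kick’s built-in tools, are more involved:

```python
import re

# A streamer-maintained blocklist; these entries are placeholders.
BLOCKED_TERMS = ["home address", "phone number", "dox"]

def normalize(text: str) -> str:
    """Collapse common obfuscations: leetspeak digits, repeats, punctuation."""
    text = text.lower()
    text = text.translate(str.maketrans("013457", "oleast"))  # crude leet map
    text = re.sub(r"(.)\1{2,}", r"\1", text)  # "dooox" -> "dox"
    text = re.sub(r"[^a-z ]", "", text)       # strip symbols and punctuation
    return text

def should_block(message: str) -> bool:
    """True if the message matches any blocklisted term after normalization."""
    cleaned = normalize(message)
    return any(term in cleaned for term in BLOCKED_TERMS)

print(should_block("posting his h0me addre55 now"))  # True
print(should_block("great stream today!"))           # False
```

Filters like this stop casual abuse; against tens of thousands of coordinated accounts posting novel variants, they mostly buy moderators time.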


What is Kick doing to prevent future attacks?

Kick has not issued a detailed public statement about specific security improvements following the December 2025 attacks. The platform faces pressure from creators to implement stronger verification requirements, improved bot detection, and faster response times for harassment reports. Whether meaningful changes will follow remains to be seen.


Conclusion

The PBD attacks on Kick represent a critical failure in platform safety that affected over 50 streamers and their families. As victims pursue legal action and the streaming community demands accountability, this incident may become a defining moment for how platforms approach content moderation. The question now is whether Kick will treat this as a wake-up call or a passing crisis.