kekbees How to Quickly Spot a Clown March 15, 2021, 9:08 p.m. No.394

This write-up from the /qresearch/ General head post is re-posted here, because it's good to be able to see the clowns clearly… Enjoy!


Quickly spot a clown: >>396

Spot a clown's bot: >>397


Thank you to the anons who put it together.

Original Post:

kekbees March 17, 2021, 10:52 a.m. No.396   >>394

Don't get suckered into wasting your time. Hardened anons should already know this shit; this is more a crash course for new visitors.


How to Quickly Spot a Clown


They will:


  • Attempt to get a divisive or emotional response from you to derail research.

  • Concern troll and spam copy/pasta shilling to contradict confirmed findings.

  • Employ faux debate tactics: generalizations, gaslighting, projection, misdirection, false equivalences, confusing correlation with causation, appeals to authority, transference, false precepts, personal attacks, straw men, red herrings, etc.

  • Promote disingenuous social tactics like doxxing, "reverse psychology", or spreading propaganda.

  • Promote tactics that are unethical, illegal, or that involve violence outside the scope of the law.

  • Employ Fear, Uncertainty and Doubt to dissuade research.


Topic sliding If information of a sensitive nature has been posted on a discussion forum, it can be quickly removed from public view by topic sliding. In this technique, a large number of unrelated posts, or posts aimed at diluting the information presented, are submitted in an effort to trigger a topic slide and literally push content out of view. Operators can control several fake UIDs via the bots they make use of; these can also be called upon in the other techniques to mask the intent of the operator from the users at large. Although it is difficult or impossible to censor the posting, it is now lost in a sea of unrelated and bogus postings.
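The mechanics of a slide can be shown with a toy model. This is a minimal sketch under assumptions not stated in the post: the board's catalog is bump-ordered (a reply moves a thread to the top), and only the first few threads are easily visible.

```python
# Toy model of "topic sliding" on a bump-ordered board.
# Assumptions (illustrative, not from the post): the catalog sorts threads
# by last-post time, and only the first PAGE_SIZE threads stay visible.

PAGE_SIZE = 3

def bump(catalog, thread):
    """Posting in a thread moves it to the top of the catalog."""
    catalog.remove(thread)
    catalog.insert(0, thread)

# The sensitive thread starts at the top of the catalog.
catalog = ["research", "memes", "news", "misc"]

# An operator's bots post into unrelated threads, bumping each one
# above the sensitive thread until it falls off the visible page.
for filler in ["memes", "news", "misc"]:
    bump(catalog, filler)

visible = catalog[:PAGE_SIZE]
print("research" in visible)  # False — slid out of view without any censorship
```

No post was deleted; volume alone moved the content out of sight, which is the whole point of the technique.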


Seeding bad information Operatives will insert flawed or bogus information from time to time as an ongoing tactic, depending on their skill set and the needs of their mission. Their most common ruse is providing information or evidence which is backed by bad source material in the hope that the "source of the source" is never checked. This serves several objectives, mainly resource consumption, evidence pollution, discouragement and misdirection.


Astroturfing consensus This is a technique that attempts to build a manufactured consensus around a flawed set of statements or compromised information. It is related to consensus cracking, where false evidence is injected in an attempt to dispute or discredit the current consensus and push it toward the desired false one. Misleading and false evidence and information are often salted into the evidence pool to impede organic consensus building while also poisoning the available information.


Cultivating tacit approval (The legal term for this is 'silent agreement') Attempting to attain this state is done using a technique where operators try to convince the user population to ignore, or not respond to, bad information or false assertions. This is done in a bid to reduce push-back against the above-mentioned tactics. It's worth noting that the reply-filtering mechanism of the boards (which currently can't be disabled without code changes from the site admin) is used as a weapon of sorts in this tactic: filtering with software prevents anons from defending against seeded bad information and astroturfed consensus. This is why the operators push so hard to condition anons into filtering material they disagree with.

kekbees March 17, 2021, 10:52 a.m. No.397   >>394

How to spot a Clown's bot


NEVER DIRECTLY ENGAGE A BOT It just wastes bread with their responses and hands them a target to programmatically lock on to without handler interaction. The easiest way to foil the bots seems to be pointing them out by proxy: copy/paste the user's ID as a quoted reference, intentionally break its post link in your response, and/or answer it with a meme, until they start misfiring because they can't parse the response to lock onto a target correctly. Doing this can also make the bots look artificially erratic and easier for other anons to spot.


What we know about the cl0wnbots


  • Are used to facilitate topic sliding, manufacturing consensus, obfuscation of intent, and general disruption

  • Require a handler to watch for targets and be present in the thread

  • Cannot enter threads themselves

  • Respond to replies and each other, and can create replies

  • Can pick up random or contradictory meme flags

  • Activate on lists of trigger words; these can change over time

  • Use a combination of legit pasta, pre-written points, or spam targeted at various objectives

  • Have unwittingly pasta'd supportive replies

  • Are employed mostly at night and on weekends (US time)

  • Add to bump limits

  • Can be filtered by ID once they are observed

  • Are not perfected and can be easily spotted

  • Were deployed starting on the /CBTS/ board

  • Have certain flaws that can cause them to misfire in sometimes comical ways

  • Handlers can create bogus clown threads, but can also be confused by accidental ones

  • Handlers still cannot access the servers

  • Have still not succeeded in their mission

  • Still can not meme
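The "filter by ID once observed" point above can be sketched as a simple per-thread blocklist. The post structure and IDs here are hypothetical, purely for illustration:

```python
# Sketch of "can be filtered by ID once observed": a per-thread blocklist.
# Post dicts and hex IDs below are hypothetical, for illustration only.

observed_bot_ids = set()

def flag_bot(post_id):
    """Mark an ID as a bot once its behavior has been observed."""
    observed_bot_ids.add(post_id)

def visible_posts(posts):
    """Hide all posts from IDs already flagged as bots."""
    return [p for p in posts if p["id"] not in observed_bot_ids]

thread = [
    {"id": "a1b2c3", "text": "legit research"},
    {"id": "d4e5f6", "text": "trigger-word spam"},
]

flag_bot("d4e5f6")
print([p["text"] for p in visible_posts(thread)])  # ['legit research']
```

Since board IDs are per-thread, the blocklist only holds for the current bread; a bot must be re-spotted in each new thread, which is why the observable tells listed above matter.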