Anonymous ID: e5441d Dec. 25, 2020, 8:23 p.m. No.12178575   🗄️.is 🔗kun
  • False evidence. Whenever possible, introduce new facts or clues designed and manufactured to conflict with opponent presentations – as useful tools to neutralize sensitive issues or impede resolution. This works best when the crime was designed with contingencies for the purpose, and the facts cannot be easily separated from the fabrications.

  • Use a straw man. Find or create a seeming element of your opponent's argument which you can easily knock down to make yourself look good and the opponent look bad. Either make up an issue you may safely imply exists based on your interpretation of the opponent/opponent arguments/situation, or select the weakest aspect of the weakest charges. Amplify their significance and destroy them in a way which appears to debunk all the charges, real and fabricated alike, while actually avoiding discussion of the real issues.

  • Question motives. Twist or amplify any fact which could be taken to imply that the opponent operates out of a hidden personal agenda or other bias. This avoids discussing issues and forces the accuser onto the defensive.

  • Alice in Wonderland Logic. Avoid discussion of the issues by reasoning backwards, or with an apparent deductive logic which forbears any actual material fact.

  • Fit the facts to alternate conclusions. This requires creative thinking unless the crime was planned with contingency conclusions in place.

  • Change the subject. Usually in connection with one of the other ploys listed here, find a way to side-track the discussion with abrasive or controversial comments in hopes of turning attention to a new, more manageable topic. This works especially well with companions who can 'argue' with you over the new topic and polarize the discussion arena in order to avoid discussing the more key issues.

  • Manufacture a new truth. Create your own expert(s), group(s), author(s), or leader(s), or influence existing ones willing to forge new ground via scientific, investigative, or social research or testimony which concludes favorably. In this way, if you must actually address issues, you can do so authoritatively.

  • Create bigger distractions. If the above does not seem to be working to distract from sensitive issues, or to prevent unwanted media coverage of unstoppable events such as trials, create bigger news stories (or treat them as such) to distract the multitudes.

 

Falls under the aforementioned. This is the playbook and you are following it to a T.

Anonymous ID: e5441d Dec. 25, 2020, 8:23 p.m. No.12178583   🗄️.is 🔗kun   >>8628

Don't get suckered into wasting your time. Hardened anons should already know this shit; this is more of a crash course for new visitors.

 

How to Quickly Spot a Clown

 

They will:

  • Attempt to get a divisive or emotional response from you to derail research.

  • Concern troll and spam copypasta to contradict confirmed findings.

  • Employ faux debate tactics: generalizations, gaslighting, projection, misdirection, false equivalences, confusing correlation with causation, appeals to authority, transference, false precepts, personal attacks, straw men, red herrings, etc.

  • Promote disingenuous social ethics such as doxxing, "reverse psychology", or spreading propaganda.

  • Promote tactics that are unethical, illegal or involve violence outside the scope of the Law.

  • Employ Fear, Uncertainty and Doubt to dissuade research.

 

Topic sliding

If information of a sensitive nature has been posted on a discussion forum, it can be quickly removed from public view by topic sliding. In this technique, a large number of unrelated posts, or posts aimed at diluting the information presented, are submitted in an effort to trigger a topic slide that literally pushes the content out of view. Operators can control several fake UIDs via the bots they make use of; these can also be called upon in the other techniques to mask the operator's intent from the users at large. Although it is difficult or impossible to censor the posting, it is now lost in a sea of unrelated and bogus postings.

 

Seeding bad information

Operatives will insert flawed or bogus information from time to time as an ongoing tactic, depending on their skill set and the needs of their mission. Their most common ruse is providing information or evidence backed by bad source material, in the hope that the "source of the source" is never checked. This serves several objectives, mainly resource consumption, evidence pollution, discouragement, and misdirection.

 

Astroturfing consensus

This technique attempts to build a manufactured consensus around a flawed set of statements or compromised information. It is related to consensus cracking, where false evidence is injected in an attempt to dispute or discredit the current consensus and push it towards the desired false consensus. Misleading and false evidence and information are often salted into the evidence pool with the aim of impeding organic consensus building while also poisoning the available information and evidence.

 

Cultivating tacit approval

(The legal term for this is 'silent agreement.') Operators try to attain this state by convincing the user population to ignore, or not respond to, bad information or false assertions, in a bid to reduce push-back against the above-mentioned tactics. It's worth noting that the reply filtering mechanism of the boards (which currently can't be disabled without code changes from the site admin) is used as a weapon of sorts in this tactic: filtering with software prevents anons from defending against seeding bad information and astroturfing consensus. This is why the operators push so hard to condition anons into filtering material they disagree with.

 

How to spot a Clown's bot

NEVER DIRECTLY ENGAGE A BOT

Doing so just wastes bread with their responses and hands them a target to programmatically lock onto without handler interaction. The easiest way to foil the bots seems to be pointing them out by proxy: copy/paste the user's ID as a quoted reference, intentionally break its post link in your response, and/or answer it with a meme, until they start misfiring because they can't parse the response to lock onto a target correctly. Doing this can also make the bots look artificially erratic and easier for other anons to spot.

Anonymous ID: e5441d Dec. 25, 2020, 8:25 p.m. No.12178600   🗄️.is 🔗kun

Cultivating tacit approval

(The legal term for this is 'silent agreement.') Operators try to attain this state by convincing the user population to ignore, or not respond to, bad information or false assertions, in a bid to reduce push-back against the above-mentioned tactics. It's worth noting that the reply filtering mechanism of the boards (which currently can't be disabled without code changes from the site admin) is used as a weapon of sorts in this tactic: filtering with software prevents anons from defending against seeding bad information and astroturfing consensus. This is why the operators push so hard to condition anons into filtering material they disagree with.

 

>>12178567

 

 

>>12178422, >>12178428, >>12178436, >>12178440, >>12178448, >>12178484, >>12178499, >>12178539, >>12178572

White girls fuck dogs needs to be removed as BO/BV again, and watch what fucking happens to the board, BO. If there are actually problems, that is…

Anonymous ID: e5441d Dec. 25, 2020, 8:26 p.m. No.12178618   🗄️.is 🔗kun   >>8624

23. Create bigger distractions. If the above does not seem to be working to distract from sensitive issues, or to prevent unwanted media coverage of unstoppable events such as trials, create bigger news stories (or treat them as such) to distract the multitudes.

Anonymous ID: e5441d Dec. 25, 2020, 8:28 p.m. No.12178638   🗄️.is 🔗kun

Seeding bad information

Operatives will insert flawed or bogus information from time to time as an ongoing tactic, depending on their skill set and the needs of their mission. Their most common ruse is providing information or evidence backed by bad source material, in the hope that the "source of the source" is never checked. This serves several objectives, mainly resource consumption, evidence pollution, discouragement, and misdirection.

 

Astroturfing consensus

This technique attempts to build a manufactured consensus around a flawed set of statements or compromised information. It is related to consensus cracking, where false evidence is injected in an attempt to dispute or discredit the current consensus and push it towards the desired false consensus. Misleading and false evidence and information are often salted into the evidence pool with the aim of impeding organic consensus building while also poisoning the available information and evidence.

''Currently as has been ongoing… of course…''