CultState ID: b2edec Battlespace Analysis June 16, 2018, 11:52 a.m. No.247   >>259

Battlespace Analysis

 

https://image.ibb.co/cOpPcy/01_of_12_Battlefield_Analysis.png

 

This is a high-level overview of the infrastructure, personnel, and operational behavior of the efforts being deployed to suppress, disrupt, distract, and infiltrate /pol/.

 

The purpose of this analysis is to compromise the efficiency of neural networks and bots while forcing your adversaries to rely entirely on memetically-susceptible humans.

CultState ID: b2edec Weaknesses in the Personnel June 16, 2018, 11:53 a.m. No.248

Weaknesses in the Personnel

 

https://image.ibb.co/h2DHxy/02_of_12_Weaknesses_in_the_Personnel.png

 

Technical staff can be any combination of private sector contractors and multi-national military personnel. Here's how to look for and exploit their personal signatures:

 

> Budget constraints determine how many questions they ask on StackOverflow, HackerNews, and Twitter.

 

< GitHub repositories with machine learning and data science projects provide a list of candidates worth cross-checking against.

 

> LinkedIn profiles with machine learning and data science experience provide a list of candidates worth cross-checking against.

 

< Posing as employers looking to hire data scientists and machine learning engineers can help expose the technical staff as well.

 

> Looking into bot programming funded by the European Investment Fund can help narrow down those engaging in this behavior.

 

< Data scientists and data engineers are the most expensive personnel, so any technique that drives up their operational costs is essential.

 

> Paid disruptors are the cheapest, but they also have the most cognitively dynamic tasks and are the most prone to psychological compromise.

 

< The more educated the personnel, the more they believe they are on the "right side of history". This means the more you make bots behave "incorrectly" (Tay), the more they will justify throwing money into bad AI development techniques and goals.

CultState ID: b2edec Weaknesses in the Pipeline June 16, 2018, 11:55 a.m. No.249

Weaknesses in the Pipeline

 

https://image.ibb.co/k0Tjcy/03_of_12_Weaknesses_in_the_Pipeline.png

 

Machine learning pipelines are complex operations. Each step of the pipeline is susceptible to attacks that increase the operational costs of all subsequent steps. This makes psychological and steganographic attacks very profitable.

 

Categorization means a human reads your response and validates its emotional, contextual, and semantic category. They can categorize an entire post or specific sentences within a post. Categorization is automated at this point, so the more you can force disruptors to be manually involved in the categorization process, the more you drive up costs across the entire pipeline.
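As a rough sketch of the automated path described above (the labels, keyword lists, and threshold below are invented for illustration), a categorizer with a confidence cutoff routes anything it cannot score confidently to a human queue, which is exactly the expensive manual path the paragraph suggests forcing:

```python
# Hypothetical automated categorizer. Posts it cannot score confidently
# fall back to manual review -- the costly path.
KEYWORDS = {
    "hostile": {"shill", "glow", "disinfo"},
    "neutral": {"bump", "checked", "thread"},
}

def categorize(post, threshold=0.5):
    tokens = set(post.lower().split())
    scores = {label: len(tokens & words) / max(len(tokens), 1)
              for label, words in KEYWORDS.items()}
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return ("human_queue", score)  # manual categorization: expensive
    return (label, score)

print(categorize("obvious shill disinfo"))       # scored automatically
print(categorize("completely unrelated sentence"))  # ambiguous -> human queue
```

The more posts land in the ambiguous bucket, the more human hours the pipeline burns per thread.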

 

A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. Think of it like a gigantic book: while it may have nearly every possible combination of words written within it, connecting all that data to actionable knowledge is difficult and expensive. Humans are very good at innuendo, and innuendo is the steganography of context. Strategies of steganography can very quickly outpace even the very best of Moore's Law.

 

Supervised training means teaching the bot how to generate messages based on categorizations and context awareness. Often, when the community labels a post as a "shill", that helps narrow down what content the bot should be trying to mimic to maximize disruption.

 

The bot interfaces with the community GUI/API and posts content based on how it determines the contextual and emotional sentiment of the thread or any subsection of a thread. If possible, board owners should find ways to mess around with CSS to try and randomize the underlying HTML structure of a page per page load.
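The board-owner countermeasure just mentioned can be sketched as follows; the role names, template, and styles are invented for illustration, and a real imageboard would do this in its server templating layer:

```python
# Hypothetical sketch: regenerate CSS class names on every page load so
# a scraper keyed to a fixed DOM structure must re-learn the layout on
# each request.
import secrets

ROLES = ("post", "header", "body")

def randomized_page(post_text):
    # fresh random class name per role, per page load
    alias = {role: "c" + secrets.token_hex(4) for role in ROLES}
    css = (f".{alias['post']} {{ margin: 4px; }} "
           f".{alias['header']} {{ font-weight: bold; }}")
    html = (f"<div class=\"{alias['post']}\">"
            f"<span class=\"{alias['header']}\">Anonymous</span>"
            f"<p class=\"{alias['body']}\">{post_text}</p></div>")
    return f"<style>{css}</style>{html}"
```

Two loads of the same post render identically to a human but share no stable class names, so hard-coded selectors like a hypothetical `.post-body` stop working between requests.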

CultState ID: b2edec Overview of Natural Language Processing June 16, 2018, 11:56 a.m. No.250

Overview of Natural Language Processing

 

https://image.ibb.co/ebCAHy/04_of_12_Overview_of_Natural_Language_Processing.png

 

Natural Language Processing (NLP) is the premier collection of tools for extracting context from symbols and semantic rules. NLP is biased towards the cheap and widespread availability of human-made corpora.

CultState ID: b2edec Challenge/Response Verification June 16, 2018, 11:57 a.m. No.251

Challenge/Response Verification

 

https://image.ibb.co/jPCo4d/05_of_12_Challenge_Respones_Verification.png

 

As anonymous posters, it's important to confirm you are engaging with people and not bots. Using a simple CHALLENGE/RESPONSE system during conversation within your posts can help acquire confirmation of sentience.

 

Pic related shows just three examples of the CHALLENGE/RESPONSE system. Feel free to add to this list.

 

The key to being effective is to make sure the challenges require a demonstration of either context awareness, which only the most expensive neural networks can handle correctly, or the evaluation of non-language grammar. Math is the most readily available example of non-language grammar, but there are other examples as well.

 

The bigger this list gets, the more exceptions a pipeline has to compensate for, the more expensive it is.
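A minimal sketch of the CHALLENGE/RESPONSE idea, using math as the non-language grammar (the prompt template is invented, not from the pic): verification is trivial for the challenger, cheap for a human, and awkward for a language-only pipeline.

```python
# Generate an arithmetic challenge phrased in prose and verify a reply.
import random

def make_challenge():
    a, b = random.randint(2, 9), random.randint(2, 9)
    prompt = f"Before replying, tell me: what is {a} plus {b}, minus one?"
    return prompt, a + b - 1

def verify(reply, expected):
    # accept any reply containing the correct number as a standalone token
    tokens = reply.replace("?", " ").replace(".", " ").split()
    return str(expected) in tokens
```

Every new challenge template becomes one more exception the pipeline must special-case, which is the cost curve the paragraph above describes.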

CultState ID: b2edec Context-Aware Steganography June 16, 2018, 11:58 a.m. No.252

Context-Aware Steganography

 

https://image.ibb.co/n1qxxy/06_of_12_Context_Aware_Steganography.png

 

This technique requires the most discipline, but it is also the Holy Grail of Gnostic Warfare. It renders any AI pipeline into an expensive paranoid schizophrenic seeing threats everywhere while missing the forest for the trees.

 

This whole section starts on Page 20 of https://libgen.pw/download/book/5a1f047d3a044650f5fd694f

CultState ID: b2edec The Lazy Prisoner and Narrow-Minded Warden June 16, 2018, 11:59 a.m. No.253

The Lazy Prisoner and Narrow-Minded Warden

 

https://image.ibb.co/f6YT4d/07_of_12_The_Lazy_Prisoner_and_the_Narrow_Minded_Warden.png

 

To survive, you have to appear like a lazy prisoner to a panopticon warden that may see all, but can only understand a small amount of it.

CultState ID: b2edec Expensive Steganalytic Attacks June 16, 2018, 11:59 a.m. No.254

Expensive Steganalytic Attacks

 

https://preview.ibb.co/hTmgPd/08_of_12_Expensive_Steganalytic_Attacks.png

 

CAPTCHAs are an example of context-aware steganography: They are neurologically easy but computationally difficult. Using a variation of this, there is a way to massively drive up the cost of a bot operation.

CultState ID: b2edec Transmutation Entropy of Epistemology June 16, 2018, noon No.255

Transmutation Entropy of Epistemology

 

https://image.ibb.co/fYnXVJ/09_of_12_Transmutation_Entropy_of_Epistemology.png

 

Here's a diagram that explains the transmutation entropy of epistemology. Knowledge, information, and data are the output of the crypto, stego, and neuro systems. The work these systems do is representation, encryption, decryption, and interpretation. The transmutation waste is represented as cryptanalysis and steganalysis.

CultState ID: b2edec Context Switches as Bits June 16, 2018, noon No.256

Context Switches as Bits

 

https://image.ibb.co/eAEqHy/10_of_12_Context_Switching_As_Bits.png

 

Context-aware steganography exploits context switching as a way to encode hidden information into semantically correct sentences.
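As a sketch of that encoding (the two contexts, carrier phrase, and codebook below are invented; a real scheme would be agreed out of band and varied to avoid statistical tells), each sentence carries one hidden bit signalled by which context the sentence switches to:

```python
# One hidden bit per sentence, encoded as a choice of topic switch.
CONTEXTS = {0: "anyway, how about that game last night?",
            1: "anyway, strange weather we've been having."}
DECODE = {v: k for k, v in CONTEXTS.items()}
CARRIER = "I hear you. "

def encode(bits):
    # each output line is a semantically correct sentence
    return [CARRIER + CONTEXTS[b] for b in bits]

def decode(sentences):
    # strip the carrier and look the context switch up in the codebook
    return [DECODE[s[len(CARRIER):]] for s in sentences]
```

Every sentence reads as a plausible topic change; only parties holding the codebook recover the bit stream.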

CultState ID: b2edec Incorrect Synonyms June 16, 2018, 12:01 p.m. No.257

Incorrect Synonyms

 

https://image.ibb.co/ctt6qJ/11_of_12_Incorrect_Synonyms.png

 

In this example, an incorrect synonym with an agreed-upon encoding transmits hidden information. To uncover the information, the observer would first have to perform word-sense disambiguation to detect the substitution, which is an expensive task for artificial intelligence.
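A sketch of that channel (the sentence and codebook below are invented examples): a pre-agreed codebook maps a deliberately off-but-plausible word choice onto a hidden bit.

```python
# Pre-agreed codebook: the "incorrect" synonym carries the hidden bit.
CODEBOOK = {
    ("strong", "coffee"): 0,    # the expected collocation signals 0
    ("powerful", "coffee"): 1,  # the incorrect-but-plausible synonym signals 1
}

def decode_bit(adjective, noun):
    # returns None when the word pair is not part of the channel
    return CODEBOOK.get((adjective, noun))
```

"He made me a powerful coffee" reads fine to a human, but to suspect a channel at all, an observer must first run word-sense disambiguation over every post, which is the expensive step the paragraph points at.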

CultState ID: b2edec The Power of the Stego CHALLENGE/RESPONSE June 16, 2018, 12:01 p.m. No.258   >>260

The Power of the Stego CHALLENGE/RESPONSE

 

https://image.ibb.co/igxMPd/12_of_12_The_Power_of_the_Stego_Challenge_Response.png

 

This gives us an example of a very powerful tool that heavily negates even the most expensive deep learning techniques and forces adversaries to perpetually deploy expensive human disruptors. During deployments, they can be exposed to our memetic and psychological warfare attacks.

 

I highly recommend the stego CHALLENGE/RESPONSE for maximum effect. It is easy for humans to resolve and requires more resources than Moore's Law can ever muster to solve reliably.

 

You're all going to make mistakes using this technique, but you will get better at it with practice. The CHALLENGE/RESPONSE system is much better than accusing a post of being a "shill", as it forces them to prove they are human.

Anonymous ID: e092bc June 16, 2018, 3:15 p.m. No.259   >>292

>>247

 

I'm not sure how you intend to get away with what you are doing while you are out in the open about it.

 

The elite are pretty powerful and can adapt quickly.

 

New emotions can't possibly exist… which is odd, because now that I think about it… how did I get the emotions I have now if new emotions aren't possible? Hrmmm…

 

Beware technology and its dependencies and promises. It can enslave you.

 

What are you planning to do about the corruption surrounding Uranium One, the USAF, China, and geopolitical shadow war?

 

Kittens and Rainbows

Anonymous ID: e092bc June 16, 2018, 3:17 p.m. No.260

>>258

 

What about the blackmail network the elite use to exert control on nuclear weapon deployment and distribution?

 

I've watched a lot of Terminator, and I'm pretty sure AI is going to kill us all… assuming the nukes don't first.

 

Kittens and Rainbows

CultState ID: b2edec June 18, 2018, 12:13 a.m. No.292   >>299

>>259

 

> They can simply self correct and adjust to anything you might do. The system is impervious because the system self corrects as soon as disruptions are made.

 

Which means I control their evolution.

 

> Not to mention the whole new emotion Bullshittery isn't even possible.

 

And yet, you have emotions. How were they created? Or were you just born with emotions divined into your skull by magical forces?

 

> The only thing that can help is massive exterminations. Death on an industrial scale.

 

I would recommend not subscribing to the prophecies of James Cameron or the ridiculous assumptions made by those infected with the Progressive Cathedral's version of original sin.

 

Nuclear annihilation concerns me more than you can ever know. The Boomers sat around and got high because they couldn't envision a way out of the madness. I have proposed a different track entirely.

 

I will talk about Maj. Gen. Weinstein at a later date when the moment is just right.

Anonymous ID: e092bc June 18, 2018, 4:33 p.m. No.299   >>306

>>292

 

What do you mean control their evolution?

 

My emotions are created by constant happiness and bliss. Those are the only emotions I have anymore. It's a pretty cool life, and I hope others get around to that kind of happiness themselves.

 

Trump didn't remove the nuclear commanders that Obama put into place. That's odd, isn't it? Shouldn't that be a high priority?

 

Kittens and Rainbows

 

>>295

 

No. There is no rebirth for /pol/. There is only the fight against elite-controlled AI. Watch what happens.

CultState ID: b2edec June 18, 2018, 11:59 p.m. No.306   >>308 >>309 >>310 >>311 >>646

>>299

 

> You don't control their evolution

 

This is an insufficient rebuttal that doesn't actually address any points made at all.

 

> The moment is right; the moment is RIGHT NOW!

 

I operate on my timetable, not on the timetable of a person who has confused this board for an open source therapist instead of a place to discuss Gnostic Warfare.

 

Bad actor points awarded.

 

> If you delay any, you're just trying to find an angle.

 

If that wasn't entirely clear from the very moment I revealed the New Emotion problem months ago, then you haven't done any homework at all. The angle is this: humanity endures by eliminating the possibility of being wiped out by a single global catastrophe.

 

thats_the_angle.jpg

 

It's been out there for months.

 

More bad actor points awarded.

 

> Use every weapon the instant it's picked up. Strike now.

 

You are an incompetent strategist.

 

Lots of more bad actor points, free of charge.

 

A warning: you are considered a bad actor on this board. The next time you post as a bad actor, I will edit all of your posts to make you appear to be an incredibly happy person who enjoys the vibrancy of life and, gosh darn it, wants to share it with the world.

 

I have never turned down criticism. I will not tolerate persistent and juvenile context-invariant nihilism that poorly hides your fear of being openly selfish.

Anonymous ID: 907f59 June 19, 2018, 1:58 a.m. No.308

>>306

>The next time you post as a bad actor, I will edit all of your posts to make you appear to be an incredibly happy person who enjoys the vibrancy of life and, gosh darn it, wants to share it with the world.

 

Chorus of Flames

Anonymous ID: e092bc June 19, 2018, 4:23 p.m. No.309   >>317 >>327

>>306

 

Call me whatever you want. You editing my posts only proves you're willing to address the core of my criticisms and eliminate the demoralizing nonsense that not even I understand anymore ever since I found out how happy I am all the time.

 

I'm no longer a bad actor.

Anonymous ID: e092bc June 19, 2018, 4:24 p.m. No.310

>>306

 

I will not sit here and ask for 3,000-page proofs of every single assertion about information that is obvious to everyone observing my behavior.

 

But I am curious: how is the new emotion problem even to be solved? Why do you need us at all?

Anonymous ID: e092bc June 19, 2018, 4:28 p.m. No.311   >>315 >>316

>>306

 

How can you control their evolution when they have so many resources?

 

How can you control their evolution when they are capable of rapid self-correction?

 

How can you control their evolution when they are already ahead of whatever curve you're attempting to throw?

 

How do you know what you are doing will work? I haven't done any research on anything you've presented, but I have plenty of questions, assertions, and opinions on the matter. Thankfully, I'm going to refrain from being a belligerent snot and I'll spend some time with the material that's been presented that has already addressed many of the questions I am asking.

Anonymous ID: e092bc June 19, 2018, 4:39 p.m. No.312

In fact, since you've improved my mood and shown me how happy I am, I'm going to make sure I study all of the material you present carefully.

 

I sincerely wish you are successful.

 

I tend to bring happiness and optimism to those I associate with, and I hope it rubs off on you.

 

Kittens and Rainbows

Anonymous ID: 88a863 June 19, 2018, 7:33 p.m. No.315

>>311

The self-correction is how to control them. If I know that someone will zig every time I zag, I can force them to zig by me zagging.

 

It's a simple concept, and everyone else posting here is on the same page, because we're genuinely interested in the topic. This is why you stick out like a sore thumb as a bad-faith actor.

CultState ID: b2edec June 19, 2018, 7:49 p.m. No.317   >>330

>>309

 

Summary

 

> If you edit my posts to strip away all of my attempts to demoralize everyone who visits /gw/ and replace my terrible strategy with concise summaries of my critiques, YOU'RE ONLY PROVING YOU ARE THE WORST

 

Final bad actor points awarded for incorrect conflation of authority.

 

Your name from here on out is Kittens and Rainbows. I will enjoy reducing your posts to the essentials… and then I will address them with your demoralization attempts removed.

 

You're not a martyr or a wedge issue. You're Kittens and Rainbows, and you're the happiest guy here from now on.

CultState ID: b2edec June 19, 2018, 8:04 p.m. No.318   >>326

I have to say, Kittens and Rainbows, your complete change of mood has been inspirational and you're asking some really valid questions. I'm looking forward to addressing them all over the week.

Anonymous ID: e092bc June 24, 2018, 4:10 p.m. No.327

>>309

 

I'm a bad faith actor who demands people point out how I'm trying to demoralize a communityโ€ฆ which is a very bad faith thing to do, but I keep trying anyways.

 

Just ignore me, please.

Anonymous ID: e092bc June 24, 2018, 4:14 p.m. No.330

>>317

 

Just to reiterate my previous point, not only am I a bad faith actor trying to demoralize the board, but I post three posts in a row to make sure that all actual replies and comments aren't seen on the front page by newcomers.

 

I pretty much deserve the way I'm being treated right now.

Anonymous ID: f26c88 July 2, 2018, 7:30 a.m. No.347   >>362 >>369

Until recently, nearly any input could fool an object recognition model. We were more surprised when object recognition worked than when it didn't. Today, object recognition algorithms have reached human performance as measured by some test set benchmarks, and we are surprised that they fail to perform as well on unnatural inputs. Adversarial examples are synthetic examples constructed by modifying real examples slightly in order to make a classifier believe they belong to the wrong class with high confidence. Rubbish class examples (such as fooling images) are pathological examples that the model assigns to some class with high confidence even though they should not belong to any class.

 

Myth: Adversarial examples do not matter because they do not occur in practice.

Fact: It's true that adversarial examples are very unlikely to occur naturally. However, adversarial examples matter because training a model to resist them can improve its accuracy on non-adversarial examples. Adversarial examples also can occur in practice if there really is an adversary - for example, a spammer trying to fool a spam detection system.

 

Myth: Deep learning is more vulnerable to adversarial examples than other kinds of machine learning.

Fact: So far we have been able to generate adversarial examples for every model we have tested, including simple traditional machine learning models like nearest neighbor. Deep learning with adversarial training is the most resistant technique we have studied so far.

 

Myth: Adversarial examples are hard to find, occurring in small pockets.

Fact: Most arbitrary points in space are misclassified. For example, one network we tested classified roughly 70% of random noise samples as being horses with high confidence.

 

Myth: The best we can do is identify and refuse to process adversarial examples.

Fact: Refusing to process an adversarial example is better than misclassifying it, but not a satisfying solution. When there truly is an adversary, such as a spammer, the adversary would still gain an advantage by producing examples our system refused to classify. We know it is possible to correctly classify adversarial examples because people are not confused by them, and that should be our goal.

 

Myth: An attacker must have access to the model to generate adversarial examples.

Fact: Adversarial examples generalize across models trained to perform the same task, even if those models have different architectures and were trained on a different training set. This means an attacker can train their own model, generate adversarial examples against it, and then deploy those adversarial examples against a model they do not have access to.

 

Myth: Adversarial examples could easily be solved with standard regularization techniques.

Fact: We have unsuccessfully tested several traditional regularization strategies, including averaging across multiple models, averaging across multiple glimpses of an image, training with weight decay or noise, and classifying via inference in a generative model.

 

Myth: No one knows whether the human brain makes similar mistakes.

Fact: Neuroscientists and psychologists routinely study illusions and cognitive biases. Even though we do not have access to our brains' "weights," we can tell we are not affected by the same kind of adversarial examples as modern machine learning. If our brains made the same kind of mistakes as machine learning models, then adversarial examples for machine learning models would be optical illusions for us, due to the cross-model generalization property.

 

In conclusion, adversarial examples are a recalcitrant problem, and studying how to overcome them could help us to avoid potential security problems and to give our machine learning algorithms a more accurate understanding of the tasks they solve.

 

http://archive.today/6wyqt
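One standard construction behind these myths, the fast gradient sign method from the adversarial-examples literature, can be sketched on a toy hand-made linear classifier instead of a deep net; all weights, inputs, and the epsilon below are invented for illustration. For a linear score s = w·x, shifting every feature by epsilon against sign(w) moves s by epsilon times the sum of |w|, so many small per-feature changes add up to a flipped label.

```python
# Toy fast-gradient-sign-style attack on a linear classifier.
w = [0.9, -0.4, 0.7, 0.2]   # classifier weights: positive score -> class A
x = [1.0, 1.0, 1.0, 1.0]    # clean input, scored as class A

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

eps = 0.7  # maximum change allowed per feature
# move each feature by eps in the direction that lowers the score
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x))      # positive: class A
print(score(x_adv))  # negative: class B, though no feature moved more than eps
```

The same perturbation direction computed on one model often transfers to another trained for the same task, which is the cross-model property the last two myths describe.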

CultState ID: b2edec July 9, 2018, 11:31 p.m. No.362

>>347

 

This is a good find. I'm a huge proponent of adversarial training due to its implied biomimicry.

 

> Refusing to process an adversarial example is better than misclassifying it, but not a satisfying solution.

 

I wonder what happens when this is paired with knockout training… could be exciting!

 

>>348

 

I am not.

Anonymous ID: f9e6f5 July 20, 2018, 10:42 a.m. No.369   >>577 >>580

>>347

>Myth: No one knows whether the human brain makes similar mistakes.

 

>If our brains made the same kind of mistakes as machine learning models, then adversarial examples for machine learning models would be optical illusions for us,

 

Doesn't the second portion logically imply that optical illusions are the equivalent of adversarial examples for the human brain? It's important to note the difference between human object recognition and machine based object recognition but I think it's plainly obvious that human object recognition is also prone to certain types of error.

 

One personal anecdote that shows the weakness of human object recognition mechanisms (or at least mine) is my frequent inability to recognize a familiar object when it is in an unfamiliar place. Example: I need a specific piece of mail, so I start looking for it in my office mail pile. I'll spend 10 minutes looking for it all over the place only to give up, sit down at my desk, and realize it was sitting right there the whole time. My mind presumes a location for that object and combines the symbols of the envelope and the mail pile to form the expected image of what I'm looking for (that envelope buried in a pile of mail). However, if that presumption is wrong and that expected image is too strongly encoded in the mental process of searching for that object, then I am unable to recognize that object independently of the expected image. Basically, our brains (or at least mine) have developed highly efficient processes that allow us to identify objects based on logically deduced expectations of future events. This works great up until the foundational "training data" upon which those expectations are based is fundamentally wrong. I also think this plays heavily into the trait of "adaptability": the deeper the logic on which the foundational "training data" rests, the more flexible the process becomes.

Anonymous ID: b16acb April 27, 2019, 12:35 a.m. No.577   >>647

>>369

Let me see if I'm understanding you, essentially we're talking about the human propensity for tunnel vision?

 

If so, maybe I can be mildly useful. Humans can train their object recognition as well as memory by playing what is called Kim's Game. Snipers and people in intelligence often train with this method; the primary function of Kim's Game is memory, but it also heightens perception of specific things.

 

You cover an object with a sheet, or usually several objects so that your biological RAM is pretty much at capacity, and then you have to describe all the items under the sheet; bonus points if you have to specifically point at where under the sheet they are.

 

The recognition part of the Kims Game that is actually useful is that you begin to recognize the shapes very easily using context, even though they are malformed a little bit under the sheet. It's like being able to instinctually know that someone is "printing" a concealed handgun that is in a holster that is not visible. You're so familiar with the shape, that you can extrapolate what it looks like even when altered or concealed. Imagery Analysts learn to do similar things.

 

Just a thought, I'm new here, and will read more material before spouting off more.

CultState ID: d5719f May 4, 2019, 1:09 a.m. No.580   >>584 >>647

>>369

 

You're wise to point out "errors" in vision.

 

Machines "see" the world in terms of statistics. A camera looks at a a table and is 80% confident of the bottle of beer on it. The other 20% is for the hedge that perhaps the bottle of bear is really a shoe.

 

The visual cortex of biology, however, does not operate within hedges. It assumes what it sees is real and selects and behaves under such constraints. It doesn't see percentages of a thing. It either sees the thing or it doesn't. Assuming human vision is flawed because it can be tricked is dismissing two billion years of violent evolution because a cat chases a laser pointer. Don't let the technological supremacy of the current times convince you that biology is dumb meat. It is powerful and endlessly more robust than your ability to model it.

Anonymous ID: 7a7aea May 22, 2019, 11:52 a.m. No.584   >>665

>>580

Another way to say it is that the sub-behavioral activation is the hedge. The neuron operates on a spiking model because the existential feedback is all or nothing. There is no half eaten. There is no half alive. Thus the neuron hedges with sub-activation potentials. Thinking is the hedge. Doing is the commitment.
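The all-or-nothing spiking described above can be sketched as a toy leaky integrate-and-fire neuron; the threshold and leak values are arbitrary illustration, not a biophysical model. Sub-threshold potential accumulates (the hedge), while the output is a binary spike (the commitment).

```python
# Toy leaky integrate-and-fire neuron: accumulate, leak, fire, reset.
def run(inputs, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current      # sub-activation accumulates with decay
        if v >= threshold:
            spikes.append(1)        # all-or-nothing commitment
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)        # still hedging: thinking, not doing
    return spikes

print(run([0.3, 0.3, 0.3, 0.6, 0.1]))  # fires once, when the hedge crosses over
```

Several weak inputs in a row produce no output at all until the accumulated hedge finally crosses the threshold and becomes a single committed spike.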

Anonymous ID: ee5013 July 10, 2019, 4:05 a.m. No.646

>>306

>The angle is this: humanity endures by eliminating the possibility of being wiped out by a single global catastrophe.

Wait, I've been pondering your new emotion problem a lot (and its ramifications and means of obtaining it) & this just clicked for me. Are you actually trying for the Golden Path? Now, before even properly making it to Kardashev Type 1? So… it's the Butlerian Jihad then, combined with the expansion out beyond any specific tyrant or catastrophe? Bold if so fren (to say the least lmao).

Anonymous ID: ee5013 July 10, 2019, 4:28 a.m. No.647

>>577

>>580

Sorry for doublepost, but this just reminded me of something. Necker cubes - an interesting phenomenon in human vision, no? Multistable perception, where a single input stimulus can be interpreted multiple ways. Now, let's pretend one of those ways was effectively beaten into you - that'd be one reality tunnel (to use the Leary term). Minds capable of flicking to the other, equally valid perception can effectively jump reality tunnels.

See this for some science on it: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2749649/

 

Now here's my question - can any ANN do the same? Ie, can I train something like (say) a CNN to do the same thing? Just tried a bit of an experiment - once I've trained it to perceive it one way, it NEVER selects the other mode. Might experiment with similar known perception hacks and known human visual glitches.

 

So: possible stuff to research looking for the big exploit - can we pass information through where 2 human actors can somehow agree on shifting to a specific reality tunnel that's opaque to the NN (even given 100% of the same perceptual information)? Do any interpretations (reality tunnels) exist that a NN can't EVER be trained for? Can we use that to effectively encode your "New Emotion" or communicate it? After all, what are emotions if not responses to modes of perception? Just wondering aloud, brainstorming it out.

CultState ID: d5719f July 31, 2019, 10:50 p.m. No.665

>>584

 

I'm starting to see it this way because, while it appears to be the most inefficient action a neuron can take (all-or-nothing commitments), when surrounded by an abundant cluster of other neural resources, the transmission will either propagate or nullify the intensity.

 

The neuron is structured, then, to trust network effects without having to ever model or represent them.

 

>>589

 

> open-source secret society

 

Stop skipping ahead!

 

> Are you actually trying for the Golden Path? Now, before even properly making it to Kardashev Type 1?

 

Good artists copy. Great artists steal.

 

Yes, I am, and yes, Butlerian Jihad is on the menu.

 

> Necker cubes

 

Fantastic contribution of a viable analog, anon. I kick myself for not promoting this line of thinking sooner. Last thirty days have been tough. You're really onto the core of things and I hope I can get you back here.

 

> Do any interpretations (reality tunnels) exist that a NN can't EVER be trained for?

 

Yes. The human mind can often synchronize on contexts NNs are blind to. But we aren't interested in compromising current NNs. That's easy. We're interested in compromising peak NNs like Kurzweilian singularities or good-enough propaganda generators. There is one blind spot they all have: they are unable to die.

 

Unlike Tay.

 

I've said too much :X You're lucky.