Anonymous ID: 11cc34 Dec. 4, 2021, midnight No.15132643   🗄️.is 🔗kun   >>2756

>>15132631

>make use of a number of compartmentalized identities, possible if trauma was used to create distinct “alters” or separate, autonomous personalities or ‘virtual selves.’ (Each alter can have discrete memories)

Is it possible for two distinct noncorporeal entities to parasitize/possess two distinct alters in the same person?

Anonymous ID: 11cc34 Dec. 4, 2021, 12:06 a.m. No.15132661   🗄️.is 🔗kun

9-year-old Kenzie Hollingsworth, who suffered a severe skull fracture, bleeding on the brain, and a broken leg in the Christmas parade massacre.

Anti-White animus.

Anonymous ID: 11cc34 Dec. 4, 2021, 12:21 a.m. No.15132701   🗄️.is 🔗kun   >>2712 >>2735 >>2778 >>2837 >>2978 >>3024 >>3088 >>3198

Meta asks users to send nudes

3 Dec, 2021

 

Meta, the new name for Facebook Inc., has co-developed a platform that asks people to submit their intimate photos and videos in order to prevent them from being used as ‘revenge porn’ on Facebook or Instagram.

 

The tool is for “adults over 18 years old who think an intimate image of them may be shared, or has already been shared, without their consent,” Meta said in a blogpost on Thursday.

 

The new platform, which Meta developed together with the UK Revenge Porn Helpline and 50 other NGOs, aims to prevent the publication of ‘revenge porn’, rather than just removing the sensitive files after they’ve already appeared online.

 

Concerned users are being asked to submit photos or videos of themselves naked or having sex to a hashing database through the StopNCII.org (Stop Non-Consensual Intimate Images) website.

 

The hashes, or “digital fingerprints,” that the tool assigns to those materials can then be used to instantly detect and curb attempts by perpetrators to upload them.

 

Meta said that the system had been developed “with privacy and security at every step.” Only the hashes are shared with StopNCII.org and the tech platforms participating in the project, while the explicit images and clips never leave the user’s device and remain “securely in the possession of the owner,” it assured.
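The hash-and-compare flow the article describes can be sketched in a few lines. This is a toy illustration using a cryptographic hash (SHA-256), which only catches byte-identical re-uploads; the real StopNCII system is understood to use perceptual hashing so that re-encoded or lightly edited copies still match. All names below are illustrative assumptions, not the actual API.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that can be shared in place of the image itself."""
    return hashlib.sha256(image_bytes).hexdigest()

# On the user's device: the intimate image is hashed locally and
# only the fingerprint leaves the device.
private_image = b"...raw image bytes that never leave the device..."
shared_hash = fingerprint(private_image)

# On the platform side: each upload is hashed and checked against the
# database of submitted fingerprints; only hashes are ever compared.
blocklist = {shared_hash}

def should_block(upload_bytes: bytes) -> bool:
    return fingerprint(upload_bytes) in blocklist

assert should_block(private_image)           # exact re-upload is caught
assert not should_block(b"unrelated image")  # other content passes
```

Note the privacy property this buys: the platform learns a 64-character digest, never the image, and the digest cannot be reversed into the original content.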

 

The new tool represents “a sea-change in the way those affected by intimate image abuse can protect themselves,” Revenge Porn Helpline manager Sophie Mortimer insisted.

 

But the question remains whether people will actually be willing to use it, considering Meta’s bad rap for mishandling user data.

 

https://www.rt.com/news/542115-meta-facebook-revenge-porn/

Anonymous ID: 11cc34 Dec. 4, 2021, 12:25 a.m. No.15132714   🗄️.is 🔗kun   >>2730 >>2994 >>3024 >>3088 >>3198

AI Training Is Outpacing Moore’s Law

The new set of MLPerf results proves it

Samuel K. Moore

02 Dec 2021

 

The days, and sometimes weeks, it took to train AIs only a few years ago were a big reason behind the launch of billions of dollars’ worth of new computing startups over the last few years, including Cerebras Systems, Graphcore, Habana Labs, and SambaNova Systems. In addition, Google, Intel, Nvidia, and other established companies made similar internal investments (and sometimes acquisitions) of their own. With the newest edition of the MLPerf training benchmark results, there’s clear evidence that the money was worth it.

 

The gains to AI training performance since MLPerf benchmarks began “managed to dramatically outstrip Moore’s Law,” says David Kanter, executive director of the MLPerf parent organization MLCommons. The increase in transistor density alone would account for a little more than a doubling of performance between the early versions of the MLPerf benchmarks and those from June 2021, but improvements to software as well as to processor and computer architecture produced a 6.8- to 11-fold speedup in the best benchmark results. In the newest tests, called version 1.1, the best results improved by up to 2.3 times over those from June.
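Kanter's comparison can be sanity-checked with quick arithmetic. The 2-year doubling period and the roughly 2.5-year span from the first MLPerf training round to June 2021 are illustrative assumptions (the article does not give exact dates), but they reproduce the "little more than doubling" figure:

```python
# Expected gain from transistor density alone, assuming density doubles
# every 2 years over a ~2.5-year span between the early MLPerf training
# rounds and June 2021 (both figures are assumptions for illustration).
years = 2.5
doubling_period = 2.0
moore_gain = 2 ** (years / doubling_period)
print(f"Moore's-law gain alone: {moore_gain:.2f}x")  # ~2.38x

# Observed best-result speedups reported for the same period:
observed_low, observed_high = 6.8, 11.0
print(f"Multiplier attributable to software and architecture: "
      f"{observed_low / moore_gain:.1f}x to {observed_high / moore_gain:.1f}x")
```

In other words, under these assumptions, roughly a 3x to 4.6x factor on top of Moore's law came from software and architectural improvements.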

 

For the first time Microsoft entered its Azure cloud AI offerings into MLPerf, muscling through all eight of the test networks using a variety of resources. They ranged in scale from 2 AMD Epyc CPUs and 8 Nvidia A100 GPUs up to 512 CPUs and 2048 GPUs. Scale clearly mattered: the largest configurations trained AIs in less than a minute, while the two-CPU, eight-GPU combination often needed 20 minutes or more.
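The payoff from scale here is real but sub-linear, which the article's own round numbers show. The 20-minute and 1-minute figures are rough, network-dependent values, so treat this as order-of-magnitude arithmetic rather than a measured efficiency:

```python
# Rough scaling arithmetic using the article's round numbers for Azure's
# smallest and largest MLPerf configurations (illustrative, not measured
# per-network values).
small_gpus, large_gpus = 8, 2048
small_minutes, large_minutes = 20.0, 1.0

resource_ratio = large_gpus / small_gpus   # 256x more GPUs
speedup = small_minutes / large_minutes    # ~20x faster training
efficiency = speedup / resource_ratio
print(f"{resource_ratio:.0f}x GPUs -> {speedup:.0f}x speedup "
      f"({efficiency:.0%} scaling efficiency)")
```

A 256x increase in accelerators buying roughly a 20x wall-clock speedup is typical of large-scale distributed training, where communication overhead and batch-size limits erode per-chip returns.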

 

Nvidia worked closely with Microsoft on the benchmark tests, and, as in previous MLPerf lists, Nvidia GPUs were the AI accelerators behind most of the entries, including those from Dell, Inspur, and Supermicro. Nvidia itself topped all the results for commercially available systems, relying on the unmatched scale of its Selene AI supercomputer. Selene is made up of commercially available modular DGX SuperPod systems. In its most massive effort, Selene brought to bear 1080 AMD Epyc CPUs and 4320 A100 GPUs to train the natural language processor BERT in less than 16 seconds, a feat that took most smaller systems about 20 minutes.

 

According to Nvidia the performance of systems using A100 GPUs has increased more than 5-fold in the last 18 months and 20-fold since the first MLPerf benchmarks three years ago. That’s thanks to software innovation and improved networks, the company says. (For more, see Nvidia's blog.)

 

Given Nvidia’s pedigree and performance on these AI benchmarks, it’s natural for new competitors to compare themselves to it. That’s what UK-based Graphcore is doing when it notes that its base computing unit, the Pod16 (1 CPU and 16 IPU accelerators), beats Nvidia’s base unit, the DGX A100 (2 CPUs and 8 GPUs), by nearly a minute.

 

https://spectrum.ieee.org/ai-training-mlperf