Anonymous ID: b73672 June 1, 2018, 1:13 p.m. No.1609087   🗄️.is 🔗kun   >>9111 >>9294 >>9557

How a Pentagon Contract Became an Identity Crisis for Google

 

Fei-Fei Li is among the brightest stars in the burgeoning field of artificial intelligence, somehow managing to hold down two demanding jobs simultaneously: head of Stanford University’s A.I. lab and chief scientist for A.I. at Google Cloud, one of the search giant’s most promising enterprises.

 

Yet last September, when nervous company officials discussed how to speak publicly about Google’s first major A.I. contract with the Pentagon, Dr. Li strongly advised shunning those two potent letters.

 

“Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues reviewed by The New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”

 

Dr. Li’s concern about the implications of military contracts for Google has proved prescient. The company’s relationship with the Defense Department since it won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.

 

It has fractured Google’s work force, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. The dispute has caused grief for some senior Google officials, including Dr. Li, as they try to straddle the gap between scientists with deep moral objections and salespeople salivating over defense contracts.

 

The advertising model behind Google’s spectacular growth has provoked criticism that it invades web users’ privacy and supports dubious websites, including those peddling false news. Now the company’s path to future growth, via cloud-computing services, has divided the company over its stand on weaponry. To proceed with big defense contracts could drive away brainy experts in artificial intelligence; to reject such work would deprive it of a potentially huge business.

 

The internal debate over Maven, viewed by both supporters and opponents as opening the door to much bigger defense contracts, generated a petition signed by about 4,000 employees who demanded “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

 

Executives at DeepMind, an A.I. pioneer based in London that Google acquired in 2014, have said they are completely opposed to military and surveillance work, and employees at the lab have protested the contract. The acquisition agreement between the two companies said DeepMind technology would never be used for military or surveillance purposes.

 

About a dozen Google employees have resigned over the issue, which was first reported by Gizmodo. One departing engineer petitioned to rename a conference room after Clara Immerwahr, a German chemist who killed herself in 1915 after protesting the use of science in warfare. And “Do the Right Thing” stickers have appeared in Google’s New York City offices, according to company emails viewed by The Times.

 

Those emails and other internal documents, shared by an employee who opposes Pentagon contracts, show that at least some Google executives anticipated the dissent and negative publicity. But other employees, noting that rivals like Microsoft and Amazon were enthusiastically pursuing lucrative Pentagon work, concluded that such projects were crucial to the company’s growth and nothing to be ashamed of.

 

https:// www.nytimes.com/2018/05/30/technology/google-project-maven-pentagon.html

Anonymous ID: b73672 June 1, 2018, 1:40 p.m. No.1609252   🗄️.is 🔗kun   >>9268 >>9269 >>9288 >>9293 >>9311 >>9399 >>9446 >>9557 >>9618

Coming to Grips with the Implications of Quantum Mechanics

 

The question is no longer whether quantum theory is correct, but what it means.

 

For almost a century, physicists have wondered whether the most counterintuitive predictions of quantum mechanics (QM) could actually be true. Only in recent years has the technology necessary for answering this question become accessible, enabling a string of experimental results—including startling ones reported in 2007 and 2010, and culminating now with a remarkable test reported in May—that show that key predictions of QM are indeed correct. Taken together, these experiments indicate that the everyday world we perceive does not exist until observed, which in turn suggests—as we shall argue in this essay—a primary role for mind in nature. It is thus high time the scientific community at large—not only those involved in foundations of QM—faced up to the counterintuitive implications of QM’s most controversial predictions.

 

Over the years, we have written extensively about why QM seems to imply that the world is essentially mental (e.g. 1990, 1993, 1999, 2001, 2007, 2017a, 2017b). We are often misinterpreted—and misrepresented—as espousing solipsism or some form of “quantum mysticism,” so let us be clear: our argument for a mental world does not entail or imply that the world is merely one’s own personal hallucination or act of imagination. Our view is entirely naturalistic: the mind that underlies the world is a transpersonal mind behaving according to natural laws. It comprises but far transcends any individual psyche.

 

The claim is thus that the dynamics of all inanimate matter in the universe correspond to transpersonal mentation, just as an individual’s brain activity—which is also made of matter—corresponds to personal mentation. This notion eliminates arbitrary discontinuities and provides the missing inner essence of the physical world: all matter—not only that in living brains—is the outer appearance of inner experience, different configurations of matter reflecting different patterns or modes of mental activity.

 

According to QM, the world exists only as a cloud of simultaneous, overlapping possibilities—technically called a “superposition”—until an observation brings one of these possibilities into focus in the form of definite objects and events. This transition is technically called a “measurement.” One of the keys to our argument for a mental world is the contention that only conscious observers can perform measurements.
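
As a minimal sketch of the formalism being described (a generic two-state example, not drawn from the article): a system that could be found in state A or in state B is written, before observation, as the superposition

|\psi\rangle = \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

and a measurement yields A with probability |\alpha|^2 or B with probability |\beta|^2 (the Born rule). Until that measurement, neither outcome is definite.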

 

Some criticize this contention by claiming that inanimate objects, such as detectors, can also perform measurements, in the sense described above. The problem is that the partitioning of the world into discrete inanimate objects is merely nominal. Is a rock integral to the mountain it helps constitute? If so, does it become a separate object merely by virtue of its getting detached from the mountain? And if so, does it then perform a measurement each time it comes back in contact with the mountain, as it bounces down the slope? Brief contemplation of these questions shows that the boundaries of a detector are arbitrary. The inanimate world is a single physical system governed by QM. Indeed, as first argued by John von Neumann and rearticulated in the work of one of us, when two inanimate objects interact they simply become quantum mechanically “entangled” with one another—that is, they become united in such a way that the behavior of one becomes inextricably linked to the behavior of the other—but no actual measurement is performed.
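
To make the von Neumann point concrete, here is the standard "premeasurement" step in textbook notation (the labels are illustrative, not taken from the article). A detector that starts in a ready state and interacts with a system in superposition simply becomes entangled with it:

(\alpha\,|A\rangle + \beta\,|B\rangle)\,|d_{\mathrm{ready}}\rangle \;\longrightarrow\; \alpha\,|A\rangle|d_A\rangle + \beta\,|B\rangle|d_B\rangle

The joint state on the right is still a superposition of both alternatives; the interaction correlates system and detector, but on its own it selects no single outcome.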

 

Let us be more specific. In the well-known double-slit experiment, electrons are shot through two tiny slits. When they are observed at the slits, the electrons behave as definite particles. When observed only after they’ve passed through the slits, the electrons behave as clouds of possibilities. In 1998, researchers at the Weizmann Institute in Israel showed that, when detectors are placed at the slits, the electrons behave as definite particles. At first sight, this may seem to indicate that measurement does not require a conscious observer.
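
For readers who want to see the contrast numerically, here is a toy sketch in Python (illustrative only, not a model of the Weizmann experiment; all parameters are arbitrary). Adding the two slit amplitudes coherently produces interference fringes, while adding their probabilities, which is what happens once which-path information exists, washes the fringes out.

# Toy double-slit sketch: coherent vs. incoherent addition of slit amplitudes.
import numpy as np

wavelength = 1.0          # arbitrary units
slit_separation = 5.0     # distance between the two slits
screen_distance = 100.0   # slit-to-screen distance
x = np.linspace(-30.0, 30.0, 601)   # positions along the screen

# Path length from each slit to each screen position
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)
a1 = np.exp(2j * np.pi * r1 / wavelength)   # amplitude via slit 1
a2 = np.exp(2j * np.pi * r2 / wavelength)   # amplitude via slit 2

interference = np.abs(a1 + a2) ** 2                 # no which-path info: fringes
which_path = np.abs(a1) ** 2 + np.abs(a2) ** 2      # paths marked: no fringes

print("fringe contrast, unobserved paths:", interference.max() - interference.min())
print("fringe contrast, marked paths:    ", which_path.max() - which_path.min())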

 

https:// blogs.scientificamerican.com/observations/coming-to-grips-with-the-implications-of-quantum-mechanics/

Anonymous ID: b73672 June 1, 2018, 1:56 p.m. No.1609386   🗄️.is 🔗kun

>>1609294

This is crazy, I believe, and who or what moral compass decides which direction it goes and when? Personally, I think this is a very dangerous path to follow. I also recall, during the Obama years, a mention of pre-cog possibly being used as well.

Anonymous ID: b73672 June 1, 2018, 2:05 p.m. No.1609468   🗄️.is 🔗kun   >>9773

How to hear (and delete) every conversation your Amazon Alexa has recorded

 

Digital assistants like Amazon Alexa and Google Assistant are designed to learn more about you as they listen, and part of doing so is to record conversations you’ve had with them to learn your tone of voice, prompts, and requests. Recently, this feature-not-a-bug has landed Amazon in a string of bizarre headlines. In March, users reported that their Echo speakers began spontaneously laughing, and last week, a family in Portland said their device recorded and sent conversations to a colleague without their knowledge. In both instances, Amazon says the devices were likely set off by false positives, i.e., sounds misheard as wake commands.

 

It’s not uncommon for smart speakers to pick up a random part of your everyday conversations and mistake it for a wake word (especially if you’ve changed the Alexa trigger to a more common word, like “Computer”). If you’re curious what Alexa has been hearing and recording in your household, here’s a quick way to check.
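
To see why a common trigger word invites more false positives, here is a toy sketch in Python (purely illustrative; it is not how Amazon's wake-word detection actually works). A fuzzy string match stands in for the acoustic similarity score a real keyword spotter would compute, and anything scoring above a threshold "wakes" the device; a real detector works on sound rather than spelling, but the trade-off is the same in spirit.

# Toy wake-word false-positive sketch (hypothetical threshold and word list).
from difflib import SequenceMatcher

def similarity(word, trigger):
    """Crude stand-in for an acoustic similarity score, in the range 0-1."""
    return SequenceMatcher(None, word.lower(), trigger.lower()).ratio()

THRESHOLD = 0.7   # hypothetical detection threshold
overheard = ["computers", "commuter", "competitor", "alexander", "electron"]

for trigger in ("Alexa", "Computer"):
    hits = [w for w in overheard if similarity(w, trigger) >= THRESHOLD]
    print(f"'{trigger}' would be falsely triggered by: {hits}")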

 

If you are uncomfortable having any particular recording in your Alexa history, you can delete it on an individual basis or go to Amazon’s Manage Your Content and Devices page to wipe it entirely. The company, of course, cautions that doing so “may degrade your Alexa experience.”

 

As noted above, Amazon keeps these recordings to personalize the Alexa experience to your household and uses them to create an acoustic model of your voice. While it does automatically create a voice profile for each new user it recognizes (or ones you’ve manually added), the company says it deletes acoustic models if it has not recognized any particular user for three years.
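
As a rough sketch of the retention rule just described (the three-year figure comes from the article; the code and names are hypothetical, not Amazon's):

# Hypothetical sketch: discard an acoustic model whose owner has not been
# recognized for three years.
from datetime import datetime, timedelta

RETENTION = timedelta(days=3 * 365)

def should_delete_acoustic_model(last_recognized: datetime, now: datetime) -> bool:
    return now - last_recognized > RETENTION

print(should_delete_acoustic_model(datetime(2015, 1, 1), datetime(2018, 6, 1)))  # True
print(should_delete_acoustic_model(datetime(2017, 1, 1), datetime(2018, 6, 1)))  # False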

 

For heavy Alexa users, going through all of these commands to find egregious conversations to delete might be too much work. But if you’re nervous about what the Echo has been listening to you say, it may be worth browsing to make sure nothing it has recorded is something you wouldn’t want transmitted elsewhere.

 

https:// www.theverge.com/2018/5/28/17402154/how-to-see-amazon-echo-alexa-conversation-recording-history-listen

Anonymous ID: b73672 June 1, 2018, 2:08 p.m. No.1609500   🗄️.is 🔗kun   >>9578

>>1609421

This is something I would usually post then, but I thought there might be some who have an interest, especially if they don't read the notables. I could always re-post it for the night crowd, if necessary.