DeepMind “Google Ethics Board” is an Oxymoron, and a Warning – Part 11 Google Unethics Series

The new term “Google Ethics Board” is an oxymoron, given Google’s unethics record. It is also a warning not to be ignored.

There’s a deep need for true ethics at Google now that Google has acquired DeepMind and its broadly-applicable deep-learning technology. That DeepMind pushed for an ethics board should trigger alarm bells. Pay attention. If past is prologue, Google will end up badly abusing this very powerful technology.


I.   Important Perspective

Google CEO Larry Page’s acquisitive growth strategy has a central theme of automating much of the economy: self-driving cars, home automation, energy monitoring, health care, online surveillance, military contracting, travel, shopping, payments, mobile, TV, etc.

What DeepMind’s self-learning algorithms offer Google is a much smarter and faster approach to automation throughout its ecosystem. That’s because DeepMind’s technology is so foundational that it is potentially applicable across nearly all of Google’s businesses. Thus it is a potential turbocharger of Google CEO Page’s automate-everything vision.

This development warrants a deep dive into the deep ethical implications of DeepMind’s self-learning technology.

To acquire DeepMind, a cutting-edge artificial intelligence company, Google “has agreed to establish an ethics board to ensure the artificial intelligence technology is not abused,” per The Information. In addition, DeepMind “pushed for… rules for how Google can and can’t use the technology,” also per The Information.

Why the insistence on a “DeepMind-Google ethics board”? Is the technology unethical and potentially very dangerous? Does DeepMind fear Google might use it unethically? Or both: do they fear the technology can be dangerous and that Google could abuse it?

No matter. It appears to be DeepMind CYA, because Google owns the board and employs everyone on it, and because Google relishes obliterating any limits or boundaries it encounters. Thus it will be a ceremonial “ethics board” for PR purposes, not a functional one with power or a conscience.

Effectively Larry Page will be Google’s decider of what’s ultimately ethical with deep-learning robotic intelligence.

  • “People don’t want to be managed,” Google CEO Larry Page told Steven Levy, author of In the Plex.
  • “At Google, we give the impression of not managing the company because we don’t really. It sort of has its own borg-like quality if you will. It sort of just moves forward,” Google Chairman Eric Schmidt told Gigaom in 2011.

Why would an ethics board be needed?

  • First, we need to understand DeepMind’s technology and the implications of combining it with Google’s approach to automation innovation.
  • Second, we need to remember why the term “Google Ethics Board” is an oxymoron. 


II.  Understanding the Ethical Implications of DeepMind’s Technology

Simply, DeepMind excels at “deep learning,” a branch of artificial intelligence that teaches computers to learn by themselves, to think more like people do.

Professor Yoshua Bengio, a deep-learning researcher at the University of Montreal, recently organized a conference where DeepMind presented its findings. His explanation of what DeepMind’s algorithms do in practice implicates at least two potentially big ethical issues with DeepMind’s technology.

First, he explained to The Information, “you never really understand why it produces an outcome,” because “it’s a complicated machine.” Implicit in this chilling admission is the ethical dilemma of applying a technology in the real world when one does not “really understand” how it works. If one cannot understand why or how a technology produces a certain result, one cannot predict what it will do, or know whether its outcomes will be acceptable, safe, or controllable in the marketplace.

Simply, the profound ethical problem here is that Google-DeepMind increasingly will have the capability to create a wide variety of automation algorithms for multitudes that it cannot fully understand, and hence cannot, by definition, be responsible for what those algorithms do in the marketplace with people’s privacy, security, safety, property, desires, etc.

  • Some can best understand the ethical problem with this approach to coding algorithms by seeing it as a form of coding Russian roulette.
  • Others may understand the ethical dilemma better by seeing it as potentially “creating a monster.”

Second, Professor Bengio’s explanation to Recode of DeepMind’s deep-learning approach spotlights a potentially even bigger ethical problem.

Per Recode: “Bengio said DeepMind was essentially using deep learning to train software to solve problems even when feedback is indirect and delayed. For the paper, DeepMind trained software to play video games without teaching it the rules, forcing it instead to learn through its own errors and poor scores. Bengio used an analogy to explain: It’s easier for a student to learn when a teacher corrects every answer on a test, but DeepMind is trying to get machines to learn when the only feedback is the grade.”

Tune into the chilling part: “without teaching it the rules.” Apparently DeepMind is experimenting with a fundamentally anarchic or chaotic approach to coding. Its algorithms can self-learn, teaching themselves to accomplish an end without any rules, i.e. instructions, ethics, laws, limits, boundaries, protections, human guidance, permissions, or controls. In essence, they can create algorithmic automation that needs nothing but an end.
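For readers who want to see concretely what learning “when the only feedback is the grade” means, here is a minimal, hypothetical sketch of the underlying technique, known as reinforcement learning. This is not DeepMind’s actual code (which used deep neural networks on Atari games); it is a toy five-state world in which the agent is never told the rules of movement, only the score it receives, yet it still teaches itself the winning behavior.

```python
import random

random.seed(0)

N_STATES = 5          # states 0..4; reaching state 4 yields the only reward
LEFT, RIGHT = 0, 1

def step(state, action):
    """The environment's rules; the learning agent is never shown them."""
    nxt = max(0, state - 1) if action == LEFT else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # the only feedback: the "grade"
    return nxt, reward, nxt == N_STATES - 1

# Q-table: the agent's learned estimate of each action's long-run value
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(200):
    s, done = 0, False
    while not done:
        # occasionally explore at random; otherwise exploit the current best guess
        if random.random() < eps:
            a = random.choice([LEFT, RIGHT])
        else:
            a = LEFT if Q[s][LEFT] > Q[s][RIGHT] else RIGHT
        s2, r, done = step(s, a)
        # update purely from observed score, with no knowledge of the rules
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the greedy policy moves right toward the reward in every state, illustrating the point at issue: the agent needs only an end (a high score), not instructions, to arrive at effective behavior.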

People know the big ethical problem here as the classic “the-ends-justify-the-means” unethic. It implies a do-whatever-it-takes approach; or an “I don’t care how you do it, or what you do, just give me the end result that I want fastest” approach.

The big risk of this strain of code unethics is that it will quickly metastasize throughout the Google ecosystem when exposed to Google’s well-established culture of unaccountability.

To further bring this ethical problem home in the Google unaccountability context, consider these quotes from Google’s leadership about Google’s approach to ethics management.

  • "We try not to have too many controls." "People will do things that they think are in the interests of the company. We want them to understand the values of the firm, and interpret them for themselves," said Nikesh Arora, Head of Google European Operations, to the FT in 2007.
  • "Google is melding a positive office culture with minimal accountability controls." The company's goal is "to think big and inspire a culture of yes" said Google Chairman Eric Schmidt per Washington Internet Daily in 2008.
  • “Sergey and Larry almost always decided to take the risk. They were pretty fearless,” said Doug Edwards, author of “I’m Feeling Lucky: Confessions of Google Employee Number 59,” to The Telegraph in 2011.

Third, where the big ethical problems proliferate exponentially for the DeepMind-Google combination is when Google applies this little-understood and thus unaccountable technology, with its “the-ends-justify-the-means” unethic, to: Google’s world’s-largest database of the world’s information; Google’s more than a billion search, Android, YouTube, Chrome, and Maps users; and roughly half of the world’s digital advertising business.

Google’s approach to automation innovation comes from Google co-founder and CEO Larry Page. And the Page “way” is well-known: speed and scale uber alles; moon-shot or don’t do it; integrate everything fastest; curate nothing; ask for forgiveness not permission; fail-fast, and launch-first-fix-later.

While Google claims DeepMind’s people will focus on just search, the overriding signature action of Larry Page’s tenure as CEO has been to “plus-it-all”, i.e. integrate everything for simplicity, efficiency, and convenience. Thus it is hard to imagine CEO Page quarantining DeepMind’s automation innovations for “ethical issues” when he automatically metastasizes most every automation idea of value throughout Google.

Now think of what DeepMind’s “the-ends-justify-the-means” software approach could do with access to all of the world’s information in Google’s completely centralized computer, where it does not have to respect sovereignty, national security, privacy, private property, trade secrets, confidential or sensitive information, consumer safety, ethics, societal norms, antitrust, conflicts of interest, etc.

Since we can’t understand how this “the-ends-justify-the-means” technology works, and we know Google culturally is averse to limits like permission or curation to prevent harms, we will only learn about the potential carnage this mutant-coding irresponsibility could cause after the harm is done, when it may be too poorly understood to be undone.

Fourth, let’s now consider how DeepMind’s cutting-edge deep learning capabilities could be put to use or allowed to be abused at Google because of Google’s hostility to privacy and property rights and its extraordinary laxness on cybersecurity. (Tellingly, we just learned from Cisco that 99% of all mobile malware targets Google’s Android mobile operating system, and from InfoSecurity that 92% of the top 500 Android apps carry security or privacy risk.)

Simply, Google is the playground of choice for cybercriminals because Google fundamentally does not believe in permission, authorization, or curation to prevent harms, because that would violate its vision of openness.  

Now given what we know from the above, think about the real, big, and near-term ethics problem of combining Google and DeepMind. Either purposefully or accidentally DeepMind’s smartcode insights could transmogrify into self-learning mutantcode, smartmalware or smartcyberweapons.

Cybercrime & Spying Potential: The potential near-term benefit of deep learning to cybercriminals, or sovereign spy agencies (including the NSA), would be the ability of a smart-botnet to mass-defeat CAPTCHA or similar security protocols that previously could tell computers and humans apart. (Remember Google got a record FTC fine in 2012 for mass-breaking into iPhones in order to bypass Apple users’ privacy and security preferences to serve Google ads.)

The same smartbot concept could be used by a hacker (as Chinese hackers already did to access U.S. cabinet officials’ private Gmail accounts) to more easily trick people into thinking the emails they receive are from an actual human friend they know, and not from a self-learning smartbot mimicking that friend’s behavior and style based on Google’s intimate and comprehensive profile of their private information.

In the wrong hands, think about how this automated, self-learning, deception algorithm could make it vastly more efficient and effective to steal people’s identities, break into their bank accounts, or break into their companies’ computer systems to steal intellectual property and trade secrets. Thus DeepMind’s technology is a cyber-criminal’s dream to defraud people faster, better, more efficiently and on a scale never possible before.

Think about smart-malware like self-learning-mutant viruses, worms, Trojans or botnets that can naturally teach themselves how to best evade detection, mitigation, and removal. Think about self-learning-mutant zero-day threats -- lots of them.

Think about DeepMind’s technology potential for the NSA, law enforcement or other intelligence entities domestic or foreign, to unleash perfect mass facial recognition, mass voice recognition, and other mass identification sweeps in order to perfect mass surveillance, dragnets or simultaneous arrests of target groups.

Now think about cyber-weapons like the Stuxnet worm and others, and how much more powerful, invasive and effective hackers could be if their hacking algorithms could self-learn and probe for weaknesses more cleverly, broadly and rapidly than ever before. Now think about cyber-weapons that their creators do not understand, so they cannot be sure they can totally control what those weapons do. In turn, that material weakness could short-circuit the sovereign capability needed to avoid ever-increasing cyber-war escalation with another cyber-superpower.

Think about the value of self-learning algorithms tasked to predict or influence Google’s customers to click on an ad or to prefer a Google product or service over a competitor’s. Now think if Google can use everything it knows about an individual, combined with a self-learning algorithm, to determine the best way to influence that user’s behavior. Now think about how that same self-learning algorithm could be tweaked to influence someone to vote in an election in a way that would advance Google’s government interests. This same self-learning influence algorithm could also be abused to corner different kinds of financial markets.
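To make concrete how such a self-learning influence algorithm works at its simplest, here is a hypothetical sketch of an epsilon-greedy “multi-armed bandit,” a standard technique for learning which of several ads draws the most clicks. The three ads and their click-through rates are invented for illustration; this is not Google’s actual ad system, just the bare mechanism of learning to maximize clicks from feedback alone.

```python
import random

random.seed(1)

# hypothetical click-through rates for three candidate ads (invented for illustration)
TRUE_CTR = [0.02, 0.05, 0.11]

counts = [0, 0, 0]        # times each ad has been shown
values = [0.0, 0.0, 0.0]  # running estimate of each ad's click rate
eps = 0.1                 # fraction of traffic reserved for exploration

for impression in range(20000):
    if random.random() < eps:
        ad = random.randrange(3)                     # explore: show a random ad
    else:
        ad = max(range(3), key=lambda i: values[i])  # exploit: show the best-looking ad
    # simulate whether this (hypothetical) user clicks
    clicked = 1.0 if random.random() < TRUE_CTR[ad] else 0.0
    counts[ad] += 1
    values[ad] += (clicked - values[ad]) / counts[ad]  # incremental mean update

best_ad = max(range(3), key=lambda i: values[i])
```

Within a few thousand impressions, the algorithm discovers on its own which ad best moves user behavior and funnels most traffic to it; swapping “click on an ad” for any other measurable response is the only change needed to repurpose it.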

Think about the value and unique power of adapting the Android and Chrome operating systems with DeepMind self-learning capabilities to allow Google to better track Android/Chrome devices and Internet-of-Things physical sensors in the home, car, work, play, or anywhere people go.

Think of Google’s value to the NSA or law enforcement of putting self-learning smartbots in Google’s ubiquitous tracking cookies and new AdID trackers to identify targeted actions, locations, faceprints, voiceprints, and other trackable activity internationally or domestically. With access to all the world’s information and self-learning algorithms with no limits, spy services and law enforcement could track and find not just individuals, but whole groups of people of most any size that fit a particular profile, interest, or concern.

Military Value & Potential: Recently the Pentagon provided more insight into DOD’s plans for robots on the battlefield. The Fiscal Times reported that Army General Robert Cone said robots would allow for “a smaller, more lethal, deployable and agile force… I’ve got clear guidance to think about what if you could robotically perform some of the tasks in terms of maneuverability, in terms of the future of the force.”

Consider how a DeepMind investor characterizes DeepMind’s team: “Think Manhattan Project for AI” (artificial intelligence), per Recode. (Remember The Manhattan Project was the U.S. military R&D effort to build an atomic bomb in WWII.) DeepMind’s investors appreciate the unprecedented power and broad applicability of its algorithmic innovations.

Think about how much faster Google X could turbocharge its soldier robot development for DOD if it could simply program the end that it wants, i.e. that a robot wins whatever conflict it is in. Fortunately, unlike Google, the U.S. military has a deep culture of duty, honor and integrity, and unlike Google, DOD is subject to competent oversight and has many effective checks and balances. Thus the DOD would not want robots that its commanders could not understand or ultimately control.

However, Google, with its culture of unaccountability, secretly could still apply this anarchic and chaotic approach to algorithm development to produce superior robots and robotic intelligence for its own sovereign purposes and market ambitions. 

(Tellingly, when Google sold Motorola to Lenovo, it retained the services of Regina Dugan, former head of DOD’s Defense Advanced Research Projects Agency (DARPA), and her team. Her advanced research group likely is destined to become part of Google X in order to grow Google’s burgeoning military contracting business.)

Think about DeepMind’s expertise in game simulations (i.e. war-game simulations) and how it could be a key differentiator in Google’s positioning to be an indispensable long-term U.S. military contractor, ostensibly to establish and maintain America’s military superiority in the robotic warfare of the future.

In particular, think about how DeepMind’s deep-learning technology fits in here in game simulation, i.e. war-game simulation, training, and command-control-communications and intelligence. Why wouldn’t DOD covet the best war-gaming algorithms that could teach themselves on the fly to ensure that our individual robots and drones make the fastest tactical decisions, and that they collectively operate most effectively, quickly, and seamlessly together as an integrated fighting force, like a coordinated hive that can learn and teach each other in real time?

Think about the very practical military benefit of DeepMind technology in warfare by improving military drones and missile guidance and survivability. The technology could help better identify distracting chaff or other countermeasures in order to ultimately evade them, and zero in on their intended target.

Medical and Bioengineering Implications: Lastly, think of the profound ethical implications of Google applying technology it can’t fully understand, governed by a “the-ends-justify-the-means” algorithm, to medical problems.

Remember Google launched Calico, its bio-engineering arm focused on combating aging, with a Time cover story entitled “Can Google Solve Death?” Think of applying “the-ends-justify-the-means” unethics of mutantcode to human cell or egg cloning to “fail fast” in order to learn which cells/eggs live, mutate, deform, die, or transmogrify? Or which animal or plant cells can best be mixed with human cells to slow aging and “solve death?”


III. Understanding Why the Term “Google Ethics Board” Is an Oxymoron

Consider what Mississippi Attorney General Jim Hood said in an 11-27-13 letter to Google CEO Larry Page:

  • “In my 10 years as attorney general, I have dealt with a lot of large corporate wrongdoers. I must say that yours is the first I have encountered to have no corporate conscience for the safety of its customers, the viability of its fellow corporations or the negative economic impact on the nation which has allowed your company to flourish.”

Consider that then-FTC Commissioner Thomas Rosch urged the FTC to sue Google for misrepresenting its core business to consumers.

  • Per Politico, Commissioner Rosch believes that Google’s claim in its privacy disclosure that it collects personal information to improve the user experience is a “half-truth.” He believed they’re really collecting the information to maximize their profits. “That was a fictitious claim,” Rosch said. “I knew we’d have to litigate that. It was clear to me, because that would wound them very deeply if they had to change that claim.”

Consider what the Rhode Island U.S. Attorney found and said about Google’s criminal behavior. In August 2011, Google quietly admitted to knowingly and repeatedly violating Federal criminal laws against the "unsafe and unlawful importation of prescription drugs" for several years in a criminal non-prosecution agreement. Google also paid a near record $500m criminal forfeiture penalty.

  • The Rhode Island U.S. Attorney who led the Google criminal probe said the evidence was clear that current Google CEO Larry Page “knew what was going on.”

Google also owns the worst rap sheet of any global 1000 company.

For those who want more evidence of Google’s unethics, please see the first ten research pieces of this Google unethics series below. They provide many dozens of examples and pieces of evidence that individually and collectively damn Google’s “don’t be evil” code of ethical conduct and its public claim to have “good values.”


IV.  Conclusion

In sum, Google’s acquisition of DeepMind arguably raises more profound ethical problems, more broadly, quickly, and seriously than most any single corporate acquisition ever.

Consider the following Google-DeepMind ethical equation: DeepMind uncontrollable code + a DeepMind “the-ends-justify-the-means” code unethic + Google’s unprecedented scale, scope, reach and ambition + Google’s culture of unaccountability = Ethical Disaster.

Forewarned is forearmed. 

Lastly, given how tough I have been on Google unethics, let me give Google the last word.

Mr. Page said in his 2012 Update from the CEO  to shareholders: 

  • "Love and Trust: We have always wanted Google to be a company that is deserving of great love. But we recognize this is an ambitious goal because most large companies are not well-loved;" and "We have always believed that it is possible to make money without being evil."  
  • “Happiness is a healthy disregard for the impossible: I believe that by producing innovative technology products that touch people deeply, we will enable you to do truly amazing things that change the world. It’s a very exciting time to be at Google, and I take the responsibility I have to all of you very seriously.”



Google Unethics Series

Part 1: Google’s Problem with having an Algorithm as a Soul [10-19-07]

Part 2: Did you know Google’s corporate mascot is a T-Rex named “Stan?” – the “moralasurus” (a satire) [11-26-07]

Part 3:  Why Google lost the formal [Oxford style] debate over its ethics [11-20-08]

Part 4:  Google’s Don’t Be Evil Commandments [4-9-12]

Part 5: Why Google Thinks it is Above the Law [4-17-12]

Part 6: Top Ten Untrue Google Stories [5-8-12]

Part 7: Google Mocks the World [7-20-12]

Part 8: Google’s Culture of Unaccountability in its own words [8-1-12]

Part 9: Is this the record of a trustworthy company? Google’s Consolidated Rap Sheet [6-16-13]

Part 10: The Public Evidence Google Violated the DOJ Non-Prosecution Agreement (& Ethics Commitment) [8-8-13]