Artificial intelligence should be regulated.
This is the stance of Alphabet (Google's parent company) CEO Sundar Pichai, expressed in an op-ed published in the Financial Times on Monday.
Pichai calls AI “one of the most promising new technologies,” but also highlights the risks of its careless use, citing historical examples in which breakthrough technology brought new problems along with it.
“History is full of examples of how technology’s virtues aren’t guaranteed,” he wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”
And while he’s not exactly alone in his opinion — the EU, the U.S., and Australia, among others, are currently drafting proposals for AI regulation — Pichai argues that how we approach AI regulation matters just as much as whether we regulate at all.
“The EU and the U.S. are already starting to develop regulatory proposals. International alignment will be critical to making global standards work. To get there, we need agreement on core values,” he wrote.
According to Pichai, Google’s internal AI principles, which the company published in 2018, as well as the company’s open source tools to test whether AI decisions conform to those principles, could help build a fair, universal regulatory framework for AI. He also mentions Europe’s GDPR rules as a “strong foundation” for AI regulation.
Large international tech companies have increasingly been calling for regulation in certain sensitive areas. Just last week, Facebook called for better regulation of political ads. As for AI, Tesla CEO Elon Musk has been calling for government regulation of the technology for several years, claiming in 2017 that “AI is a fundamental risk to the existence of human civilization.”