The Pentagon is teaming up with some of the biggest names in tech to combat hacks designed to mess with the automated systems we'll rely on in the near future. From a report: In February, DARPA issued a call for proposals for a new program. Like most DARPA projects, it has a fantastic acronym: Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD). It's a multimillion-dollar, four-year initiative aiming to create defenses for sensor-based artificial intelligence — think facial recognition programs, voice recognition tools, self-driving cars, weapon-detection software and more. Today, Protocol can report that DARPA has selected 17 organizations to work on the GARD project, including Johns Hopkins University, Intel, Georgia Tech, MIT, Carnegie Mellon University, SRI International and IBM's Almaden Research Center. Intel will lead one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks. Sensors that use AI computer vision algorithms can be fooled by what researchers call adversarial attacks: manipulations of the physical world that trick a system into seeing something other than what's there.
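To illustrate the idea behind adversarial attacks (this is not from the GARD program itself), here is a minimal sketch of the classic fast-gradient-sign method applied to a toy linear classifier: a small, carefully chosen perturbation of the input flips the model's prediction even though the input barely changes. The classifier, weights, and epsilon value are all made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Toy linear classifier: returns 1 if the score is positive, else 0."""
    return int(w @ x + b > 0)

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast-gradient-sign attack on a logistic-regression model.

    The gradient of the logistic loss with respect to the input x is
    (p - y_true) * w, so stepping in the sign of that gradient pushes
    the input toward misclassification with a bounded perturbation.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

# Hypothetical model and input, chosen so the attack succeeds.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])   # correctly classified as class 1

x_adv = fgsm_perturb(x, w, b, y_true=1, eps=0.5)
print(predict(x, w, b), predict(x_adv, w, b))  # prediction flips: 1 0
```

Real physical adversarial attacks work on the same principle, but the perturbation is realized as something a camera can see, such as a printed sticker or patterned patch, rather than a direct change to pixel values.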