How trustworthy is AI?

Artificial intelligence (AI) and machine learning (ML) are two of the hottest buzzwords within the broader wave of technological change sweeping through our world under the banner of the Internet of Things (IoT). Yet for all their promised benefits, there is a fear that AI programs could go rogue and turn on us, or even be hacked by other AI programs.
Researchers from Harvard University demonstrated how medical systems that use AI can be manipulated by attacking their image-recognition models, getting them to see things that were not there. The attack program finds the best pixels to manipulate in an image, creating adversarial examples that push a model into identifying an object incorrectly and thus cause false diagnoses.
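
A minimal sketch of one well-known way to build such adversarial examples, the fast gradient sign method (FGSM), is shown below. The model, image dimensions and labels are illustrative assumptions, not details of the Harvard attack, which used more sophisticated, targeted perturbations.

```python
# Sketch of FGSM: nudge each pixel slightly in the direction that most
# increases the model's loss, so the image looks unchanged to a human
# but the classifier's prediction can flip.
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed by +/-epsilon per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # The perturbation is tiny per pixel but chosen adversarially.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range

# Usage with a stand-in classifier (a real attack would target, say,
# a diagnostic CNN trained on medical scans):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # dummy 28x28 greyscale "scan"
y = torch.tensor([3])          # dummy true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # perturbation bounded by epsilon
```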

Another doomsday scenario came from the RAND Corporation, a US policy think-tank, which described several scenarios in which ML technology tracks and sets the targets of nuclear weapons. This would involve AI gathering and presenting intelligence to the military and government leaders who make the decisions to launch weapons. If the AI were compromised, it could be fooled into presenting false intelligence and so steer those leaders towards the wrong decision.

Hackers love artificial intelligence as much as everyone else in the technology space and are increasingly tapping AI to improve their phishing attacks. Anup Ghosh, a cyber-security strategist, recently reported: “The evidence is out there that machines are far better at crafting emails and tweets that get humans to click. Security companies that fight these bad guys will also have to adopt machine learning.”
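
To make the defensive side of that point concrete, the sketch below trains a toy phishing-email classifier. The sample emails, labels and model choice are illustrative assumptions; a real deployment would train on a large labelled corpus with far richer features.

```python
# Toy phishing-email classifier: TF-IDF word features fed into a
# logistic regression, the simplest common baseline for text filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password now",       # phishing
    "Urgent: confirm your bank details to avoid suspension",  # phishing
    "Meeting moved to 3pm, agenda attached",                  # legitimate
    "Quarterly report draft for your review",                 # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

# Flag a new message (prints array([1]) if classed as phishing).
print(classifier.predict(["Please verify your password immediately"]))
```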

An AI security arms race is therefore likely, as hackers’ ML-powered attacks are met with cyber-security professionals’ ML-powered countermeasures.

A newer concern around AI is regulatory, specifically the General Data Protection Regulation (GDPR). Is it permissible for a user to give an application permission to make automated decisions on their behalf? If so, must that permission be accompanied by a comprehensible explanation of how the AI makes decisions and how those decisions may affect the user? That could be a problem for companies developing AI so advanced that nobody fully understands how it reaches its decisions. It is hard to know how all this will play out in practice: from a technical perspective, the level of granularity GDPR requires in explaining automated decisions is unclear. Until this is known, some innovators may choose to forge ahead with their super algorithms, while others, worryingly, may bar European citizens from some highly valuable functionality.
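
One plausible shape for the “comprehensible explanation” GDPR gestures at is to report, alongside each automated decision, the features that drove it. The sketch below does this for a linear model; the loan-style features, data and weights are invented for illustration and not drawn from any real scoring system.

```python
# For a linear model, each feature's contribution to a decision is simply
# coefficient * feature value, which can be printed in plain language.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "missed_payments"]  # hypothetical
X = np.array([[60, 0.2, 0], [25, 0.6, 3], [45, 0.3, 1], [20, 0.7, 4]])
y = np.array([1, 0, 1, 0])  # 1 = approve, 0 = decline (toy labels)

model = LogisticRegression().fit(X, y)

applicant = np.array([[30, 0.5, 2]])
decision = model.predict(applicant)[0]
contributions = model.coef_[0] * applicant[0]  # per-feature influence

print("decision:", "approve" if decision else "decline")
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"  {name}: {c:+.2f}")
```

This only works so neatly because the model is linear; the article’s point is precisely that far more opaque models may not admit any comparably simple account of their decisions.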

Read the full article on page 20 of Network Security Magazine.