AI and ML

The Security Conundrum of Artificial Intelligence/Machine Learning

Whilst Artificial Intelligence (AI) and Machine Learning (ML) are two of the biggest buzzwords right now, especially within the broader wave of technological change sweeping through our world under the banner of the Internet of Things (IoT), they are, in fact, different.

AI is the concept of machines carrying out tasks in a smart way. ML is an application of AI, based on the premise that the machine is given data and left to learn for itself.
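
As a simple illustration of 'given data and left to learn for itself', the sketch below fits a model to example pairs without ever being told the underlying rule (here, y = 2x). It assumes scikit-learn purely for illustration:

```python
# A minimal sketch of machine learning: the model is never given the rule
# y = 2x; it infers it from the example data alone.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]     # inputs the machine is given
y = [2, 4, 6, 8]             # observed outcomes
model = LinearRegression().fit(X, y)
print(model.predict([[5]]))  # ~10.0, learned without explicit rules
```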

Though the benefits of both look good, there is a fear that these programmes could ‘go rogue’, turning on us, or be hacked by other AI programmes.

Researchers from Harvard University demonstrated how medical systems using AI could be manipulated by an attack on image recognition models, getting them to see things that were not there. The attack programme found the best pixels to manipulate in an image, creating adversarial examples that pushed the models into identifying an object incorrectly and thus caused false diagnoses.
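
To illustrate the idea, below is a minimal sketch of one well-known way to build such adversarial examples, the Fast Gradient Sign Method (FGSM). This is a generic PyTorch illustration, not the Harvard team's actual attack; the model, image, and label are assumed placeholders, and pixel values are assumed to lie in [0, 1]:

```python
# A minimal FGSM sketch: nudge each pixel slightly in the direction that
# increases the model's loss, so the image looks unchanged to a human but
# is classified incorrectly by the model.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of its gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep valid pixel range
```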

Another doomsday scenario comes from the RAND Corporation, a US policy think tank, which describes several scenarios in which AI technology tracks and sets the targets of nuclear weapons. This involves AI gathering and presenting intelligence to military and government leaders, who make the decisions to launch weapons. If the AI is compromised, it could be fooled into making the wrong decision, leading to ‘the button’ being pressed incorrectly.

Hackers love AI as much as everyone else in the technology space and are increasingly tapping into it in order to improve their phishing attacks.

Anup Ghosh, a cybersecurity strategist, said: “The evidence is out there that machines are far better at crafting emails and tweets that get humans to click. Security companies that fight these bad guys will also have to adopt machine learning.”

An AI security arms race is likely coming, as hackers’ machine-learning-powered attacks are met with cybersecurity professionals’ machine-learning-powered countermeasures.

This is already seen in training applications that educate users to spot phishing attacks. Phishing is the process of attempting to acquire sensitive information, such as usernames, passwords, and credit card details, by masquerading as a trustworthy entity, typically via bulk email crafted to evade spam filters.

Emails claiming to be from popular social websites, banks, auction sites, or IT administrators are commonly used to lure the unsuspecting public. It’s a form of criminally fraudulent social engineering. These emails are so well crafted that many users click on the offered links or attachments, launching the attack.
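
On the countermeasures side, a minimal sketch of how an ML filter might learn to recognise such emails is shown below, using scikit-learn on a handful of messages. The training emails and labels are invented purely for the example:

```python
# A toy phishing classifier: learn word patterns with TF-IDF features and
# a Naive Bayes model, then score a new message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, click here to verify your password",
    "Invoice attached for last month's consultancy work",
    "You have won a prize, confirm your credit card details now",
    "Minutes from Tuesday's project meeting attached",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)
# Likely flagged as phishing: the wording resembles the phishing examples.
print(model.predict(["Verify your password to avoid account suspension"]))
```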

By using AI and ML techniques, email training systems can learn a company’s normal email behaviour and craft messages that simulate a phishing attack on the organisation. The system then monitors open rates and, when triggered, can run a short training video showing the user the evidence they missed that the email was fraudulent. Deploying such systems can save companies from expensive shutdowns or rebuilds caused by ransomware outbreaks.
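
A bare-bones sketch of that monitor-and-train loop might look like the following; the function names (send_test_email, play_training_video) are hypothetical placeholders rather than any particular product's API:

```python
# A minimal sketch of a simulated-phishing campaign: send a crafted test
# email to each user and educate anyone who opens it.
def run_campaign(users, send_test_email, play_training_video):
    caught = []
    for user in users:
        opened = send_test_email(user)   # returns True if the user clicked
        if opened:
            play_training_video(user)    # explain the evidence they missed
            caught.append(user)
    return caught                        # report for InfoSec managers
```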

A new concern around AI is regulation, specifically GDPR. Is it permissible for a user to give an application permission to make automated decisions on their behalf? If so, must that permission be accompanied by a comprehensible explanation of how the AI makes decisions and how these may impact the user? This could be a problem for companies developing AI.

It is hard to make a definitive statement about how all this will play out in practice. From a technical perspective, the level of granularity GDPR requires in explaining automated decisions is unclear. Until this is known, some innovators may choose to forge ahead with super algorithms; others, worryingly, may block European citizens from using some highly valuable functionality.

What is needed in the AI world is to ensure that the fundamental code is sound and not compromised by human error.

All software, no matter how well written, has bugs. These bugs can, if an attacker becomes aware of them, become a vector for attack. It is difficult for even the most skilled programmers to see the flaws in their own work; an outside review of the code will often turn up potentially dangerous vulnerabilities that the development team has missed.
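
As an illustration of the kind of flaw an outside reviewer often catches, consider the classic SQL injection below: the author may see nothing wrong with the first function, yet it lets an attacker rewrite the query. The table and column names are invented for the example:

```python
# A classic vulnerability and its fix, using Python's built-in sqlite3.
import sqlite3

def find_user_unsafe(conn, username):
    # BUG: user input is pasted straight into the SQL statement, so input
    # like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Fix: a parameterised query keeps data out of the SQL syntax.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```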

With a source code review from Digital Pathways, you can minimise the number of vulnerabilities in your software and gain the assurance you need that your source code keeps to the very best security practices.

When code is developed, organisations need some shared accountability to ensure that all future application development remains secure. This requires security issues to be discussed at the beginning of each development cycle and then integrated throughout. Code should be regularly tested during the development phases and signed off, ensuring copies are securely kept to allow a controlled roll back to a known, previously verified position, should the need arise.
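
One way to implement that sign-off and verified roll-back step, sketched below under assumed file names, is simply to record a cryptographic hash of each approved build and check stored copies against it before rolling back:

```python
# A minimal sketch of build sign-off: store the SHA-256 of each approved
# build in a ledger, and verify a stored copy before any roll-back.
import hashlib
import json
from pathlib import Path

LEDGER = Path("releases.json")  # illustrative location

def sign_off(artifact: Path):
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    records = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    records[artifact.name] = digest
    LEDGER.write_text(json.dumps(records, indent=2))

def verify_before_rollback(artifact: Path) -> bool:
    # True only if the stored copy still matches its signed-off hash.
    records = json.loads(LEDGER.read_text())
    return hashlib.sha256(artifact.read_bytes()).hexdigest() == records[artifact.name]
```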

AI and ML are, however, having a positive impact within data security. They have the ability not only to ingest information but also to react, positively blocking attacks or ransomware outbreaks. Such systems combine Security Information & Event Management (SIEM) and Extended Detection & Response (XDR) with Security Orchestration, Automation & Response (SOAR) and Incident Response Management (IRM), all in a single command-and-control interface.

Such a system integrates disparate technologies to improve security monitoring, operations & incident response capabilities across SOC teams, Network & Security Operations, Security Analysts, InfoSec Managers, CTOs & CISOs. All interested parties can be aware of an incident but need not take action, as the intelligence of the system can be left to take the steps needed to stop the attack.
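
Conceptually, the automated response reduces to a decision point like the sketch below; all names and thresholds here are hypothetical, not any specific SIEM/SOAR product's API:

```python
# A minimal sketch of automated incident response: contain high-confidence
# threats immediately and inform the team, rather than waiting for a human.
def handle_alert(alert, block_host, notify):
    if alert["kind"] == "ransomware" and alert["confidence"] > 0.9:
        block_host(alert["source_ip"])   # automatic containment
        notify(f"Blocked {alert['source_ip']}: {alert['kind']}")
    else:
        notify(f"Logged for analyst review: {alert}")
```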

It has been reported that Elon Musk, speaking with Demis Hassabis, a leading creator of AI, said his ultimate goal at SpaceX was the most important project in the world: interplanetary colonisation. Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonise Mars, so that we would have a bolthole if AI goes rogue and turns on humanity. Amused, Hassabis said that AI would simply follow humans to Mars!

AI and ML are with us and will remain so, with the development of human-like AI seen by technologists as an inevitability. But will they overcome the challenges of solving problems that are difficult for a computer yet relatively simple for humans? How many issues will we face before we can trust the code that runs the programmes, if ever?

Only time will tell.

Every organisation can benefit from added protection. Call us on 0844 586 0040, or email [email protected] and we’ll be happy to advise you.