Artificial Intelligence (AI) and Machine Learning (ML) in Cybersecurity
The buzzwords Artificial Intelligence (AI) and Machine Learning (ML) are often used interchangeably. However, they are not the same thing, and conflating them can lead to confusion.
What is Machine Learning?
Machine Learning is a type of Artificial Intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.
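To make that concrete, the sketch below (an invented illustration, not taken from any product) shows the idea in a security context: instead of hand-writing rules to spot phishing URLs, we give a model a few labelled examples and let it learn the pattern itself. The features and data are hypothetical.

```python
# Minimal sketch: a model "learns" to flag suspicious URLs from examples,
# rather than being explicitly programmed with rules.
# The features and tiny dataset below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [url_length, number_of_dots, uses_https (1/0)]
X = [
    [20, 1, 1],   # short, simple, https       -> legitimate
    [25, 2, 1],
    [90, 6, 0],   # long, many dots, no https  -> phishing
    [75, 5, 0],
]
y = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

model = DecisionTreeClassifier().fit(X, y)

# The model now predicts outcomes for URLs it has never seen;
# its accuracy improves as more labelled examples are supplied.
print(model.predict([[80, 5, 0]]))  # -> [1] (flagged as phishing)
```

The more labelled examples the model sees, the more accurate its predictions become; that improvement without new hand-written rules is the essence of Machine Learning.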
What is Artificial Intelligence?
Artificial Intelligence (AI) is the simulation of human intelligence by machines, especially computer systems. The process includes learning (the acquisition of information and the rules for using it), reasoning (using those rules to reach approximate or definite conclusions) and self-correction.
AI is already used in many settings, including our buildings: for example, to manage the environment for people working in an office. By monitoring the number of people in any area, AI can decide whether the air conditioning should be switched on, or whether lowering the shades or opening the windows will suffice.
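A hypothetical sketch of that kind of decision logic might look as follows; the occupancy thresholds, temperature figure and actions are invented for illustration and are not drawn from any real building system.

```python
# Hypothetical sketch of occupancy-driven environmental control.
# Thresholds and actions are invented for illustration only.
def choose_climate_action(occupancy: int, indoor_temp_c: float) -> str:
    """Decide how to cool a zone based on how many people are in it."""
    if occupancy == 0:
        return "do nothing (zone is empty)"
    if indoor_temp_c < 23.0:
        return "do nothing (already comfortable)"
    # Light load: passive measures may suffice.
    if occupancy <= 5:
        return "lower shades / open windows"
    # Heavy load: passive measures will not keep up.
    return "switch on air conditioning"

print(choose_climate_action(occupancy=12, indoor_temp_c=26.5))
# -> switch on air conditioning
```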
And AI will continue to expand into our daily business and personal lives.
Can Artificial Intelligence (AI) programmes ‘go rogue’?
Although the benefits look good, there is a fear that such AI programmes could ‘go rogue’ and turn on us, or be hacked by other AI programmes.
Hackers love Artificial Intelligence (AI) and Machine Learning (ML) as much as everyone else in the technology space and are increasingly using them to improve their phishing attacks. The need for innovative and robust data security therefore becomes even more important.
Imagine a hacker taking over a building’s security system by accessing the system’s intelligence and, under the pretext of a ‘gunman threat’, moving all key personnel to one room. Once the key people are inside, the system uses the AI’s facial-identification capability to confirm who is present and locks the room, then sends ransom threats to every computer screen in the building, using ransomware tactics such as a ticking countdown clock to make people react quickly.
Although AI looks good, many of our current systems are not so ‘smart’ and use old technology. Simply bolting on AI will not deliver the perceived benefits, as it will be held back by the lack of integration. Given the high cost of replacing systems such as Heating, Ventilation and Air Conditioning (HVAC), it will be some time before the platforms needed to exploit the benefits of AI are available.
The GDPR and Artificial Intelligence (AI) Conundrum
The General Data Protection Regulation (GDPR) poses another conundrum. Will it be permissible to let a user give an application permission to make automated decisions on their behalf, as recommendation systems do? These were first implemented on music content sites but now extend to many different industries.
For example, the AI system may learn a user’s content preferences and push content that fits those criteria. This can help companies reduce bounce rate by keeping the user interested. Likewise, what the AI has learned can be used to craft better-targeted content for users with similar interests.
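As a rough, simplified sketch of how such a system might match content to a learned preference profile (the profile, articles and scoring here are invented; real recommenders are far more sophisticated):

```python
# Simplified sketch of content-based recommendation.
# The user's learned preferences and the content tags are invented examples.
user_profile = {"security": 0.9, "ai": 0.7, "sport": 0.1}

articles = {
    "Phishing trends this year": {"security": 1.0, "ai": 0.3},
    "Football transfer news":    {"sport": 1.0},
    "ML for threat detection":   {"security": 0.6, "ai": 1.0},
}

def score(tags: dict) -> float:
    """Weight each article's tags by the user's learned preferences."""
    return sum(user_profile.get(tag, 0.0) * weight for tag, weight in tags.items())

# Push the highest-scoring content first to keep the user engaged.
for title in sorted(articles, key=lambda t: score(articles[t]), reverse=True):
    print(f"{score(articles[title]):.2f}  {title}")
```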
However, the GDPR will see the AI application as holding users’ Personally Identifiable Information (PII), which might include age, gender and location, in order to present what it has learnt from one user to others with similar profiles. The GDPR requires that this data be kept secure and used appropriately; with an AI program constantly learning and sampling data, that becomes a problem.
And if a user does give permission for their data to be modelled, will it be accompanied by a comprehensible explanation of how the AI makes decisions and how those decisions may affect that user? This would be very difficult to achieve: the GDPR calls for ‘clear language’, and what an AI learns is far from easy to explain.
From a technical perspective, the level of granularity the GDPR requires when explaining automated decisions is unclear. Until the picture is clarified, some innovators may choose to forge ahead with super algorithms; others, worryingly, may bar European citizens from some highly valuable functionality.
The Three Laws of Robotics
When thinking about automating important decisions and giving high-stakes autonomy to AI machines, particular attention should be given to constraining their behaviour by defining what is desired, what is acceptable and what is not. This is what the Three Laws of Robotics, written by the science-fiction author Isaac Asimov, say:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The need for human intervention
AI’s power will need to be controlled, and the Three Laws of Robotics should be the mantra for AI programs. It should be mandated in all code that AI programs ask for human intervention when unusual situations are detected, or when the computed uncertainty in a prediction or decision is above a certain threshold. This may go against the vision of AI but, until we can have total trust in the underlying code used to develop it, we must show caution. Remember, humans are still writing the code and can make mistakes or, more worryingly, add code that allows future control of the AI for malicious ends.
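In code, that ‘ask a human’ rule could look something like the sketch below; the 90% threshold and the example actions are illustrative assumptions, not a standard.

```python
# Sketch of the human-in-the-loop rule described above: the program acts
# on its own only when its confidence clears a threshold; otherwise it
# defers to a person. Threshold and actions are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.90

def decide(label: str, confidence: float) -> str:
    """Act autonomously only when the model is confident enough."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Computed uncertainty is too high: hand the decision to a human.
        return f"ESCALATE to operator (only {confidence:.0%} sure it is '{label}')"
    return f"ACT autonomously: '{label}' ({confidence:.0%} sure)"

print(decide("open fire doors", 0.97))   # confident  -> act
print(decide("lock down zone B", 0.62))  # uncertain  -> ask a human
```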
It is almost impossible to say how an organisation can trust any AI unless it has access to the source code and the ability, or contacts, to read and debug it. As AI is introduced, it will fall to facilities teams to question what level of code review has been undertaken within the AI module. This might be possible if the designer of the AI is a large vendor that can show in-depth test results and other customer implementations, but most AI vendors leading the technology revolution are small and have neither the client base nor the volume of test data.
At this point, management faces a difficult decision about how far to ‘dip their toe’ into AI. It is a bit like autonomous cars: the technology works, but governments are still wary of bringing in the legislation to permit it.
AI is with us and will be increasingly integrated into our lives. Whilst the potential benefits are far-reaching, making lives better, the environment cleaner and our personal and business lives more efficient, we must be aware of the threats it can create and take the appropriate action from the very beginning.
Need advice on Artificial Intelligence and Machine Learning?
Every organisation can benefit from added protection. If you have concerns regarding Artificial Intelligence and Machine Learning, give us a call on 0844 586 0040, or email [email protected], and we’ll be happy to advise you.