Like any tool, AI can be a force for good or bad. AI stands for artificial intelligence: software that simulates aspects of human thinking, mimicking traits like learning, conversation, and problem-solving. Rather than strictly following pre-set steps, AI can weigh the context of a situation and learn from historical data.
The technology isn’t new, but until recently it’s mainly been in the hands of specialists. Lately, tools like ChatGPT have put AI within reach of ordinary people, harnessing its incredible ability to analyse and interpret data, but also sparking heated public debate about the technology’s impacts on humans.
What’s less known is that AI is also helping cybersecurity experts, including N4L, as we work hard to manage increasingly sophisticated cyber threats. If your school is part of our Managed Network, you may be reassured to know our security team and some of our partners already use AI, and it’s helping keep your school safer and more secure from online threats.
Helping spot intruders
Among many capabilities, AI can strengthen systems, spot threats earlier and shut them down before they cause damage. And, although the technology is still developing, we’re already seeing it in action – this is just the beginning.
A 2023 study suggested almost a third of cybercrime is never spotted. Some attackers can sneak into your network through a back door and quietly go about their business. These attacks are called APTs – advanced persistent threats – and for a human to find evidence of one before it does damage is almost impossible. You would need to trawl through thousands of lines of code and logs and, even then, you may not notice anything strange.
By contrast, AI can analyse vast volumes of data in seconds, comparing behaviour and code against what’s happened in the past and what’s expected. It can uncover anomalies and use them as leads for further investigation, helping reveal attacks that might otherwise have stayed under the radar.
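To make the idea concrete, here’s a minimal, purely illustrative sketch – not N4L’s actual tooling – of how an anomaly detector might flag unusual activity in a log. It assumes hypothetical hourly counts of failed logins and flags any hour that deviates sharply from the historical average:

```python
# Illustrative only: a toy anomaly detector that flags hours with
# unusual activity volumes, using a simple standard-deviation threshold.
from statistics import mean, stdev

def find_anomalies(event_counts, threshold=2.0):
    """Return indices of entries that deviate more than `threshold`
    standard deviations from the average of all entries."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:  # perfectly uniform activity: nothing stands out
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mu) / sigma > threshold]

# Hourly counts of failed logins; the spike at index 5 stands out.
counts = [12, 9, 11, 10, 8, 95, 12, 10]
print(find_anomalies(counts))  # → [5]
```

Real security platforms use far richer models than a single threshold, but the principle is the same: establish a baseline of normal behaviour, then surface whatever departs from it for a human to investigate.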
However, technology can’t do it all alone – we all have a part to play in being vigilant against online threats. Strengthening the ‘human firewall’ is still the most effective way schools can stay safer and more secure online.
Faster assessment and response times
Previously, when it became clear an organisation’s system was under attack, cyber experts would often have to work with an incomplete picture of the causes and impacts. They needed time to assess the scene, access and analyse data, and then follow the clues to determine the nature of the breach and how best to respond.
At N4L, we use technology that blocks hundreds of thousands of safety and security events in schools every day. As part of this process, we use machine learning and AI tools built into the technology (for example, our Email Protection tool, Proofpoint) to analyse and assess logs, giving us faster and better intelligence on threats. This speeds up our analysis, investigation, incident response and mitigation, meaning quicker resolution of cybersecurity threats potentially impacting schools – and a more secure school for you.
Empowering schools and kura
More broadly, AI is becoming part of most of the services schools use every day, whether that’s helping create lesson plans, understanding and translating information, using Gemini in Google products and Copilot in Microsoft 365, or making smart use of tools like ChatGPT. As this technology continues to evolve and be adopted globally, the education sector will need to keep responding and making decisions about it, because the pace of innovation is expected to be exponential. With these tools being used more frequently, it’s important to consider the risks of using AI in your school.
Protecting users
Human behaviour tends to be the weakest link in the cybersecurity chain, and AI technology is helping lessen the reliance on that link.
As mentioned above, if you have N4L’s Email Protection, AI is used to help filter out advanced email threats and catch unsafe emails before they reach your inbox. It does this at far greater volume and speed than was possible before.
Google and Microsoft are already building more AI-driven protections into the tools many schools use. Your IT support should be able to guide you on how to get greater value from these tools and how they can better help protect your school.
Racing against the cybercriminals
AI doesn’t get tired or need holidays – it can be on watch around the clock, finding and addressing issues where possible, and empowering people to make faster, more accurate decisions. This shifts the cybersecurity game from reactive to proactive and preventative.
The best protection, though, continues to be keeping your online learning environments safer and more secure: staying on top of your own cybersecurity, providing continuous education for kaiako and ākonga, and raising awareness of good digital citizenship.
If you’d like to hear more from N4L, or see more blogs like this, why not subscribe?