Splunk Urges Australian Organisations to Secure LLMs
Splunk’s SURGe team has assured Australian organisations that securing AI large language models against common threats, such as prompt injection attacks, can be accomplished using existing security tooling. However, security vulnerabilities may arise if organisations fail to address foundational security practices. Shannon Davis, a Melbourne-based principal security strategist at Splunk SURGe, told TechRepublic that Australia was showing increasing security awareness regarding LLMs in recent months. He described last year as the “Wild West,” where many rushed to experiment with LLMs without prioritising security. Splunk’s own investigations into such vulnerabilities used the Open Worldwide Application Security Project’s “Top 10 for Large Language Models” as a framework. The research team found that organisations can mitigate many security risks by leveraging existing cybersecurity practices and tools. The top security risks facing Large Language Models In the OWASP report, the research team outlined three vulnerabilities as critical to address in 2024. Prompt injection attacks OWASP defines prompt injection as a vulnerability that occurs when an attacker manipulates an LLM through crafted inputs. There have already been documented cases worldwide …