Hackers target AI infrastructure platforms because these systems hold large amounts of valuable data, sophisticated algorithms, and significant computational resources.
Compromising such a platform gives attackers access to proprietary models and sensitive information, along with the ability to manipulate AI outputs.
Cybersecurity researchers at Wiz Research recently discovered a flaw in the Ollama AI infrastructure platform that enables threat actors to execute code remotely.
Ollama AI Platform Flaw
The critical remote code execution vulnerability, tracked as “CVE-2024-37032” and dubbed “Probllama,” affects Ollama, a popular open-source project for AI model deployment with more than 70,000 GitHub stars.
This vulnerability has been responsibly disclosed and mitigated, and users are encouraged to update to Ollama version 0.1.34 or later.
As of June 10, numerous internet-facing Ollama instances were still running vulnerable versions, underscoring the need for users to patch their installations against attacks that exploit this flaw.
Tools of this kind often lack standard security features such as authentication, which leaves them open to attack by threat actors.
Over 1,000 exposed Ollama instances were found hosting various AI models without any protection.
Wiz researchers identified a flaw in the Ollama server that leads to arbitrary file overwrites and remote code execution. The issue is especially severe on Docker installations, where the server runs with root privileges.
The vulnerability stems from insufficient input validation in the /api/pull endpoint, which allows path traversal via malicious manifest files served from a private registry.
By planting traversal sequences in a manifest, an attacker can read and write arbitrary files on the server.
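For illustration only, the bug class behind Probllama can be sketched in a few lines of Python. This is a hypothetical reconstruction of the pattern (an attacker-controlled digest string from a manifest joined onto a storage path without validation), not Ollama’s actual code; the directory, function names, and digest format check are assumptions.

```python
import os
import re

MODELS_DIR = "/root/.ollama/models/blobs"  # illustrative blob storage path

def unsafe_blob_path(digest: str) -> str:
    # Vulnerable pattern: a digest taken from a registry manifest is joined
    # onto the storage path with no validation, so a value such as
    # "../../../../etc/ld.so.preload" escapes the blobs directory.
    return os.path.join(MODELS_DIR, digest)

def safe_blob_path(digest: str) -> str:
    # Hardened pattern: require the expected digest format, then confirm the
    # resolved path still lives inside the blobs directory.
    if not re.fullmatch(r"sha256[:-][0-9a-f]{64}", digest):
        raise ValueError("unexpected digest format")
    base = os.path.realpath(MODELS_DIR)
    path = os.path.realpath(os.path.join(MODELS_DIR, digest))
    if not path.startswith(base + os.sep):
        raise ValueError("path escapes the blobs directory")
    return path

if __name__ == "__main__":
    crafted = "../../../../etc/ld.so.preload"
    print(os.path.normpath(unsafe_blob_path(crafted)))  # /etc/ld.so.preload
```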
In Docker installations with root privileges, this can escalate into remote code execution by tampering with /etc/ld.so.preload to load a malicious shared library.
The payload is then triggered by querying the /api/chat endpoint, which spawns a new process that loads the attacker’s shared library.
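The escalation works because the dynamic linker loads every shared library listed in /etc/ld.so.preload into each newly started process, so writing to that file turns a file-write primitive into code execution. From the defender’s side, a rough sketch like the one below can flag unexpected entries in that file; the allowlist of “normal” library directories is an assumption to tune for your own systems.

```python
import os

PRELOAD_FILE = "/etc/ld.so.preload"
# Assumed allowlist of directories where legitimately preloaded libraries
# usually live; adjust for your own environment.
EXPECTED_DIRS = ("/lib/", "/lib64/", "/usr/lib/", "/usr/lib64/")

def check_preload(path: str = PRELOAD_FILE) -> list[str]:
    """Return suspicious-looking entries from /etc/ld.so.preload, if any."""
    findings: list[str] = []
    if not os.path.exists(path):
        return findings  # absent on most systems: nothing is preloaded
    with open(path) as fh:
        for line in fh:
            for entry in line.split():  # entries are whitespace-separated paths
                if not entry.startswith(EXPECTED_DIRS):
                    findings.append(f"unexpected location: {entry}")
                elif not os.path.exists(entry):
                    findings.append(f"dangling entry: {entry}")
    return findings

if __name__ == "__main__":
    for finding in check_preload():
        print(finding)
```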
Even non-root installations remain at risk, since the arbitrary file read primitive can be abused on its own.
Security teams are therefore advised to update Ollama instances immediately and to avoid exposing them to the internet without authentication.
While Linux installations bind to localhost by default, Docker deployments expose the API server publicly, which significantly increases the risk of remote exploitation.
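As a quick self-check, a small script along the following lines can tell whether an Ollama instance is reachable and whether it is still running a pre-fix build. It assumes the instance exposes the standard /api/version endpoint on the default port 11434 and uses a deliberately simple version comparison.

```python
import json
import urllib.request

FIXED = (0, 1, 34)  # first release containing the fix for CVE-2024-37032

def version_tuple(version: str) -> tuple[int, ...]:
    # Crude parse: "0.1.33" -> (0, 1, 33); pre-release suffixes are dropped.
    return tuple(int(part) for part in version.split("-")[0].split(".")[:3])

def check_instance(host: str = "127.0.0.1", port: int = 11434) -> str:
    url = f"http://{host}:{port}/api/version"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            version = json.load(resp).get("version", "")
    except OSError:
        return f"{host}:{port}: not reachable (expected if it is not exposed)"
    if not version:
        return f"{host}:{port}: reachable, but version could not be determined"
    if version_tuple(version) < FIXED:
        return f"{host}:{port}: Ollama {version} is vulnerable, update to 0.1.34 or later"
    return f"{host}:{port}: Ollama {version} is patched"

if __name__ == "__main__":
    print(check_instance())
```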
This highlights the need for robust security measures in rapidly evolving AI technologies.
Disclosure Timeline
May 5, 2024 – Wiz Research reported the issue to Ollama.
May 5, 2024 – Ollama acknowledged the receipt of the report.
May 5, 2024 – Ollama notified Wiz Research that they committed a fix to GitHub.
May 8, 2024 – Ollama released a patched version.
June 24, 2024 – Wiz Research published a blog about the issue.