Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Nov 04, 2024Ravie LakshmananAI / Vulnerability

Google has announced the discovery of a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM)-assisted framework called Big Sleep (formerly known as Project Naptime).

The tech giant described the development as the first real-world vulnerability uncovered using the artificial intelligence (AI) agent.

“We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software,” the Big Sleep team said in a blog post shared with The Hacker News.


The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, typically resulting in a crash or arbitrary code execution.

“This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of the valid memory location, or when a negative index is used,” according to a Common Weakness Enumeration (CWE) description of the bug class.
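The bug class can be illustrated with a toy sketch (not SQLite’s actual code): here a C-style array access is modeled in Python, with an index that is decremented below zero by faulty arithmetic. In real C code the negative index would silently read adjacent stack memory rather than raise an error.

```python
def c_style_read(buf, idx):
    """Model a C array access: only indexes in [0, len(buf)) are valid.

    In C, ``buf[idx]`` with a negative idx still compiles and runs,
    silently reading stack memory *before* the buffer -- the stack
    buffer underflow class described above. Here we detect it instead.
    """
    if idx < 0:
        raise IndexError(f"underflow: index {idx} precedes buffer start")
    if idx >= len(buf):
        raise IndexError(f"overflow: index {idx} is past buffer end")
    return buf[idx]

buf = [10, 20, 30, 40]
idx = 1
idx -= 3  # index decremented below zero, e.g. by bad pointer arithmetic
try:
    c_style_read(buf, idx)
except IndexError as exc:
    print(exc)  # underflow: index -2 precedes buffer start
```

Python raises here because the check is explicit; C performs no such check, which is why memory-safety tooling or careful analysis is needed to catch this class of bug.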

Following responsible disclosure, the flaw was fixed in early October 2024. Notably, the vulnerability was found in a development branch of the library, meaning it was caught before making it into an official release.

Google first detailed Project Naptime in June 2024, presenting it as a technical framework to improve automated approaches to vulnerability discovery. It has since evolved into Big Sleep as part of a broader collaboration between Google Project Zero and Google DeepMind.

The idea behind Big Sleep is to leverage an AI agent to simulate human behavior when identifying and demonstrating security vulnerabilities, taking advantage of an LLM’s code comprehension and reasoning abilities.


This entails using a suite of specialized tools that allow the agent to navigate the target codebase, run Python scripts in a sandboxed environment to generate inputs for fuzzing, and debug the program to observe results.
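As a rough illustration of the fuzz-input generation step (a minimal sketch only; Big Sleep’s actual tooling is not public, and the fragment pool and harness below are invented for demonstration), a Python script can assemble random SQL statements and feed them to SQLite, recording which ones the engine rejects:

```python
import random
import sqlite3

# Hypothetical fragment pool -- not derived from Big Sleep's tooling.
FRAGMENTS = ["SELECT", "1", "''", "x", "(", ")", ",", "WHERE", "NULL", "-1"]

def random_statement(rng, length=6):
    """Assemble a (usually malformed) SQL statement from fragments."""
    return " ".join(rng.choice(FRAGMENTS) for _ in range(length))

def fuzz(n=100, seed=0):
    """Run n random statements against an in-memory SQLite database,
    returning those the engine rejected via its error paths."""
    rng = random.Random(seed)
    rejected = []
    conn = sqlite3.connect(":memory:")
    for _ in range(n):
        stmt = random_statement(rng)
        try:
            conn.execute(stmt)
        except sqlite3.Error:
            rejected.append(stmt)
    conn.close()
    return rejected

print(f"{len(fuzz())} of 100 statements rejected")
```

In an agent loop of the kind the article describes, inputs that trigger crashes rather than clean parse errors would then be triaged in a debugger.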

“We believe that this work has tremendous defensive potential. Finding vulnerabilities in software before it’s even released means that there’s no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them,” Google said.

However, the company also emphasized that the results are still experimental, noting that at present “a target-specific fuzzer would likely be at least as effective (at finding vulnerabilities).”
