Prompt Injection Vulnerability in LangChain through 0.0.131 Allows Arbitrary Code Execution

CVE-2023-29374 · CRITICAL Severity

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via Python's exec method. The chain asks the LLM to translate a natural-language maths question into Python code and then executes that code in an unsandboxed Python REPL, so attacker-controlled text that reaches the prompt can smuggle arbitrary instructions into the generated code.
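
The snippet below is a minimal sketch of the vulnerable pattern, not LangChain's actual source: the function run_math_chain and the hard-coded model replies are illustrative stand-ins for the LLM call, and the fence marker is assembled with chr(96) purely to keep this listing self-contained.

```python
import re

# Built with chr(96) only to avoid nesting literal backtick fences inside
# this listing; the model's reply really contains a ```python ... ``` block.
FENCE = chr(96) * 3

def run_math_chain(llm_output: str) -> str:
    """Extract the fenced Python block from the model's reply and exec it."""
    match = re.search(rf"{FENCE}python\n(.*?){FENCE}", llm_output, re.DOTALL)
    if not match:
        return llm_output.strip()
    scope: dict = {}
    exec(match.group(1), scope)  # the dangerous step: no sandbox, full host privileges
    return str(scope.get("answer", ""))

# A benign question yields harmless arithmetic...
benign_reply = f"{FENCE}python\nanswer = 13 ** 0.3432\n{FENCE}"
print(run_math_chain(benign_reply))  # ~2.4116

# ...but injected instructions can steer the model into emitting arbitrary
# Python, which then runs with the host's privileges (here: reading a secret).
injected_reply = (
    f"{FENCE}python\n"
    "import os\n"
    "answer = os.environ.get('OPENAI_API_KEY', '<no key set>')\n"
    f"{FENCE}"
)
print(run_math_chain(injected_reply))
```

Because exec runs in the application's own process, anyone who can influence the model's output, for example via text embedded in a document the chain is asked to reason about, can read environment variables, touch the filesystem, or open network connections. Later LangChain releases reworked LLMMathChain to evaluate expressions with the restricted numexpr evaluator rather than a Python REPL.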