Threat Research

    Between late February and March 2026, TeamPCP launched a calculated series of escalating supply chain attacks, compromising trusted open-source security tools such as Trivy and KICS, the AI gateway LiteLLM, and the official Python SDK of Telnyx. Malicious infostealer payloads were injected into GitHub Actions workflows and the PyPI registry....
    A growing share of cyber incidents now stems from supply chain attacks, with adversaries using tactics such as publishing malicious open-source libraries or hijacking developer accounts. Because these compromised libraries are pulled into countless applications and services, a single poisoned package can spread widely. In March 2026, a trojanized LiteLLM Python library uploaded to PyPI infected the systems of developers who installed it....
    Large language models (LLMs) and AI agents are increasingly integrated into browsers, search engines, and automated content-processing systems. While this expands functionality, it also introduces a new and largely unexplored attack surface....
    The report highlights a rise in model extraction ("distillation") attacks aimed at stealing proprietary AI logic, alongside the growing integration of generative AI into real-world threat operations....
    Recent advancements in the code understanding capabilities of LLMs have raised concerns about their misuse to generate novel malware. While LLMs struggle to create malware from scratch, criminals can leverage them to rewrite or obfuscate existing malware, complicating detection efforts....