TeamPCP hackers advertise Mistral AI code repositories for sale

The TeamPCP hacker group is threatening to leak source code stolen from Mistral AI unless a buyer for the data is found.
In a post on a hacker forum, the threat actor is asking $25,000 for a set of nearly 450 repositories.
Mistral AI is a French artificial intelligence company founded by former researchers from Google's DeepMind and Meta, which provides large language models (LLMs), both open source and proprietary.
In a statement to BleepingComputer, Mistral AI confirmed that hackers compromised its code repository management system in the Shai-Hulud supply-chain attack.
The incident began with the compromise of legitimate packages from TanStack and Mistral AI using stolen CI/CD credentials and legitimate workflows.
It then spread to hundreds of other software projects in the npm and PyPI registries, including UiPath, Guardrails AI, and OpenSearch.
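For teams responding to a supply-chain incident like this, the usual first step is auditing lockfiles against the advisory's list of compromised releases. A minimal sketch, assuming an npm v7+ `package-lock.json`; the package names and versions in the denylist below are placeholders, not the actual affected releases:

```python
import json

# Hypothetical denylist: replace with the actual package@version pairs
# listed in the vendor's advisory.
DENYLIST = {("example-sdk", "1.2.3"), ("another-lib", "4.5.6")}

def flag_compromised(lockfile_text: str) -> list[str]:
    """Return 'name@version' entries from an npm package-lock.json
    that match the denylist."""
    lock = json.loads(lockfile_text)
    hits = []
    # npm v7+ lockfiles list every installed package under "packages";
    # keys look like "node_modules/<name>" ("" is the root project).
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1] if path else lock.get("name", "")
        if (name, meta.get("version")) in DENYLIST:
            hits.append(f"{name}@{meta['version']}")
    return hits

sample = json.dumps({
    "name": "demo", "lockfileVersion": 3,
    "packages": {
        "": {"name": "demo", "version": "0.0.1"},
        "node_modules/example-sdk": {"version": "1.2.3"},
        "node_modules/left-pad": {"version": "1.3.0"},
    },
})
print(flag_compromised(sample))  # → ['example-sdk@1.2.3']
```

Scanning lockfiles rather than `package.json` matters here, because the latter often pins only version ranges, while the lockfile records the exact versions actually installed.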
“Yes, [the hackers] temporarily poisoned some of our SDK packages,” the company said.
TeamPCP claims to have stolen approximately 5 gigabytes of “internal repositories and source code” that Mistral uses for training, optimization, calibration, model delivery, and specification for future research and projects.
“We want a BIN of $25k, or we will sell to the best offer; it is limited to one person, and if we can’t find a buyer within a week we will leak all of this for free on the forums,” the hackers said.
The threat actor appears open to negotiation, stating that the asking price is flexible and that interested buyers are free to submit what they believe is a fair offer for the 450 repositories on sale.

Source: KELA
Mistral AI told BleepingComputer that TeamPCP was able to taint some of the company’s software development kit (SDK) packages.
In an advisory published earlier this week, the company said the breach occurred after a developer’s device was compromised in the TanStack supply-chain attack.
However, Mistral says a forensic investigation found that the affected data was not part of its core code repositories.
“None of our hosting services, managed user data, or any of our research and testing facilities have been compromised,” Mistral told BleepingComputer.
Earlier today, OpenAI also confirmed that the TanStack supply-chain attack affected the systems of two of its employees, who had access to a “limited collection of internal source code.”
A small amount of data was stolen from these repositories, but the investigation found no evidence that it was used in additional attacks.
OpenAI responded by rotating the code-signing certificates exposed in the incident and warning macOS users that they must update their OpenAI desktop applications before June 12, or the software may fail to launch and stop receiving updates.
