Businesses can now train custom AI models from production workflows – no ML team required

Every query a business AI application processes, every correction a subject matter expert makes to its output — that interaction is training data. Most organizations never capture it. The production workflows companies have already built are generating a continuous signal that could improve AI models, and it's disappearing.
San Francisco-based Empromptu AI on Thursday launched Alchemy, a platform built on a specific premise: the AI applications businesses are already building generate training data, and most of it is wasted. The platform captures that signal automatically, routing outputs validated by subject matter experts back into a fine-tuning pipeline that improves the model over time. Businesses own the resulting model weights outright.
The approach sits apart from both RAG and conventional fine-tuning. RAG retrieves external context at inference time without changing the model weights. Conventional fine-tuning modifies the weights, but requires separately curated, labeled datasets and a dedicated ML pipeline. Alchemy does the latter automatically, using the business application itself as the data source.
Companies building on model APIs face three interrelated problems: costs that scale with usage, no ownership of the models their data helps improve, and limited ability to customize behavior for domain-specific tasks. Empromptu CEO Shanea Leven says those problems are widely felt but rarely addressed.
"Every customer, everyone I talk to, is like, how can I not be distracted? How will I protect my business? And they just don’t see the way," Leven told VentureBeat in an exclusive interview.
How Alchemy builds a model from a working application
Most custom model training methods require companies to separately collect, clean and label data before any fine-tuning can begin. Alchemy takes a different approach: the business application itself generates and cleans the training data.
The process runs on Empromptu's Golden Data Pipelines infrastructure in two stages. Before an application goes live, the business's data is cleaned, extracted and enriched so the application starts from structured input. Once it is operational, everything the application produces flows back through the pipeline, where subject matter experts inside the organization review and correct it. That validated output becomes the training data for the next iteration.
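Empromptu has not published the internals of Golden Data Pipelines, but the second stage it describes (log production output, route it to expert review, promote validated results to training data) can be sketched in a few lines of Python. All names here are hypothetical, not Empromptu's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    """One production prompt/response pair, plus any expert review."""
    prompt: str
    model_output: str
    expert_correction: Optional[str] = None  # filled in during SME review
    approved: bool = False

class FeedbackStore:
    """Accumulates production traffic and emits fine-tuning examples."""
    def __init__(self) -> None:
        self.interactions: list[Interaction] = []

    def log(self, prompt: str, model_output: str) -> int:
        """Record a live interaction; returns its index for later review."""
        self.interactions.append(Interaction(prompt, model_output))
        return len(self.interactions) - 1

    def review(self, index: int, correction: Optional[str] = None) -> None:
        """A subject matter expert approves an output, optionally correcting it."""
        item = self.interactions[index]
        item.expert_correction = correction
        item.approved = True

    def training_examples(self) -> list[dict]:
        """Only expert-validated outputs feed the next fine-tuning round;
        a correction supersedes the raw model output."""
        return [
            {"prompt": i.prompt,
             "completion": i.expert_correction or i.model_output}
            for i in self.interactions
            if i.approved
        ]
```

The key design point is the last method: unreviewed traffic never reaches the fine-tuning set, which is what keeps the "app cleans the data" claim from degenerating into training on the model's own mistakes.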
"The app, an AI system that customers already create, cleans the data," Leven said.
The resulting fine-tuned models are what Empromptu calls Professional Nano-Models: small, task-specific models built for a single workflow rather than general-purpose reasoning. Evals, guardrails and compliance controls live on the same platform, so governance travels with the training process. Customers own the model weights outright. Empromptu hosts and runs inference on its own infrastructure, but the weights are portable and exportable for a fee. The platform is model-agnostic, supporting Llama, Qwen and other base models.
The hard constraint is data volume. Early deployments start from a base model while the application accumulates enough production data for useful fine-tuning runs. Leven acknowledged the timeline without sugarcoating it. "Training the model will take time," she said.
Where Alchemy differs from existing fine-tuning services is who does the work
OpenAI's fine-tuning API and AWS Bedrock's custom models both offer enterprise fine-tuning. Both require organizations to supply separately prepared training datasets and to manage the fine-tuning process outside their application stack. Responsibility for data selection and model evaluation rests with the customer's ML team.
Alchemy's difference is the integration of that process. The training data is generated by the business application itself, so there is no separate data-preparation step and no ML expertise required. The application is the pipeline.
"Do I need to have Bedrock and put together another ML team to figure out how to tune the model and get all that infrastructure? No, anyone can do it now," Leven said.
The tradeoff is platform dependency. Alchemy only works within the Empromptu environment. Businesses seeking the same result on existing infrastructure would need to build the data capture, validation and fine-tuning loop themselves.
A behavioral health company cut session documentation time by up to 87% using Alchemy
Empromptu is targeting heavily regulated, data-rich verticals: healthcare, financial services, legal technology, sales forecasting and revenue operations. These are areas where general-purpose model output carries a high risk of error and where proprietary workflow data is concentrated.
Among the early adopters is behavioral health company Ascent Autism, which uses Alchemy to turn session data into clinical documentation and parent communications.
Facilitators feed session recordings, transcripts, timed notes and behavior metrics into the application, which generates structured clinical notes and personalized parent updates. That workflow previously required one to two hours of writing per session. With Alchemy trained on the same data, it now takes 10 to 15 minutes.
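The headline figure is consistent with those numbers. Taking the high end of the old workflow (two hours) against the high end of the new one (15 minutes), a quick sanity check:

```python
# Sanity check on the headline claim: two hours of writing per session
# reduced to 15 minutes with the fine-tuned model.
before_min, after_min = 120, 15
reduction = (before_min - after_min) / before_min
print(f"{reduction:.1%}")  # prints 87.5%
```

That matches the "up to 87%" framing; a session that previously took one hour rather than two would see a smaller (though still large) reduction.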
"Relying only on API-based models can quickly become expensive," Faraz Fadavi, founder and CTO of Ascent Autism, told VentureBeat. "Alchemy has given us a way to streamline workflows, train models with our data, and reduce costs while improving output quality over time."
Fadavi said the company saw tangible results immediately, with continued improvement as the system has been refined. Evaluation went beyond accuracy to include traceability back to session data and consistency with the company's clinical voice.
"We wanted a system that could learn our workflow and produce results that corresponded to how we actually work – not just text summarization," he said. Practical assessment: how much planning the promoters need, whether the output matches their voice and whether it reasonably reduces the time spent. Facilitators moved from rewriting produced notes to editing and quality checking them.
What this means for businesses
The data flywheel is real – but so is the platform lock-in:
Every workflow is a training opportunity. Businesses that capture and validate the output of their production AI systems compound those benefits over time. More usage generates more training signal, which produces more accurate domain-specific models, which produce better results, which produce cleaner training data for the next cycle.
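That compounding claim can be made concrete with a toy model. Every constant below is an illustrative assumption, not a figure from Empromptu: each cycle of production traffic yields validated examples, and each batch closes a fraction of the remaining accuracy gap, so gains are real but diminishing.

```python
def flywheel(cycles: int, examples_per_cycle: int,
             base_accuracy: float = 0.70,
             gain_per_1k: float = 0.01,
             ceiling: float = 0.95) -> tuple[float, int]:
    """Toy data-flywheel model: accuracy rises with accumulated
    validated examples but with diminishing headroom below `ceiling`.
    Constants are illustrative assumptions only."""
    accuracy, examples = base_accuracy, 0
    for _ in range(cycles):
        examples += examples_per_cycle
        # Each new batch closes a fixed fraction of the remaining gap.
        accuracy += (ceiling - accuracy) * gain_per_1k * examples_per_cycle / 1000
    return accuracy, examples
```

The shape, not the numbers, is the point: improvement never stops, but each cycle buys less than the last, which is why the early data-accumulation phase Leven describes matters most.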
Leven positions Alchemy as a third architectural option. Businesses have spent the past two years choosing between RAG for access to domain information and fine-tuning specialized models. Workflow-driven model training is a third path, combining the continuous improvement of fine-tuning with the ease of building inside a managed platform.
"Having that data moat is a very important currency," Leven said.