Distributional raises $19M to automate AI model and app testing

Distributional, an AI testing platform founded by Intel’s former GM of AI software, Scott Clark, has closed a $19 million Series A funding round led by Two Sigma Ventures.

Clark says Distributional was inspired by the AI testing problems he ran into while applying AI at Intel and, before that, by his work at Yelp as a software lead in the company's ad-targeting division.

“As the value of AI applications continues to grow, so do the operational risks,” he told TechCrunch. “AI product teams use our platform to proactively and continuously detect, understand, and address AI risk before it introduces risk in production.”

Clark came to Intel by way of an acquisition.

In 2020, Intel acquired SigOpt, a model experimentation and management platform that Clark co-founded. Clark stayed on, and in 2022 he was appointed VP and GM of Intel’s AI and supercomputing software group.

At Intel, Clark says that he and his team were frequently hamstrung by AI monitoring and observability issues.

AI is non-deterministic, Clark pointed out, meaning it can generate different outputs given the same input. Add the fact that AI models have many dependencies (such as software infrastructure and training data), and pinpointing bugs in an AI system can feel like searching for a needle in a haystack.
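
To make that concrete, here is a minimal toy sketch (in Python, with invented token probabilities) of why sampling-based models behave this way: each token is drawn from a probability distribution, so repeated runs on identical input can diverge.

```python
import random

# Toy next-token distribution for the prompt "The sky is".
# A real model samples from a distribution like this at every step,
# so two runs on the same input can produce different outputs.
# (The tokens and probabilities here are invented for illustration.)
NEXT_TOKEN_PROBS = {"blue": 0.60, "clear": 0.25, "falling": 0.15}

def sample_completion(prompt: str, temperature: float = 1.0) -> str:
    # Temperature rescales the distribution: p ** (1 / T) equals
    # softmax(log p / T) up to normalization, and random.choices
    # normalizes the weights for us.
    weights = [p ** (1.0 / temperature) for p in NEXT_TOKEN_PROBS.values()]
    token = random.choices(list(NEXT_TOKEN_PROBS), weights=weights, k=1)[0]
    return f"{prompt} {token}"

# Same input, three calls: the outputs can differ from run to run.
for _ in range(3):
    print(sample_completion("The sky is"))
```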

According to a 2024 Rand Corporation survey, over 80% of AI projects fail. Generative AI is proving to be a particular challenge for companies, with a Gartner study predicting that a third of deployments will be abandoned by 2026.

“It requires writing statistical tests on distributions of many data properties,” Clark said. “AI needs to be continuously and adaptively tested through the lifecycle to catch behavioral change.”

Clark created Distributional to try to abstract away this AI auditing work somewhat, drawing on techniques he and SigOpt’s team developed while working with enterprise customers. Distributional can automatically create statistical tests for AI models and apps to a developer’s specifications, and organize the results of these tests in a dashboard.
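
As a rough sketch of what a statistical test on a distribution of a data property can look like, the snippet below compares one output property (response length) between a hypothetical baseline run and a production run using a two-sample Kolmogorov-Smirnov test. This illustrates the general technique Clark describes, not Distributional's actual product or API, and the data values are invented.

```python
from scipy.stats import ks_2samp

# Hypothetical response lengths (in tokens) collected from a baseline
# run and a later production run of the same AI application.
baseline_lengths = [42, 38, 51, 45, 40, 47, 39, 44, 50, 43]
production_lengths = [61, 58, 70, 64, 59, 66, 57, 63, 69, 62]

# Two-sample Kolmogorov-Smirnov test: are the two samples plausibly
# drawn from the same underlying distribution?
statistic, p_value = ks_2samp(baseline_lengths, production_lengths)

if p_value < 0.05:
    print(f"Behavioral change detected (KS={statistic:.2f}, p={p_value:.4f})")
else:
    print("No significant shift detected in this property")
```

A distribution-level test like this can catch shifts that per-request outlier monitoring misses, which is the gap Clark points to later in the piece.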

From that dashboard, Distributional users can work together on test “repositories,” triage failed tests, and recalibrate tests if and where necessary. The entire environment can be deployed on-premises (although Distributional also offers a managed plan), and integrated with popular alerting and database tools.

“We provide visibility across the organization into what, when, and how AI applications were tested and how that has changed over time,” Clark said, “and we provide a repeatable process for AI testing for similar applications by using sharable templates, configurations, filters, and tags.”

AI is indeed an unwieldy beast. Even the top AI labs have weak risk management. A platform like Distributional’s could ease the testing burden, and perhaps even help companies achieve ROI.

At least, that’s Clark’s pitch.

“Whether instability, inaccuracy, or the dozens of other potential challenges, it can be hard to identify AI risk,” he said. “If teams fail to get AI testing right, they risk AI applications never making it into production. Or, if they do productionalize, they risk these applications behaving in unexpected and potentially harmful ways with no visibility into these issues.”

Distributional isn’t first to market with tech to probe and analyze an AI’s reliability. Kolena, Prolific, Giskard, and Patronus are among the many AI experimentation solutions out there. Tech giants such as Google Cloud, AWS, and Azure also offer model evaluation tools.

So why would a customer choose Distributional?

Well, Clark asserts that Distributional — which is on the cusp of commercializing its product suite — delivers a more “white glove” experience than many. Distributional takes care of installation, implementation, and integration for clients, and provides AI testing troubleshooting (for a fee).

“Monitoring tools often focus on higher-level metrics and specific instances of outliers, which gives a limited sense of consistency, but without insights on broader application behavior,” Clark said. “The goal of Distributional’s testing is to enable teams to get to a definition of desired behavior for any AI application, confirm that it still behaves as expected in production and through development, detect when this behavior changes, and figure out what needs to evolve or be fixed to reach a steady state once again.”

Flush with new cash from its Series A, Distributional plans to expand its technical team, with a focus on the UI and AI research engineering sides. Clark said that he expects the company’s workforce to grow to 35 people by the end of the year, as Distributional embarks on its first wave of enterprise deployments.

“We have secured significant funding in the course of just a year since we were founded, and, even with our growing team, are in a position to capitalize over the next few years on this massive opportunity,” Clark added.

Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures also participated in Distributional’s Series A. To date, the San Francisco-based startup has raised $30 million.
