FutureHouse, an Eric Schmidt-backed nonprofit that aims to build an “AI scientist” within the next decade, has launched its first major product: a platform and API with AI-powered tools designed to support scientific work.
Many startups are racing to develop AI research tools for the scientific domain, some backed by enormous amounts of VC funding. Tech giants seem bullish on AI for science, too. Earlier this year, Google unveiled the “AI co-scientist,” an AI the company said could aid scientists in creating hypotheses and experimental research plans.
The CEOs of AI companies OpenAI and Anthropic have asserted that AI tools could massively accelerate scientific discovery, particularly in medicine. But many researchers don’t consider AI today to be especially useful in guiding the scientific process, in large part due to its unreliability.
FutureHouse on Thursday released four AI tools: Crow, Falcon, Owl, and Phoenix. Crow can search scientific literature and answer questions about it; Falcon can conduct deeper literature searches, including of scientific databases; Owl looks for previous work in a given subject area; and Phoenix uses tools to help plan chemistry experiments.
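For readers curious what calling such a platform might look like, here is a minimal sketch of a REST-style query to a literature-search agent like Crow. The endpoint URL, payload fields, and response shape are illustrative assumptions for the sake of the example, not FutureHouse’s documented API.

```python
import requests

# Hypothetical endpoint and payload; FutureHouse's actual API may differ.
API_URL = "https://api.example.com/v1/agents/crow"  # placeholder URL
payload = {
    "query": "What is known about CRISPR off-target effects in T cells?",
    "max_sources": 10,  # assumed parameter: cap the number of papers considered
}

response = requests.post(API_URL, json=payload, timeout=60)
response.raise_for_status()

result = response.json()
# Assumed response shape: an answer string plus the sources it cites.
print(result.get("answer"))
for source in result.get("sources", []):
    print("-", source.get("title"))
```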
“Unlike other [AIs], FutureHouse’s have access to a vast corpus of high-quality open-access papers and specialized scientific tools,” writes FutureHouse in a blog post. “They [also] have transparent reasoning and use a multi-stage process to consider each source in more depth […] By chaining these [AI]s together, at scale, scientists can greatly accelerate the pace of scientific discovery.”
But tellingly, FutureHouse has yet to achieve a scientific breakthrough or make a novel discovery with its AI tools.
Part of the challenge in developing an “AI scientist” is anticipating an untold number of confounding factors. AI might come in handy in areas where broad exploration is needed, like narrowing down a vast list of possibilities. But it’s less clear whether AI is capable of the kind of out-of-the-box problem-solving that leads to bona fide breakthroughs.
Results from AI systems designed for science so far have been mostly underwhelming. In 2023, Google said around 40 new materials had been synthesized with the help of one of its AIs, called GNoME. Yet an outside analysis found not a single one of the materials was, in fact, net new.
AI’s technical shortcomings and risks, such as its tendency to hallucinate, also make scientists wary of endorsing it for serious work. Even well-designed studies could end up being tainted by misbehaving AI, which struggles with executing high-precision work.
Indeed, FutureHouse acknowledges that its AI tools — Phoenix in particular — may make mistakes.
“We are releasing [this] now in the spirit of rapid iteration,” the company writes in its blog post. “Please provide feedback as you use it.”