Stanford Researchers Build OpenAI o1 Rival in 26 Minutes

A breakthrough in AI training: researchers at Stanford and the University of Washington developed s1, a model that surpasses OpenAI's o1-preview in math, in just 26 minutes and for only about $50.

A Breakthrough in AI Training

A team of researchers from Stanford and the University of Washington has developed an AI model, s1, that solves mathematical problems more accurately than OpenAI's o1-preview. They trained s1 in just 26 minutes, spending only about $50 on compute. The achievement challenges the common belief that developing competitive AI requires billions of dollars in investment.

How s1 Was Trained

To create s1, the researchers started with Qwen2.5, an open-source AI model from Alibaba Cloud. They initially assembled a pool of 59,000 math problems, but found that fine-tuning on a carefully filtered subset of just 1,000 questions delivered nearly identical accuracy while drastically reducing cost and training time.
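For illustration only, a minimal Python sketch of what such a curation step could look like appears below: keep problems that a weaker baseline model gets wrong, then sample across topics for variety. The record fields, the baseline_solver callback, and the filtering heuristics are assumptions made for this example, not the researchers' actual pipeline.

```python
# Hypothetical sketch of a curation step: shrink a large pool of math
# problems to a small, high-value subset. The record fields, the
# baseline_solver callback, and the heuristics are illustrative
# assumptions, not the researchers' exact pipeline.
import json
import random

def load_problems(path):
    """Read one JSON object per line: {"question": ..., "answer": ..., "domain": ...}."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def is_hard(problem, baseline_solver):
    """Difficulty filter: keep only problems a weaker baseline model gets wrong."""
    return baseline_solver(problem["question"]) != problem["answer"]

def curate(problems, baseline_solver, target_size=1000, seed=0):
    """Filter for difficulty, then sample evenly across domains for diversity."""
    hard = [p for p in problems if is_hard(p, baseline_solver)]
    by_domain = {}
    for p in hard:
        by_domain.setdefault(p.get("domain", "misc"), []).append(p)
    random.seed(seed)
    per_domain = max(1, target_size // max(1, len(by_domain)))
    sample = []
    for domain_problems in by_domain.values():
        random.shuffle(domain_problems)
        sample.extend(domain_problems[:per_domain])
    return sample[:target_size]
```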

The team used knowledge distillation, a process that enables smaller models to learn from more advanced ones by mimicking their reasoning. They trained s1 using solutions generated by Google’s Gemini 2.0 Flash Thinking Experimental model. This allowed s1 to develop effective problem-solving strategies without requiring extensive original training.
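Conceptually, this kind of distillation amounts to ordinary supervised fine-tuning on the teacher's outputs. The sketch below, built on the Hugging Face transformers and datasets libraries, fine-tunes a small open model on (question, reasoning, answer) records; the model name, file path, prompt format, and hyperparameters are placeholders rather than the team's actual configuration.

```python
# A minimal supervised fine-tuning sketch of the distillation step:
# train a small open model on (question, reasoning, answer) records
# produced by a stronger teacher model. Model name, file path, prompt
# format, and hyperparameters are placeholders, not the authors' setup.
import json

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # placeholder base model
TRACES_PATH = "distilled_traces.jsonl"   # one JSON record per line

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def load_traces(path):
    """Each line: {"question": ..., "reasoning": ..., "answer": ...} from the teacher."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def to_text(example):
    # Fold question, teacher reasoning, and final answer into one training string.
    return {"text": (f"Question: {example['question']}\n"
                     f"Reasoning: {example['reasoning']}\n"
                     f"Answer: {example['answer']}")}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=4096)

dataset = (Dataset.from_list(load_traces(TRACES_PATH))
           .map(to_text)
           .map(tokenize, remove_columns=["question", "reasoning", "answer", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="s1-sft",
                           num_train_epochs=3,
                           per_device_train_batch_size=1,
                           learning_rate=1e-5,
                           bf16=True),
    train_dataset=dataset,
    # Causal-LM collator: pads batches and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```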

Efficiency Through Hardware and Optimization

The model was trained on 16 Nvidia H100 GPUs, keeping total computation time short. In addition, s1 uses test-time scaling, a technique that extends the model's reasoning phase before it commits to an answer. By appending the word "Wait" whenever the model tries to wrap up, the researchers pushed s1 to re-examine its reasoning, improving accuracy.
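In code, the "Wait" trick can be approximated by catching the moment the model tries to finish its reasoning and forcing it to continue. The sketch below assumes a plain-text marker ("Final Answer:") signals the end of reasoning; the marker, the model name, and the generation settings are illustrative assumptions, not the published implementation.

```python
# A rough sketch of test-time scaling via the "Wait" trick: when the model
# tries to finish its reasoning, cut off the attempted answer, append "Wait,"
# and let it keep thinking, up to a fixed number of extensions. The marker
# string, model name, and prompt format are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"  # placeholder
END_OF_THINKING = "Final Answer:"        # assumed marker that reasoning has ended

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

def solve_with_extra_thinking(question, max_extensions=2, max_new_tokens=512):
    """Generate a reasoning trace, forcing the model to reconsider before answering."""
    prompt = f"Question: {question}\nReasoning:"
    for _ in range(max_extensions):
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        continuation = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                                        skip_special_tokens=True)
        if END_OF_THINKING not in continuation:
            return prompt + continuation  # model never tried to finish; stop here
        # Discard the attempted answer and force another round of reasoning.
        prompt += continuation[: continuation.rindex(END_OF_THINKING)] + " Wait,"
    # Out of extensions: let the model produce its final answer.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return prompt + tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                                     skip_special_tokens=True)

# Example usage:
# print(solve_with_extra_thinking("What is the sum of the first 100 positive integers?"))
```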

Implications for AI Development

Test results showed that s1 outperformed OpenAI's o1-preview by up to 27% on competition math questions, making it a cost-effective yet highly capable alternative. Unlike the models from giants such as Google and OpenAI, which demand massive financial and hardware resources, s1 suggests that small teams can build competitive AI systems with minimal investment.

Ethical and Industry Concerns

However, the experiment has sparked controversy over intellectual property and acceptable model usage. In January 2025, OpenAI accused DeepSeek of using its outputs without authorization to train a rival model. Similarly, Google's Gemini API terms explicitly prohibit using its outputs to develop competing AI models.

This shift toward affordable and accessible AI models could disrupt the industry. If large tech companies fail to safeguard their proprietary AI training methods, the cost of AI could plummet, challenging the dominance of corporations that rely on expensive infrastructure.

So far, Google has not responded to inquiries about the s1 experiment. However, if smaller teams can continue developing advanced AI with minimal resources, the future of AI development may shift toward more open and decentralized innovation.
