OpenAI has introduced an enhanced version of its o1 “reasoning” AI model, named o1-pro, in its developer API.
According to OpenAI, o1-pro uses more computing power than o1 to deliver “consistently better responses.” For now, access is limited to select developers: those who have spent at least $5 on OpenAI API services. And the pricing is steep. Very steep.
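For developers who clear that spend threshold, invoking the model should look like any other OpenAI API request. The sketch below uses the openai Python SDK and assumes o1-pro is served through the Responses API with the model string "o1-pro"; those details are assumptions for illustration, not confirmed by the announcement.

```python
# Minimal sketch: calling o1-pro via OpenAI's Python SDK.
# Assumption: o1-pro is exposed through the Responses API and the
# model identifier is "o1-pro"; check OpenAI's docs for the final details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Prove that the square root of 2 is irrational.",
)

print(response.output_text)  # the model's final answer text
```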
The company is charging $150 per million input tokens (~750,000 words) and $600 per million output tokens. That is twice what OpenAI charges for GPT-4.5 input, and ten times the price of the standard o1 model on both input and output.
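At those rates, even a single long request adds up quickly. The back-of-the-envelope calculator below is purely illustrative: the token counts are hypothetical, and only the $150 and $600 per-million figures come from OpenAI's listed prices.

```python
# Illustrative cost estimate at o1-pro's listed API prices.
INPUT_PRICE_PER_M = 150.0   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 600.0  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single o1-pro API call."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical request: a 10,000-token prompt and a 5,000-token response.
# (OpenAI's reasoning models also bill hidden reasoning tokens as output,
# so real costs can run higher than the visible answer suggests.)
print(f"${estimate_cost(10_000, 5_000):.2f}")  # -> $4.50
```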
OpenAI is banking on o1-pro’s superior performance to justify its high cost for developers.
“O1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems,” according to an OpenAI spokesperson. “After getting many requests from our developer community, we’re excited to bring it to the API to offer even more reliable responses.”
However, initial reactions to o1-pro, which has been available to subscribers of OpenAI's ChatGPT Pro plan since December, have been mixed. Users reported that the model struggled with Sudoku puzzles and was tripped up by simple optical-illusion jokes.
Additionally, OpenAI’s internal benchmarks from late last year revealed that while o1-pro performed slightly better than the standard o1 in coding and math tasks, its primary advantage was providing more consistent answers.