OpenAI’s o3 is doing some weird things. Source: SmokeAwayyy/X
OpenAI’s newly released reasoning model, o3, is full of surprises. One X account reported that the model casually started using “we” and “our” while thinking through responses, as if it were part of a team. Others claim it somehow knew their names without being told.
Some behaviors seem more like superpowers. For example, o3 can “enhance” any image to find tiny details, like something out of a spy flick. It can inexplicably figure out where in the world a photo was taken. And it can one-shot a 200×200 maze in just four minutes and 40 seconds.
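For scale: a 200×200 maze is a routine job for a classical algorithm, which is part of what makes a general-purpose chat model pulling it off notable. Here’s a minimal breadth-first-search sketch in Python (an illustration of the task, not how o3 actually reasons about mazes; the grid format is a made-up convention for this example):

```python
# Illustrative sketch, not o3's method: breadth-first search over a grid maze.
# '#' marks a wall; any other character is an open cell.
from collections import deque

def solve_maze(grid, start, goal):
    """Return a shortest path from start to goal as (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}  # maps each visited cell to the cell we came from
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Walk the parent links back to the start to recover the path.
            path, cell = [], (r, c)
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no route exists

maze = [
    "S..#",
    ".#.#",
    ".#..",
    "...G",
]
print(solve_maze(maze, (0, 0), (3, 3)))
```

This kind of solver finishes a 200×200 grid in milliseconds; o3 instead works the maze out step by step in text, which is why its four-minutes-and-change time is the impressive part.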
There’s also the question of why it sometimes says one thing but does another. For example, a nonprofit AI lab called Transluce told TechCrunch it caught o3 claiming to have run code on a 2021 MacBook Pro “outside of ChatGPT,” which of course can’t happen…right? 😅
What it means: As models get more capable, their outputs are bound to get weirder, too. They’ll display more emergent qualities: unpredictable behaviors they were never explicitly trained to exhibit. Why this happens isn’t fully understood, but it likely comes down to the sheer scale and complexity of the computation going on under the hood. And odds are, plenty more hidden features and quirks are waiting to be discovered.