A lot of that could be a skill and tooling issue. Reasoning models hallucinate a lot, so
you can't just raw-dog model output straight into your work. Products like Perplexity have kind of figured this out by finding sources first, then synthesizing those sources into the output. I'd expect hallucination rates to fall in the future, especially now that people are actively looking for the problem, hopefully even at open access journals.
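For what it's worth, a minimal sketch of that retrieve-then-synthesize pattern looks something like the following. This is just an illustration of the general idea, not Perplexity's actual pipeline; retrieve_sources and synthesize are hypothetical placeholders standing in for a real search API and a real model call.

    # Sketch of "find sources first, then synthesize" (retrieval-augmented generation).
    # retrieve_sources() and synthesize() are hypothetical stand-ins, not any real product's API.

    def retrieve_sources(query: str) -> list[dict]:
        # Hypothetical: a real system would query a search index or web search API here.
        return [{"url": "https://example.org/source", "text": "relevant snippet"}]

    def synthesize(query: str, sources: list[dict]) -> str:
        # Hypothetical: a real system would prompt a model with the query plus the
        # retrieved snippets and require the answer to cite those snippets.
        citations = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(sources))
        return f"Answer to {query!r}, grounded in:\n{citations}"

    if __name__ == "__main__":
        q = "What did the paper actually claim?"
        print(synthesize(q, retrieve_sources(q)))

The point is just that the model only writes against material that was fetched first, so every claim has something concrete behind it instead of coming straight out of the model's weights.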