3 Comments

Regarding the publication on the scheduling-horizon problem: these are precisely the kinds of problems that LLMs can't work on. They can't generalize, understand logic, derive solutions, or propose theorems (to be fair, most PhD candidates can't either).

Wolfram Alpha is closer to that than any LLM running on interpolated data. Optimization solvers are better at making decisions and could be seen as a higher level of artificial intelligence. That's not my idea, of course, but Prof. Powell's (https://castle.princeton.edu/the-7-levels-of-ai/).

Anyway, my two cents.

As always, a great read.

Best,

Dario, PhD candidate

Thanks for sharing your thoughts, Dario—this was an interesting read. I agree that, as of now, LLMs can’t truly generalize, understand logic, derive new solutions, or propose theorems. My main curiosity is whether a model like o3, if given a carefully defined mathematical formulation and enough compute time, might stumble upon an idea that a human researcher could then explore and refine. It’s still too early to say for sure, especially since o3 isn’t publicly available yet, but I wouldn’t rule out this possibility.

Well, that would be nice. As of now, my experience with LLMs is that even asking them to interpret theorems yields very sloppy answers. The contextualization is still missing, and they can't do true numerics, only interpolation of phrases. If LLMs incorporated solvers (or vice versa), that would be a game changer.
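
To make that last point concrete, here is a minimal sketch of the division of labor such an integration implies. It uses Python with SciPy purely as an assumption, and the toy LP, along with the idea that an LLM formulated it from natural language, is an illustrative stand-in. The point is the split: the LLM would translate the request into a formal model, and the solver would do the exact numerics that interpolation can't.

```python
# A hypothetical "LLM + solver" pipeline: the LLM's only job is to turn a
# natural-language request into a formal model; the solver does the exact
# numerics. The LP below stands in for whatever model the LLM would emit.
from scipy.optimize import linprog

# Imagined LLM output for: "maximize 3x + 5y subject to x + 2y <= 14,
# 3x - y >= 0, x - y <= 2, with x, y >= 0".
c = [-3, -5]                        # linprog minimizes, so negate to maximize
A_ub = [[1, 2], [-3, 1], [1, -1]]   # constraints rewritten as A_ub @ x <= b_ub
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(f"optimal (x, y) = {res.x}, objective = {-res.fun}")  # (6, 4), 38
```

The solver's answer is exact and verifiable, which is exactly what phrase interpolation can't provide.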
