Have you ever tried to travel in time, at least faster than your own light cone allows?
Many years ago now, when I was at eLife, we tried to do some future mapping. The senior team all got into a room, and we had a washing line fixed up along the length of the room.
The task was to predict what the organisation would need to do to be successful in the next five years. The washing line represented that timeline and we had to physically pin our ideas at points along the line.
When you moved up and down the line you had to imagine that you were travelling through time, so if you were standing at some point in the future, instead of saying “in five years we will”, you would have to say “now that we have, I can look back and see what we have done to get here”.
It was a very powerful technique and it quickly raised a few interesting things.
- Our note pinning was pretty accurate for the next seven or eight months
- Our note pinning towards the end of the five years clearly had the big goals we had talked about
- BUT the key things that would need to happen between eight months out and five years were hopeless, both in how realistically they clustered and in how accurately they were described
I think at the moment in scholarly publishing we are all still trying to see just a few months out, but our clarity around what things might be like in five years is really low.
I used to say that not much changes over a five year timescale, but now I am more of the opinion that things will change more in the next five years than they have in any comparable period before.
The one VERY BIG thing that I think not enough attention is being paid to is the state of AI scientists. There are some people working hard, with access to the most cutting-edge models, who are making real progress. They are living in the future right now, and most of the rest of us don’t even know quite what these things can do.
I don’t know either, but I had a conversation last week with an eminent practitioner in this space, and it was clear from that conversation that these things are pretty much here for a set of tasks. They will dribble out at first, and the thing that will happen initially will be an inflation of knowledge. In an inflationary era, will each knowledge claim become less valuable, and will that force researchers to create even more in order to stay competitive? Publishers will see more papers and will imagine that this will lead to more revenue, but our previous economic era was mainly a bun-fight to steal papers from each other against a steady, predictable, fundable rate of increase in overall volumes.
Inflationary growth of knowledge will not be matched by funding pots, so the biggest intermediate risk is that not-untrue but LLM-generated knowledge pieces will get to market first, and steal the funding away from work that, for whatever reason, is less attuned to being produced in that way. That’s a big risk for diversification of what we know, and given that one of my favourite mental models is Ashby’s law of requisite variety, that means it’s a risk at some level for the world (I’m not giving a risk level here, this post is too speculative).
I actually started to write this post because I just wanted to list things I’ve seen over the last two weeks pointing to LLM Scientists. I’m going to point to them now. This is not exhaustive.
https://research.google/blog/improving-the-academic-workflow-introducing-two-ai-agents-for-better-figures-and-peer-review/ - they can just help with writing academic figures now!
Post from Timothy Gowers - LLMs can do classes of mathematics problems now - https://gowers.wordpress.com/2026/05/08/a-recent-experience-with-chatgpt-5-5-pro/
https://github.com/Imbad0202/academic-research-skills - a set of skills to help Claude be a research scientist.
A conference of work created by AI agents, reviewed by agents. (Get your agent to look at the video and summarise for you!) - https://www.youtube.com/watch?v=7pXqAeedqOo
https://www.valency.io - specifically not an LLM scientist, given that their homepage states “People-Centered, AI-Accelerated Research”, but VC-funded infrastructure to help your LLM scientist have faster access to the literature.
I already pointed to a few other Google LLM Science related announcements a few posts ago.
So AI scientists will be here, and most of us are not thinking far enough along the washing line of time about what that will mean for everything. A paper that does not have such a problem is the After Science paper (https://www.science.org/doi/10.1126/science.aec7650).