In February I had the great pleasure of participating in a small workshop at EMBO in Heidelberg to discuss the role LLMs may play in the future of single-cell biological science. It was held under the Chatham House Rule, and over the two days we covered an extensive set of themes. A paper with a structured write-up should be coming out soon.
One of the things that blew me away was a demo by one of the workshop participants of data-to-paper - a system they built that gets two instances of LLMs to work together in an agent-based framework to write an academic paper: coming up with the hypothesis, writing the analysis code, running the analysis, and writing the paper. The code is now available and you can check it out here - https://github.com/Technion-Kishony-lab/data-to-paper. It really works, and though the papers are not top-flight science, the fact that you can do this at all is frankly astonishing.
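To make the pattern concrete, here is a minimal sketch of that kind of performer/reviewer loop. To be clear, this is my own toy illustration and not data-to-paper's actual code; the model name, the prompts, and the `call_llm` helper are all assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_llm(system_prompt: str, messages: list[dict]) -> str:
    """Send one chat-completion request and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption; any chat model works
        messages=[{"role": "system", "content": system_prompt}, *messages],
    )
    return response.choices[0].message.content

def write_with_review(task: str, max_rounds: int = 5) -> str:
    """Have a 'performer' LLM draft text and a 'reviewer' LLM critique it."""
    performer = "You are a scientist. Write the requested text."
    reviewer = ("You are a critical reviewer. If the draft is acceptable, "
                "reply with exactly APPROVED; otherwise list concrete fixes.")
    draft = call_llm(performer, [{"role": "user", "content": task}])
    for _ in range(max_rounds):
        feedback = call_llm(reviewer, [{"role": "user", "content": draft}])
        if feedback.strip() == "APPROVED":
            break
        # Feed the reviewer's critique back to the performer for revision.
        draft = call_llm(performer, [
            {"role": "user", "content": task},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": f"Revise per this review:\n{feedback}"},
        ])
    return draft
```

The real system is of course far more elaborate, with many stages and much richer prompts, but the core conversational loop between the two instances looks something like this.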
This week I was also reading this blog post about using LLMs to answer legal questions about large regulatory documents - https://hugodutka.com/posts/answering-legal-questions-with-llms/. The post is excellent, and it also makes available the very detailed prompt they used - https://gist.github.com/hugodutka/6ef19e197feec9e4ce42c3b6994a919d.
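For a rough sense of the shape of that kind of approach - and this is just my own sketch of "put the document and the question into one long, carefully structured prompt", not Hugo's actual pipeline; the prompt wording, file name, and model choice are assumptions - it looks something like:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_TEMPLATE = """You are a legal analyst. Answer the question using only
the regulation below. Cite the specific articles you rely on.

<regulation>
{document}
</regulation>

Question: {question}"""

def answer_legal_question(document: str, question: str) -> str:
    """Ask one large-context model a question about the full document text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; any large-context chat model would do
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(
            document=document, question=question)}],
    )
    return response.choices[0].message.content

# Example usage (hypothetical file):
# text = open("regulation.txt").read()
# print(answer_legal_question(text, "What are the reporting deadlines?"))
```

The interesting part, as the post makes clear, is how much of the work lives in the detail of the prompt itself rather than in the surrounding code.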
For data-to-paper you can see some of the prompting interspersed with the Python code here - https://github.com/Technion-Kishony-lab/data-to-paper/blob/main/data_to_paper/data_to_paper/research_types/scientific_research/reviewing_steps.py.
It's really worth looking at both projects to get a sense of where we are right now. In particular, reading through the prompts gives a sense of how much can be done simply by instructing the LLM in plain language.
What strikes me about both of these projects is that a very high level of effort is still required to get something that is even marginally useful. Too much effort to be radically disruptive, but a level of disruption is certainly available now that was not available before.
I think we have now gone from "let me get these amazing tools (LLMs) to do a trivial task with almost no effort from me" to "let me get these amazing tools to nearly do an `almost impossible to code` task with a lot of heavy lifting from me, without it quite working yet".
The tools are improving, as is our understanding of how to use them, so we will either see them lift the capability of existing systems with some heavy lifting on our part, or they may just get to the place where they can do the almost impossible task with almost no effort from the operator.