Ian Mulvany

October 31, 2023

Some thoughts on Futurepub - October 2023

I attended FuturePub last night; in fact, I also spoke at it. I love these events, I've attended a ton of them over the years, and last night's was in association with the AI Fringe, so there was a nice admixture of different communities, and I got to chat to some folk I wouldn't otherwise have met. A really good event, many thanks to Digital Science.

Each speaker had five minutes to talk, and five minutes of Q&A, so it was rapid. 

Andy Duffield from Full Fact talked about using an AI pipeline to help the team prioritise what they choose to fact check. Their pipeline ingests text, converts audio to text, converts video transcripts to text, and looks at the frequency of mentions of fact-like statements (this is where the LLM goodness comes in). The thing they have built is described a bit more here.

Damien Posterino talked about how you can use LLMs to create a chat interface that helps folk from disadvantaged backgrounds get experience of the interview pipeline and process, with the goal of increasing their chances of making it into the workforce. The founder, Claudine Adelemi, has an amazing backstory. Her startup https://www.getearlybird.ai is where the magic happens. I had a conversation later in the evening about the state of hiring today. I've been so far removed from what the process feels like for a young person today: batteries of chat interfaces, online tests, many, many hoops before you get to speak to a human.

Daniel Hook talked about the trade-off between certainty and the utility of a summary. Daniel comes from a physics background. I'm reminded of the quote "the map is not the territory". In the creation of any synthesis, layers of bias come in. Humans do this all the time: what we choose to cite, where we choose to direct our attention. We are broadly used to this. One of the concerns Daniel raised is that the bias in the machine is hidden from us.

Nikos Tzagkarakis talked about hierarchical representation. I think the gist of this is that while LLMs have encoded perception really well, there are other hierarchies of understanding that contribute to our way of working and being in the world, and it might be possible to engineer layers into our systems that operate in this way, with the potential to improve them. I really liked his point that, in terms of perception, the vector spaces of LLMs now far exceed humans' ability to perceive, and on specific tasks they already outperform us, but equally there is no agency within the LLMs at all.

I spoke about implementing AI governance. My main point was that most of us have some levers of control: we can choose to act, we can choose to engage in this conversation, and we can work to think about how to use these tools responsibly and to allow our organisations to work with them responsibly. I strongly believe this is work we must lean into. I had good feedback on BMJ's current approach, and one great suggestion from the floor was to think about how to actively create space to hear and listen to the quieter voices inside the organisation.

Carl Miller spoke about AI and power. He talked about this being a moment of unbundling of power. A wild moment, a moment of potential creation and destruction, one in which those who have created these systems don't know what they have built. Perhaps this is even a Promethean moment. I know of no one better to speak to this theme than Carl, and he has a new podcast on the topic.


At the event I got to bump into the amazing Cat Allman, who has now joined Digital Science. 

I had a brief discussion with an analyst from Digital Science who is looking at metrics around rates of change of the surprise of co-author networks over time, as an indicator of possible inappropriate publishing behaviour. This seems both useful and obvious, and it frustrates me that we do not do this at scale in our industry.

Figshare founder Mark Hahnel whipped me good at table tennis: 11-4. I got some good shots in, but, I mean, 11-4.

It was a great event, if you get a chance you should make it to one in the future. 

Classifications from OpenAI:

categories:
- ai applications: fact-checking, interview pipeline, ai governance
- ai ethics: bias in ai, ai and power, ai governance
- sports: table tennis


About Ian Mulvany

Hi, I'm Ian - I work on academic publishing systems. You can find out more about me at mulvany.net. I'm always interested in engaging with folk on these topics, so if you have made your way here, don't hesitate to reach out if there is anything you want to share, discuss, or ask for help with!