Ian Mulvany

October 30, 2023

Implementing AI Governance

On Monday the 30th of October I’m taking part in a FuturePub - AI and Research event, where I’ll be leading an open discussion on the topic of implementing AI governance. There is an EventBrite page with some more details.

I put in the proposal when I saw the event pop up for a few reasons, the most direct being that at BMJ we have implemented a governance process, and a governance group, over the last few months, and we think it’s working pretty well for us. (I’ll touch on some of the other reasons anon.)

The other speakers have now been announced, and if we think about any given debate in terms of the level of abstraction at which it takes place, the other speakers are, in my mind, clearly operating at a level above what I originally had in mind when I first put the proposal in.

When it comes to technology, I’m an incrementalist. I’ve gone on a fair old journey over the last twenty years, and have been in the weeds of many systems, at many levels of abstraction. Now might not be the best time to be an incrementalist, given the pace of change, and given the calls to draw parallels between what is happening with GenAI and all of the other major techno-socio revolutions from the past that we allude to in moments like this, but I’m going to hew to that position nonetheless. The reason is that I have only what is in front of me, and what I can affect.

The debate today is hot. There are a lot of hypotheticals running around, there is a lot of hype, and there is a new computing capability that we have simply never seen before, with such a wide range of uses and such widespread availability to the general public. Add to this that this kind of technology is emerging in societies that have been telling stories about non-human entities with great powers, and the consequences of such, for as long as we have been telling stories. It is emerging in a period where moderate, but visible, flows of wealth have become concentrated in a few technology hubs, and where the general competence of government to regulate has faced significant suspicion.

With all of this, what then can we mean by talking about governance, and what are we governing for, or against? Just today I saw this paper (1): https://www.cell.com/patterns/fulltext/S2666-3899(23)00241-6, which reviewed over 200 instances of AI guidance and policy written over the last few years.

Getting a clean debate about this, let alone a resolution, could be almost impossible.

So that brings me back to my incrementalism. I’m not involved in working towards AGI (though I have met those who are). I’m not in government setting policy (though I am sure there will be those involved, or closely so, in attendance). I do, however, have a position of no small influence within the organisation that I am a part of, and some small influence within the wider community in which that organisation works. Those facts give me both levers and obligations. I have some levers of control, and I have an obligation to apply those judiciously. In fact, most of the folk attending the event have privileged obligations, in contrast to the general public, by virtue of a probably higher level of knowledge about what these systems are, and probably some level of involvement in the work around them.

So my message at the event is going to be simple. I’m going to briefly describe what we have done at BMJ. I’m going to ask folk to think about what control they have: what is ready to hand for them? What are they doing with those levers of control, and what do they think they should be doing? I want to get feedback on what we are doing at BMJ. I want to use the opportunity of a public forum to test my own thinking on this topic.

On to the anon … the other reason that I wanted to propose this session is that, whether the threats of harm are oversold or not, and whether the shadow cast by the threat of bad actors is as real and threatening as I have been reading or not, there are clear benefits that we can drive for ourselves, our organisations, and the missions those organisations serve, if we can understand how to work well with these tools. Indeed, if the threats I’ve alluded to in this paragraph are real, then we also have a real obligation to be on the side of good.

Organisations tend to be inherently resistant to change, as their decisions typically necessitate consensus from multiple parties. Adopting a cautious stance tends to appease the majority, although this caution is frequently excessive. Given the current climate of extensive hype and uncertainty, I believe that participating in open, constructive discussions represents the optimal path forward. Such dialogue enables open societies to reach well-informed decisions, which is precisely why I am eager to engage in conversations of this nature.

A side note about this blog

James Butcher was very kind to drop a link to this blog in his newsletter, and I noticed that I’ve acquired a few dozen new subscribers. I’ve been blogging consistently inconsistently since 2006. (You can go back and see some of my much older posts at partiallyattended.) I mostly blog for myself, as the act of writing helps me think. If you find it interesting, and you want to let me know, or you want to comment on anything, you can email me.

A few years ago I moved over to blogging on the Hey email platform, which means I can post a blog post from my email client. I lost tagging, and a nice looking blog, but reduced the friction of posting so much that I post a lot more than before, and I value that a lot more than how the blog looks, or how discoverable it is.

What you can expect:

  • A fairly irregular posting schedule, somewhere between 30 and 50 posts per year.
  • Posts about technology and publishing.
  • Mostly first drafts - had I the time, I’d work on these more, but I don’t.
  • Occasional posts about In Our Time, the radio series.
  • Occasional link posts.
  • Lots of typos.

Classifications from OpenAI:

  • event and conference
  • ai governance
  • technology
  • incrementalism
  • public debate
  • ai ethics
  • organisational change

  1. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance  ↩︎

About Ian Mulvany

Hi, I'm Ian - I work on academic publishing systems. You can find out more about me at mulvany.net. I'm always interested in engaging with folk on these topics, so if you have made your way here, don't hesitate to reach out if there is anything you want to share, discuss, or ask for help with!