Ian Mulvany

September 7, 2022

When will researchers use AI to manipulate images?


(Broken ceramic sample after a compression test, via Sylvain Deville, generated by OpenAI)

Inspired by this Twitter thread - https://twitter.com/DevilleSy/status/1567412785897676800?s=20&t=CcZeMymoXFtaRliENhytUA - I was wondering what it will look like when researchers start to use the new image-generation tools to manipulate or fabricate data.

I don't have a good way to get data on whether this is happening yet, but I would guess it's not happening much at the moment, as these tools have only been generally available for two or three months.

Crude image manipulation is already rife, and it is often done by copying and pasting. I do think these tools will make those kinds of manipulations much easier (see https://old.reddit.com/r/StableDiffusion/comments/wyduk1/show_rstablediffusion_integrating_sd_in_photoshop/ for an example of how that might work).

I think they will certainly be used in generating papers for paper mills, but the underlying economic driver for creating fake papers is not going to change much as a result of these tools existing. They may make it easier for fake papers to get past image-manipulation checks, but in any case we don't have many of those checks in place at the moment.

I think it will be more about researchers who are already doing things they should not; they will find that they can get away with it a little more easily than before.

So I see it as adding to a small-scale general erosion, rather than a catastrophic change, in how we deal with misconduct in science. When researchers publish work that can't be reproduced, and that work becomes critical to a main line of future investigation, the fraud will emerge through the inability to replicate the original results. It may take a bit longer to uncover this kind of fraud, but I don't think it will be a huge problem in the long run.