This post by Adam Day makes a strong case that detecting AI-generated text does not solve the fraudulent-paper problem, because AI text generators are now useful for genuine research.
https://clearskiesadam.medium.com/detecting-genai-beside-the-point-01f6c3d8e05c
I agree with Adam on this point, and I want to expand on two further ideas.
1) A strong and robust peer review system presents a large surface area with many opportunities for modification or improvement. I fear that we are somewhat locked in at present when it comes to scaling innovations in peer review, and to equipping every reviewer with the very best tools for the job they are doing.
2) Another future moment when detecting AI text generation may not matter is when these systems become sufficiently capable of creating new claims about the world, and of writing convincingly about those claims. We are closer than ever to this possibility.