This post (https://onezero.medium.com/how-to-recognize-when-tech-is-leading-us-down-a-slippery-slope-747116da2de) by Clive Thompson (https://clivethompson.medium.com) on technology slippery slopes is an excellent read.
As technologists we have a duty of care to think about the implications of the tools that we are creating, and what I like about this post is that it gives us a useful mental tool to help frame those conversations.
What is this product or service making easier? What new behaviours are being enabled by this product or service? What effects will those behaviours have on society, if they trend towards a limit?
The specific slippery slope the article digs into is facial recognition technology: making it so easy to identify people that the assumption of privacy gets stripped from the public sphere.
What are the affordances that we might look at in scholarly comms? It's easy to say that technology that augments peer review might be a candidate. We can imagine a future in which selection for publication is largely automated based on data points that bake in existing biases. Peer review, and the management of peer review, is time consuming, so it looks like a candidate; but changing behaviours around peer review does not require a technology solution, so it's probably a false candidate in terms of this argument.
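To make that worry concrete, here is a deliberately toy sketch, with every feature name and weight invented purely for illustration, of how an automated screening score trained on past editorial decisions could carry existing biases straight into selection:

```python
# Hypothetical sketch only: an automated "desk screening" score whose features
# are proxies learned from historical accept/reject decisions. The proxies
# (prestige, prior citations) import the biases of those past decisions.
from dataclasses import dataclass

@dataclass
class Submission:
    institution_prestige: float   # 0..1, a ranking-derived proxy (biased by construction)
    author_prior_citations: int   # favours established, well-funded groups
    topical_fit_keywords: int     # crude proxy for "fit" with past publications

def screening_score(s: Submission) -> float:
    """Weights fitted to past editorial decisions would reflect past biases."""
    return (
        0.5 * s.institution_prestige
        + 0.3 * min(s.author_prior_citations / 1000, 1.0)
        + 0.2 * min(s.topical_fit_keywords / 10, 1.0)
    )

# Two equally sound papers: the one from a prestigious, highly cited lab wins.
established = Submission(institution_prestige=0.9, author_prior_citations=5000, topical_fit_keywords=3)
newcomer = Submission(institution_prestige=0.2, author_prior_citations=40, topical_fit_keywords=3)
print(screening_score(established), screening_score(newcomer))
```

Nothing in that sketch is malicious; the bias arrives simply because the easy-to-automate data points are the ones that encode history.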
Perhaps automation of literature reviews is a more likely candidate here. Being able to summarise across the literature at scale, and to provide a quick gist of what a paper is about, could really help the research process, but what behaviours will that incentivise? More reading? Perhaps that's not such a bad thing.
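As a rough illustration only, and not a claim about how any real tool works, a "gist" could be as crude as extractive sentence ranking; serious tools would use trained summarisers, but the behaviour being made easier has the same shape:

```python
# Toy sketch of "gisting" a paper: score sentences by word frequency and keep
# the top few. Purely illustrative; real systems would be far more capable.
import re
from collections import Counter

def gist(text: str, max_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    keep = set(ranked[:max_sentences])
    return " ".join(s for s in sentences if s in keep)

abstract = (
    "We study peer review at scale. Review quality varies widely between venues. "
    "We propose a simple model of reviewer load. The model predicts declining quality as load rises."
)
print(gist(abstract))
```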
Another area is tooling to support expert advice, with clinical decision support as one specific example. If we get to a situation where clinicians are making faster, but less considered, decisions, is there a risk there? It's probably not too risky, as this is a highly regulated domain of activity, and behaviours in this domain are unlikely to be adopted across society at large.
We may be in an industry too small to have to worry about this at the widest scale, but where there are inbuilt biases in our systems, even at a smaller scale, there is every reason to stay alert to where those biases may get locked in through the adoption of tooling that makes some of our processes easier.