Ian Mulvany

January 15, 2022

AI - oh my, interesting links from week 2 of 2022

# 2022 week2 - interesting links


Welcome to week two of 2022, here are some things across the web that caught my attention.

Congratulations to Tasha Mellins-Cohen, who has been appointed project director for Project COUNTER. What a great appointment!

There are a lot of preprint services out there (https://en.wikipedia.org/wiki/List_of_preprint_repositories), and BMJ plays a critical role with medRxiv.

This nice article in Scientific American (https://www.scientificamerican.com/article/arxiv-org-reaches-a-milestone-and-a-reckoning/) gives an overview of the Ur-arXiv, the physics arXiv:

> This outsize role testifies to arXiv’s success but also shows how the repository’s problems are not just its own—they are science’s, too.

> In spite of its success, arXiv has continuously struggled with stability and resources. The server has undergone upheaval, moving its location within Cornell. Currently, there is funding for only a handful of staff to help volunteer moderators handle up to 1,200 daily submissions. “We’re an old classic car, and the rust has finally come through, and the pistons are wearing out,” Sigurdsson says. “We are understaffed and underfunded—and have been for years.”

Martin Paul Eve is spot on in thinking about building a global collection of open access articles. We are still on a journey, but what comes next? How do we locate that library in a platform that we can build on top of? OpenAlex starts to give us a hint, I think. https://eve.gd/2022/01/13/open-access-is-building-a-one-time-shared-international-library-collection/

Tl;dr papers (https://www.tldrpapers.com/) is great, as in it actually works much of the time. It uses GPT-3 to create lay summaries of research paper abstracts. I got early access to GPT-3 last year and noodled around with it, but alas didn’t have the bandwidth to build anything on top of it. The ability of these language models to mirror our ability to summarise, and to write ourselves back to ourselves, is well beyond the uncanny valley. We are well across that now and heading up the winding mountain path towards the peaks of indeterminate weirdness and potential human irrelevance (slightly kidding here). We should all be thinking about how to harness these things for good.
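The core of a tool like this is really just prompt construction: wrap the abstract in an instruction asking for a plain-language summary, and send it to the completion endpoint. Here is a minimal sketch of the prompt-building step; the wording and function name are my own assumptions, not tl;dr papers' actual prompt.

```python
# Sketch of how one might prompt a GPT-3-style model for a lay summary.
# The instruction text below is an assumption for illustration only.

def build_tldr_prompt(abstract: str) -> str:
    """Wrap a research abstract in an instruction asking the model to
    explain it for a general reader in plain language."""
    return (
        "Summarise the following research abstract for a general reader, "
        "avoiding jargon, in two sentences.\n\n"
        f"Abstract: {abstract.strip()}\n\n"
        "Lay summary:"
    )

prompt = build_tldr_prompt(
    "We present a transformer-based model that attains state-of-the-art "
    "results on abstractive summarisation benchmarks."
)
# This string would then be sent to the provider's text-completion API;
# the returned completion is the lay summary shown to the reader.
```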

Ok, that was a bit of AI optimism. For a counterpoint, this post http://rodneybrooks.com/predictions-scorecard-2022-january-01/ is amazing. Rodney Brooks scores his predictions for the future, focussed on self-driving cars, robots, AI, and space. How he builds up his predictions is a masterclass in rational thinking about the potential future effects of technology. This is an essential read for technologists. Some choice quotes:

> Perhaps this is driven by the perceptions of what transformer based natural language systems can do. They are not intelligent but they can fool an awful lot of the people an awful lot of the time.

> I have often stated that I think the field of AI, despite the great practical successes recently of Deep Learning, is probably a few hundred years away from where most people think it is. We’re still back in phlogiston land, not having yet figured out the elements, including oxygen.

This is a charming piece about, of all things, blinking cursors: https://www.inverse.com/innovation/blinking-cursor-history. It contains this nugget:

> But the designer, Wozniak, made a trade-off that blinking characters were more important than lowercase letters.

Imagine that next time you have to make a feature trade-off in your product.

I’ve been enjoying some of the videos in this course on visual group theory. A Rubik’s cube is a group, my friends.
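The claim is easy to see in miniature: cube moves are permutations, composing moves is the group operation, and the group axioms fall out. A toy sketch (modelling just a single quarter turn as a 4-cycle, not the full cube):

```python
# Toy illustration of the group structure behind the Rubik's cube:
# moves are permutations, and composing moves is the group operation.
# This models one face's quarter turn as a 4-cycle of sticker positions.

def compose(p, q):
    """Apply permutation q first, then p (tuples mapping index -> index)."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Return the permutation that undoes p."""
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
quarter_turn = (1, 2, 3, 0)  # cycles four stickers around a face

# Closure and identity: four quarter turns return the face to start.
result = identity
for _ in range(4):
    result = compose(quarter_turn, result)
assert result == identity

# Inverses: every move can be undone, just like on the real cube.
assert compose(quarter_turn, inverse(quarter_turn)) == identity
```

The full cube works the same way, just with permutations of 48 movable stickers instead of 4.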


Ok, that’s the job lot for this week. Many other things flew past me, and I have innumerable browser tabs open. I need a robot to read them for me, tl;dr them for me, and write my blog post for me.

One last thing: Wordle has of course obsessed me, and as of this morning I achieved a balanced distribution: