The following items have been shared, arranged with the NEWEST first.
Hint: Use the search function in your browser to look for a name.
Eric has shared this on Neuromorphic AI:
And, finally, two papers about Algorithmic Autonomy:
Dave suggests this YouTube video may be worth a discussion: https://youtu.be/T_2ZoMNzqHQ
Tony has suggested these two papers:
https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself
Peter Denning writes in CACM:
And another article which should particularly interest our statisticians:
An Article in the Guardian by staff of the ACLU (American Civil Liberties Union) about AI surveillance:
Don't be fooled - this affects us in the UK, as we are under the same surveillance.
AI is destroying the University and Learning itself
https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself
All about intelligence being a ubiquitous resource due to AI…
Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism
A Call to Action by Rutger Bregman in the Guardian:
Certificates in AI: Learn but Verify.
https://cacm.acm.org/research/certificates-in-ai-learn-but-verify/ is an important article about ensuring AI solutions meet a requirements specification. An introductory video on this page is a useful summary.
From Model Training to Model Raising
https://cacm.acm.org/opinion/from-model-training-to-model-raising proposes a way to reform AI model-training paradigms from post hoc alignment to intrinsic, identity-based development.
Against Imaginary Friends: Why Digital Companions Are No Solution to Social Isolation
https://cacm.acm.org/research/against-imaginary-friends-why-digital-companions-are-no-solution-to-social-isolation Again, the introductory video on this page is strongly recommended.
Tony recommends this daily newsletter on current activities in AI: https://www.rundown.ai/
The Guardian's summary of the latest Artificial Intelligence Safety Report:
An article in CACM deals with the effects of AI on Cybercrime:
And another from CACM about the value of Answers and Questions:
And Private Eye comments on the Government's AI Skills Hub
Two items from the New Scientist for the AI General Group:
Here is a link kindly provided by David Burnham:
AI has supercharged scientists—but may have shrunk science https://www.science.org/content/article/ai-has-supercharged-scientists-may-have-shrunk-science?utm_source=sfmc&utm_medium=email&utm_content=alert&utm_campaign=SCIeToc&et_rid=325103282&et_cid=5855546
A recent article from CACM by Peter Denning
"AI companies will fail" by Cory Doctorow, the author of dozens of books, most recently Enshittification. Published in The Guardian.
Warnings about AI hype in the Guardian
Some papers from Communications of the ACM. These are both quite long reads, but not particularly technical.
Information Power! by Chris Bronk, an associate professor of public affairs in the Hobby School of Public Affairs, University of Houston, Texas.
Cyberpsychology's Influence on Modern Computing by Julie R. Ancis, a distinguished professor at the New Jersey Institute of Technology, Newark, New Jersey.
A Long Read from the Guardian. This article is long, but highly accessible:
A video of Di Cooke talking about a paper about the Human detection of AI: https://cacm.acm.org/videos/coin-toss and the paper:
Three interesting papers from Communications of the ACM.
A blog post on the effects of generative content on the output of generative models:
A news item about gaps in Large Language Models:
Language Models can constitute a 'System 0' of thinking:
And something challenging from the Guardian today (28/10/2025):
Here is a link to the 2025 Cockcroft-Rutherford lecture by Simon Johnson, one of the 2024 Nobel Prize winners in economics. The title is ‘Technology and Global Inequality in the Age of AI’; he discusses how previous innovation has impacted people and lays out some possible futures for us with the adoption of AI. (Please note: the first four minutes or so of this recording are muted and contain nothing of value; fast-forward until you see people on the stage.)
Something to look at in the context of the Agenda for next Tuesday's General meeting (2nd September 2025): Velislava Hillman on how Big Tech has transformed the classroom.
A thought-provoking article about Regulation and Control of AI, by a top security consultant, Bruce Schneier.
An article from CACM by Michael A. Cusumano (cusumano@mit.edu), the SMR Distinguished Professor and former Deputy Dean at the Massachusetts Institute of Technology Sloan School of Management, Cambridge, MA, USA, and coauthor of The Business of Platforms (2019):
DeepSeek Inside: Origins, Technology, and Impact.
An article from the Guardian about the inevitability (or not) of human-level AI.
A preliminary analysis of DeepSeek by Peter Whitham.
An article in the Guardian of 15th June 2025:
This is an article from New Scientist which, as Sam says in his intro, is very, very worrying…
An article from Saturday's (31st May 2025) Guardian about creatives being put out of work.
Last meeting the Sub-group had an extensive discussion of the issue of copyright material in corpora. This is an article from the iPaper by Chris Stokel-Walker about this issue.
Here is another paper, this time by two business-school professors, looking at whether present AI (Large Language Models) can support decision processes. This is important, as these results will be taught in business schools to people who have no technical understanding of what AI is doing, and who need to know what it can't do.
A short article, which critiques the current approach to AI through LLMs, and suggests a target benchmark for a real AI:
For the next meeting: Chapter 7, on Neural Networks, from Speech and Language Processing by Daniel Jurafsky & James H. Martin, which is shared below:
A video has been circulated: "But what is a neural network?": https://www.youtube.com/watch?v=aircAruvnKk
