The AI (Tech) sub-group is a subset of the main AI group which explores the technical aspects of AI (what some people call 'the Maths').
The intention is to save the main group from overly technical and complex discussions, whilst allowing the sub-group to get well into such things.
All the members of the sub-group are also members of the main group, and report interesting discussions back to the main group.
A summary page of AI Sharings is available.
AI (Tech) Subgroup meetings
-
Dave suggests this YouTube video may be worth a discussion: https://youtu.be/T_2ZoMNzqHQ
Tony has suggested these two papers: https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself
Peter Denning writes in CACM:
And another article which should particularly interest our statisticians:
-
Tony will continue his presentation from last time.
Peter has an item on non-neural-network AI.
Tony has sent this to be shared: CETaS Report: AI Cooperation Trajectories: Adversaries and Geostrategic Competitors
-
There were eight members present. Tony began his presentation on AI techniques.
An article in the Guardian by staff of the ACLU (American Civil Liberties Union) about AI surveillance. Don't be fooled - this affects us in the UK, as we are under the same surveillance.
Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism…
-
There were eight members at the meeting, and there was a wide-ranging discussion. Peter Whitham briefly introduced several papers.
Certificates in AI: Learn but Verify (https://cacm.acm.org/research/certificates-in-ai-learn-but-verify/) is an important article about ensuring AI solutions meet a requirements specification. An introductory video on that page is a useful summary.
From Model Training to Model Raising (https://cacm.acm.org/opinion/from-model-training-to-model-raising) proposes…
-
At the last meeting we used the attached YouTube video, titled '15 New AI Inventions that Will Break Entire Industries', as the basis for discussion: we watched each piece and then discussed the topic. We have dealt with items 15 down to 9, so there are 8 to go if we choose to do…
-
Sam Williamson led the meeting. Sam thought it went quite well; we talked for over two hours, which I take as a good sign. There were five people present. Here are the notes used:
Basic description of how AI works
1. The Core Concept: Pattern Recognition
Unlike traditional software that follows rigid "if-then" rules,…
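To make the contrast in those notes concrete, here is a minimal sketch (not from the meeting itself; the spam example and all function names are illustrative): a hand-written "if-then" rule next to a tiny classifier that instead infers what spam "looks like" from labelled examples, using a simple nearest-centroid comparison of word counts.

```python
# Rule-based: an explicit "if-then" test, written by a programmer.
def is_spam_rules(text):
    return "win a prize" in text.lower()

# Pattern-based: learn average word profiles ("centroids") from examples,
# then classify new text by which profile it resembles more.
def word_counts(text):
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def centroid(texts):
    # Average word counts across a set of example texts.
    total = {}
    for text in texts:
        for word, n in word_counts(text).items():
            total[word] = total.get(word, 0) + n
    return {word: n / len(texts) for word, n in total.items()}

def similarity(counts, cent):
    # Dot product: how strongly the text overlaps with the profile.
    return sum(counts.get(w, 0) * v for w, v in cent.items())

def is_spam_learned(text, spam_examples, ham_examples):
    counts = word_counts(text)
    return (similarity(counts, centroid(spam_examples)) >
            similarity(counts, centroid(ham_examples)))

spam = ["win a prize now", "claim your free prize"]
ham = ["meeting notes attached", "agenda for the next meeting"]
print(is_spam_learned("free prize inside", spam, ham))        # True
print(is_spam_learned("notes from the meeting", spam, ham))   # False
```

The point of the sketch is that nothing in `is_spam_learned` mentions "prize": the behaviour comes entirely from the examples, which is the sense in which pattern recognition differs from fixed rules.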
-
"AI companies will fail" from Cory Doctrow, the author of dozens of books, most recently Enshittification. Published in The Guardian. Warnings about AI hype in the Guardian
-
Some papers from Communications of the ACM. These are both quite long reads, but not particularly technical.
Information Power! by Chris Bronk, who is an associate professor of public affairs in the Hobby School of Public Affairs, University of Houston, Texas.
Cyberpsychology's Influence on Modern Computing by Julie R. Ancis, who is a…
-
A long read from the Guardian. This article is long, but highly accessible.
A video of Di Cooke talking about a paper on the human detection of AI: https://cacm.acm.org/videos/coin-toss and the paper:
There were seven members at the meeting. We discussed whether AI could become intelligent to the point where it was capable of…
-
A number of interesting papers from Communications of the ACM.
A blog post on the effects of generative content on the output of generative models:
A news item about gaps in Large Language Models:
Language models can constitute a 'System 0' of thinking:
And something challenging from the Guardian today (28/10/2025):
