The AI factor: Have we figured it out yet? – Vicky Saumell (IATEFL Brighton 2024 plenary)

I watched the recording of Vicky’s plenary from 16th April 2024. You can watch it yourself here if you want to (it’s just over 45 minutes):

My own feelings about AI are mostly apathy at the moment. I know that there’s a lot of fuss about it, and I know that it will be one of the tools that affects my work now and in the future, but I’m waiting a little for the fuss to die down before I start exploring it much myself. If I’d attended the full conference in Brighton, I would probably have avoided AI talks, but I would definitely have attended Vicky’s talk, so I was glad I could still watch it from home 🙂

These were some of the most interesting points for me from Vicky’s talk:

  • There’s an AI for That is a repository of AI tools to help you find what you need for specific tasks. It has 12,261 AI tools on it at the moment – we can’t possibly keep up with that!
  • The carbon footprint of generative AI is huge, both in terms of training it and in terms of using it. This is the report Vicky used as one source for her talk. Image generation is particularly large. The environmental impact is energy usage, water usage, steam produced in data centres, and more. Vicky quotes one statistic: ‘ChatGPT gulps up 500 milliliters of water (a 16-ounce water bottle) for a series of between 5 to 50 prompts or questions’ (from this article on AP News).
  • Where do you stand? Vicky says there are four groups of responses to AI based on how it affects us: Deniers/Resisters, Indifferents (I’m here), Cautious Optimists, Enthusiasts/Preachers. This definitely reflects what I’ve noticed about people’s responses to AI.
  • The ‘AI job impact index’ displays an impact score as a percentage: 0% = AI has no impact on the job, 100% = this job could theoretically be fully automated using AI based on current capabilities. The impact score for teachers is 20%.
  • AI is data-driven, but unfortunately most of this data is WEIRD: Western, Educated, Industrialised, Rich and Democratic (a Stodd, Schatz and Stead 2023 coining). WEIRD curators and creators of AI are over-represented, leading to discrimination, sexist views and more in (some?) sets of AI data. Visual and historical models that feed into AI are outdated and don’t reflect the modern world.
  • Generative AI is standardising languages and ideologies, according to a British Council report. Most data is in English, and most models are trained on this.
  • Copyright is another issue. Who owns the copyright of AI-generated content? It’s not yet clear. Large Language Models (LLMs) have been trained using copyrighted data – it’s being debated in many courts.
  • AI detectors return a high frequency of false positives and have been shown to discriminate against non-native speakers of the language being examined.
  • Unequal access to AI is likely to further the digital divide.
