AI education Archives - Raspberry Pi Foundation
https://www.raspberrypi.org/blog/tag/ai-education/

Experience AI: The excitement of AI in your classroom
https://www.raspberrypi.org/blog/experience-ai-launch-lessons/
Tue, 18 Apr 2023 10:00:00 +0000

We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

The importance of AI and machine learning education

Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

A group of young people investigate computer hardware together.
 Develop a strong understanding of the concepts of AI and machine learning with your learners.

Experience AI Lessons

Something we get asked a lot is: “How do I teach AI and machine learning with my class?” To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that gives you everything you need, including lesson plans, slide decks, worksheets, and videos.

The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

The six lessons

  1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
  2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally, the learners explore classification, a specific application of ML.
  3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
  4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means. 
  5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.
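The rule-based versus data-driven distinction from lesson 1 can be made concrete with a small sketch. This is not the lessons' actual material, just an illustration with invented data: one classifier applies a threshold a programmer hard-coded, while the "data-driven" one derives its threshold from labelled examples.

```python
# Toy contrast between a rule-based and a data-driven classifier.
# All data, names, and thresholds are invented for illustration.

def rule_based_classify(redness):
    # A person hard-codes the decision rule.
    return "tomato" if redness > 0.5 else "apple"

def train_data_driven(examples):
    # Derive the threshold from labelled data instead: use the
    # midpoint between the two class averages.
    tomato = [r for r, label in examples if label == "tomato"]
    apple = [r for r, label in examples if label == "apple"]
    threshold = (sum(tomato) / len(tomato) + sum(apple) / len(apple)) / 2
    def classify(redness):
        return "tomato" if redness > threshold else "apple"
    return classify

examples = [(0.9, "tomato"), (0.8, "tomato"), (0.3, "apple"), (0.2, "apple")]
classify = train_data_driven(examples)
print(rule_based_classify(0.7))  # rule written by a person -> "tomato"
print(classify(0.7))             # rule learned from data -> "tomato"
```

With different training examples, `train_data_driven` would produce a different threshold and therefore a different classifier, which is the sense in which such models are "data-driven".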

As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

An educator points to an image on a student's computer screen.
 Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

Support for teachers

We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

Tell us your feedback

We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

Visit the Experience AI website today to get started.

The post Experience AI: The excitement of AI in your classroom appeared first on Raspberry Pi Foundation.

How anthropomorphism hinders AI education
https://www.raspberrypi.org/blog/ai-education-anthropomorphism/
Thu, 13 Apr 2023 14:59:33 +0000

In the 1950s, Alan Turing explored the central question of artificial intelligence (AI). He thought that the original question, “Can machines think?”, would not provide useful answers because the terms “machine” and “think” are hard to define. Instead, he proposed changing the question to something more provable: “Can a computer imitate intelligent behaviour well enough to convince someone they are talking to a human?” This is commonly referred to as the Turing test.

It’s been hard to miss the newest generation of AI chatbots that companies have released over the last year. News articles and stories about them seem to be everywhere at the moment. So you may have heard of machine learning (ML) chatbots such as ChatGPT and LaMDA. These chatbots are advanced enough to have caused renewed discussions about the Turing test and whether the chatbots are sentient.

Chatbots are not sentient

Without any knowledge of how people create such chatbots, it’s easy to imagine how someone might develop an incorrect mental model of these chatbots as living entities. With some awareness of sci-fi stories, you might even start to imagine what they could look like or associate a gender with them.

A person in front of a cloudy sky, seen through a refractive glass grid. Parts of the image are overlaid with a diagram of a neural network.
Image: Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC BY 4.0

The reality is that these new chatbots are applications based on a large language model (LLM) — a type of machine learning model that has been trained with huge quantities of text, written by people and taken from places such as books and the internet, e.g. social media posts. An LLM predicts the probable order of combinations of words, a bit like the autocomplete function on a smartphone. Based on these probabilities, it can produce text outputs. LLM chatbots run on servers with huge amounts of computing power that people have built in data centres around the world.
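The "autocomplete on a larger scale" idea can be illustrated with a toy sketch. Real LLMs use neural networks trained on vast corpora; this stdlib example, with invented training text, only shows the underlying principle of predicting the next word from observed probabilities.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then always emit the most frequent follower.
# Real LLMs use neural networks and vastly more data; only the
# "predict the next token from probabilities" idea is the same.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Most probable next word given the counts (None if unseen).
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("the"))  # the most frequent word after "the" -> "cat"
print(generate("the"))
```

Even this tiny model produces fluent-looking fragments of its training text without any understanding, which is exactly why its outputs shouldn't be mistaken for thought.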

Our AI education resources for young people

AI applications are often described as “black boxes” or “closed boxes”: they may be relatively easy to use, but it’s not as easy to understand how they work. We believe that it’s fundamentally important to help everyone, especially young people, to understand the potential of AI technologies and to open these closed boxes to understand how they actually work.

As always, we want to demystify digital technology for young people, to empower them to be thoughtful creators of technology and to make informed choices about how they engage with technology — rather than just being passive consumers.

That’s the goal we have in mind as we’re working on lesson resources to help teachers and other educators introduce KS3 students (ages 11 to 14) to AI and ML. We will release these Experience AI lessons very soon.

Why we avoid describing AI as human-like

Our researchers at the Raspberry Pi Computing Education Research Centre have started investigating the topic of AI and ML, including thinking deeply about how AI and ML applications are described to educators and learners.

To support learners to form accurate mental models of AI and ML, we believe it is important to avoid using words that can lead to learners developing misconceptions around machines being human-like in their abilities. That’s why ‘anthropomorphism’ is a term that comes up regularly in our conversations about the Experience AI lessons we are developing.

To anthropomorphise: “to show or treat an animal, god, or object as if it is human in appearance, character, or behaviour”

https://dictionary.cambridge.org/dictionary/english/anthropomorphize

Anthropomorphising AI in teaching materials might lead to learners believing that there is sentience or intention within AI applications. That misconception would distract learners from the fact that it is people who design AI applications and decide how they are used. It also risks reducing learners’ desire to take an active role in understanding AI applications, and in the design of future applications.

Examples of how anthropomorphism is misleading

Avoiding anthropomorphism helps young people to open the closed box of AI applications. Take the example of a smart speaker. It’s easy to describe a smart speaker’s functionality in anthropomorphic terms such as “it listens” or “it understands”. However, we think it’s more accurate and empowering to explain smart speakers as systems developed by people to process sound and carry out specific tasks. Rather than telling young people that a smart speaker “listens” and “understands”, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.
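The input, processing, output framing can be sketched in code. Everything here (the command table, the function names) is invented for illustration; in a real smart speaker, the "transcribe" step would be an ML speech-to-text model rather than a string operation.

```python
# A smart speaker described as input -> processing -> output,
# rather than as a device that "listens" and "understands".
# The command table and responses are invented for illustration.

COMMANDS = {
    "play music": "Starting playback.",
    "what time is it": "The time is 12:00.",
    "lights on": "Turning the lights on.",
}

def transcribe(audio):
    # Stand-in for speech-to-text: in a real device, an ML model
    # converts the audio signal into text data.
    return audio.lower().strip()

def process(text):
    # Pattern-match the text against known commands.
    return COMMANDS.get(text, "Sorry, I can't help with that.")

def respond(audio):
    # Input (sound) -> processing (data) -> output (a response).
    return process(transcribe(audio))

print(respond("Play music"))  # -> "Starting playback."
```

Described this way, there is no persona anywhere in the pipeline: just data flowing through steps that people designed.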

Eight photos of the same tree taken at different times of the year, displayed in a grid. The final photo is highly pixelated. Groups of white blocks run across the grid from left to right, gradually becoming aligned.
Image: David Man & Tristan Ferne / Better Images of AI / Trees / CC BY 4.0

Another example is the use of AI in computer vision. ML models can, for example, be trained to identify when there is a dog or a cat in an image. An accurate ML model, on the surface, displays human-like behaviour. However, the model operates very differently to how a human might identify animals in images. Where humans would point to features such as whiskers and ear shapes, ML models process pixels in images to make predictions based on probabilities.
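To show what "processing pixels" means, here is a deliberately simplified sketch (invented data, and a nearest-centroid rule rather than any production computer vision technique): images are just lists of numbers, and classification is arithmetic on those numbers.

```python
# A model "sees" only numbers: each tiny 2x2 "image" is a flat list
# of pixel brightness values. Classification here is nearest-centroid
# on raw pixels -- no whiskers or ear shapes, just arithmetic.
# The images and labels are invented for illustration.

cat_images = [[0.9, 0.8, 0.9, 0.7], [0.8, 0.9, 0.8, 0.8]]
dog_images = [[0.2, 0.1, 0.2, 0.3], [0.1, 0.2, 0.3, 0.2]]

def centroid(images):
    # Average pixel value at each position across the images.
    return [sum(pixels) / len(images) for pixels in zip(*images)]

def distance(a, b):
    # Euclidean distance between two pixel vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

CENTROIDS = {"cat": centroid(cat_images), "dog": centroid(dog_images)}

def classify(image):
    # Predict the label whose average image is closest in pixel space.
    return min(CENTROIDS, key=lambda label: distance(image, CENTROIDS[label]))

print(classify([0.85, 0.8, 0.9, 0.75]))  # numerically close to the cat images
```

The model outputs "cat" not because it knows what a cat is, but because the numbers are close to the numbers it was trained on.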

Better ways to describe AI

The Experience AI lesson resources we are developing introduce students to AI applications and teach them about the ML models that are used to power them. We have put a lot of work into thinking about the language we use in the lessons and the impact it might have on the emerging mental models of the young people (and their teachers) who will be engaging with our resources.

It’s not easy to avoid anthropomorphism while talking about AI, especially considering the industry standard language in the area: artificial intelligence, machine learning, computer vision, to name but a few examples. At the Foundation, we are still training ourselves not to anthropomorphise AI, and we take a little bit of pleasure in picking each other up on the odd slip-up.

Here are some suggestions to help you describe AI better:

Avoid using: phrases such as “AI learns” or “AI/ML does”
Instead use: phrases such as “AI applications are designed to…” or “AI developers build applications that…”

Avoid using: words that describe the behaviour of people (e.g. see, look, recognise, create, make)
Instead use: system-type words (e.g. detect, input, pattern match, generate, produce)

Avoid using: AI/ML as a countable noun, e.g. “new artificial intelligences emerged in 2022”
Instead use: ‘AI/ML’ as the name of a scientific discipline, similarly to how you use the term “biology”

The purpose of our AI education resources

If we are correct in our approach, then whether or not the young people who engage in Experience AI grow up to become AI developers, we will have helped them to become discerning users of AI technologies and to be more likely to see such products for what they are: data-driven applications and not sentient machines.

If you’d like to get involved with Experience AI and use our lessons with your class, you can start by visiting us at experience-ai.org.

The post How anthropomorphism hinders AI education appeared first on Raspberry Pi Foundation.

AI education resources: What do we teach young people?
https://www.raspberrypi.org/blog/ai-education-resources-what-to-teach-seame-framework/
Tue, 28 Mar 2023 09:29:49 +0000

People have many different reasons to think that children and teenagers need to learn about artificial intelligence (AI) technologies. Whether it’s that AI impacts young people’s lives today, or that understanding these technologies may open up careers in their future — there is broad agreement that school-level education about AI is important.

A young person writes Python code.

But how do you actually design lessons about AI, a technical area that is entirely new to young people? That was the question we needed to answer as we started Experience AI, our exciting collaboration with DeepMind, a leading AI company.

Our approach to developing AI education resources

As part of Experience AI, we are creating a free set of lesson resources to help teachers introduce AI and machine learning (ML) to KS3 students (ages 11 to 14). In England, this area is not currently part of the national curriculum, but it’s starting to appear in all sorts of learning materials for young people.

Two learners and a teacher in a physical computing lesson.

While developing the six Experience AI lessons, we took a research-informed approach. We built on insights from the series of research seminars on AI and data science education we had hosted in 2021 and 2022, and on research we ourselves have been conducting at the Raspberry Pi Computing Education Research Centre.

As part of this research, we reviewed over 500 existing resources that are used to teach AI and ML. We found that the vast majority of them were one-off activities, and many claimed to be appropriate for learners of any age. There were very few sets of lessons, or units of work, that were tailored to a specific age group. Activities often had vague learning objectives, or none at all. We rarely found associated assessment activities. These were all shortcomings we wanted to avoid in our set of lessons.

To analyse the content of AI education resources, we use a simple framework called SEAME. This framework is based on work I did in 2018 with Professor Paul Curzon at Queen Mary University of London, running professional development for educators on teaching machine learning.


The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works). We hope that it will be a useful tool for anyone who is interested in looking at resources to teach AI. 

What do AI education resources focus on?

The four levels of the SEAME framework do not indicate a hierarchy or sequence. Instead, they offer a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities.

Social and ethical aspects (SE)

The SE level covers activities that relate to the impact of AI on everyday life, and to its implications for society. Learning objectives and their related resources categorised at this level introduce students to issues such as privacy or bias concerns, the impact of AI on employment, misinformation, and the potential benefits of AI applications.

A slide from a lesson about AI that describes an AI application related to timetables.
An example activity in the Experience AI lessons where learners think about the social and ethical issues of an AI application that predicts what subjects they might want to study. This activity is mostly focused on the social and ethical level of the SEAME framework, but also links to the applications and models levels.

Applications (A)

The A level refers to activities related to applications and systems that use AI or ML models. At this level, learners do not learn how to train models themselves, or how such models work. Learning objectives at this level include knowing a range of AI applications and starting to understand the difference between rule-based and data-driven approaches to developing applications.

Models (M)

The M level concerns the models underlying AI and ML applications. Learning objectives at this level include learners understanding the processes used to train and test models. For example, through resources focused on the M level, students could learn about the different learning paradigms of ML (i.e., supervised, unsupervised, or reinforcement learning).

A slide from a lesson about AI that describes an ML model to classify animals.
An example activity in the Experience AI lessons where students learn about classification. This activity is mostly focused on the models level of the SEAME framework, but also links to the social and ethical and the applications levels.

Engines (E)

The E level is related to the engines that make AI models work. This is the most hidden and complex level, and for school-aged learners may need to be taught using unplugged activities and visualisations. Learning objectives could include understanding the basic workings of systems such as data-driven decision trees and artificial neural networks.
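To give a feel for what "engines" means, here is a minimal sketch of the core of a data-driven decision stump (a one-question decision tree). The animals and weights are invented for illustration; real decision tree learners use more sophisticated split criteria.

```python
# Minimal "engine" of a decision stump: pick the threshold on a
# single feature that best separates the training labels.
# The data (animal weights in kg) is invented for illustration.

def train_stump(examples):
    # examples: list of (feature_value, label) pairs.
    best = None
    values = sorted({v for v, _ in examples})
    for lo, hi in zip(values, values[1:]):
        threshold = (lo + hi) / 2
        left = [lab for v, lab in examples if v <= threshold]
        right = [lab for v, lab in examples if v > threshold]
        # Majority label on each side; count correct predictions.
        left_lab = max(set(left), key=left.count)
        right_lab = max(set(right), key=right.count)
        correct = left.count(left_lab) + right.count(right_lab)
        if best is None or correct > best[0]:
            best = (correct, threshold, left_lab, right_lab)
    _, threshold, left_lab, right_lab = best
    return lambda v: left_lab if v <= threshold else right_lab

data = [(0.1, "hamster"), (0.3, "hamster"), (8.0, "dog"), (20.0, "dog")]
predict = train_stump(data)
print(predict(0.2), predict(12.0))  # -> hamster dog
```

Because the threshold comes entirely from the training data, changing the examples changes the model, which is the "data-driven" behaviour learners experience first-hand in the lessons.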

Covering the four levels

Some learning activities may focus on a single level, but activities can also span more than one level. For example, an activity may start with learners trying out an existing ‘rock-paper-scissors’ application that uses an ML model to recognise hand shapes. This would cover the applications level. If learners then move on to train the model to improve its accuracy by adding more image data, they work at the models level.

A teacher helps a young person with a coding project.

Other activities cover several SEAME levels to address a specific concept. For example, an activity focused on bias might start with an example of the societal impact of bias (SE level). Learners could then discuss the AI applications they use and reflect on how bias impacts them personally (A level). The activity could finish with learners exploring related data in a simple ML model and thinking about how representative the data is of all potential application users (M level).

The set of Experience AI lessons we are developing in collaboration with DeepMind covers all four levels of SEAME. The lessons are based on carefully designed learning objectives and specifically targeted to KS3 students. Lesson materials include presentations, videos, student activities, and assessment questions.

The SEAME framework as a tool for research on AI education

For researchers, we think the SEAME framework will, for example, be useful to analyse school curriculum material to see whether some age groups have more learning activities available at one level than another, and whether this changes over time. We may find that primary school learners work mostly at the SE and A levels, and secondary school learners move between the levels with increasing clarity as they develop their knowledge. It may also be the case that some learners or teachers prefer activities focused on one level rather than another. However, we can’t be sure: research is needed to investigate the teaching and learning of AI and ML across all year groups.

That’s why we’re excited to welcome Salomey Afua Addo to the Raspberry Pi Computing Education Research Centre. Salomey joined the Centre as a PhD student in January, and her research will focus on approaches to the teaching and learning of AI. We’re looking forward to seeing the results of her work.

If you’d like to get involved with Experience AI as an educator and use our lessons with your class, you can start by visiting us at experience-ai.org.

The post AI education resources: What do we teach young people? appeared first on Raspberry Pi Foundation.

Building a maths curriculum for a world shaped by computing
https://www.raspberrypi.org/blog/maths-curriculum-conrad-wolfram-computing-ai-research-seminar/
Tue, 25 Oct 2022 08:17:08 +0000

In the penultimate seminar in our series on cross-disciplinary computing, we were delighted to host Conrad Wolfram (European co-founder/CEO of Wolfram Research).

Conrad Wolfram.
Conrad Wolfram

Conrad has been an influential figure in the areas of AI, data science, and computation for over 30 years. The company he co-founded, Wolfram Research, develops computational technologies including the Wolfram programming language, which is used by the Mathematica and WolframAlpha programs. In the seminar, Conrad spoke about his work on developing a mathematics curriculum “for the AI age”.

In a computing classroom, a girl laughs at what she sees on the screen.

Computation is everywhere

Conrad began his talk by discussing the ubiquity of computation. He explained how computation (i.e. an operation that follows conditions to give a defined output) has transformed our everyday lives and led to the development of entirely new sub-disciplines, such as computational medicine, computational marketing, and even computational agriculture. He then used the WolframAlpha tool to give several practical examples of applying high-level computation to problem-solving in different areas.

A line graph comparing the population of the UK with the number of sheep in New Zealand.
Yes, there are more people in the UK than sheep in New Zealand.

The power of computation for mathematics

Conrad then turned his attention to the main question of his talk: if computation has also changed real-world mathematics, how should school-based mathematics teaching respond? He suggested that, as computation has impacted all aspects of our daily lives, school subjects should be reformed to better prepare students for the careers of the future.

A diagram indicating that hand calculating takes up a lot of time in current maths classes.
Hand calculation methods are time-consuming.

His biggest criticism was the use of hand calculation methods in mathematics teaching. He proposed that a mathematics curriculum that “assumes computers exist” and uses computers (rather than humans) to compute answers would better support students to develop a deep understanding of mathematical concepts and principles. In other words, if students spent less time doing hand-calculation methods, they could devote more time to more complex problems.

What does computational problem-solving look like?

One interesting aspect of Conrad’s talk was how he modelled the process of solving problems using computation. In all of the example problems, he outlined that computational problem-solving follows the same four-step process:

  1. Define the question: Students think about the scope and details of the problem and define answerable questions to tackle.
  2. Abstract to computable form: Using the information provided, students translate the question into a precise abstract form, such as a diagram or algorithm, so that it can be solved by a computer-based agent.
  3. Computer answers: Using the power of computation, students solve the abstract question and resolve any issues during the computation process.
  4. Interpret results: Students reinterpret and recontextualise the abstract answer to derive useful results. If problems emerge, students refine or fix their work.
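The four steps above can be sketched with a small worked example. The question and numbers are invented (not from Conrad's talk): how long would a 150 km cycle take at an assumed average speed?

```python
# The four-step computational problem-solving cycle on a made-up
# question: "How long would it take to cycle 150 km?"

# 1. Define the question: fix the distance and an assumed speed.
distance_km = 150
speed_kmh = 25  # an assumed average speed, an input we might refine

# 2. Abstract to computable form: time = distance / speed.
def journey_time_hours(distance, speed):
    return distance / speed

# 3. Computer answers: let the machine do the calculation.
hours = journey_time_hours(distance_km, speed_kmh)

# 4. Interpret results: convert to hours and minutes, sanity-check,
#    and refine the inputs (e.g. the assumed speed) if needed.
h, m = int(hours), round((hours - int(hours)) * 60)
print(f"About {h} h {m} min at {speed_kmh} km/h")  # -> About 6 h 0 min at 25 km/h
```

If the interpreted answer looked unrealistic, a learner would loop back to step 1 with a refined question or a better speed estimate, which is what makes the process a cycle rather than a checklist.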

Depending on the problem, the process can be repeated multiple times until the desired solution is reached. Rather than proposing a static list of outcomes, Conrad presented the process as an iterative cycle that resembles an “ascending helix”:

A helix representing the iterative cycle of computational problem-solving.
The problem-solving ‘helix’ model.

A curriculum for a world with AI

In the later stages of his talk, Conrad talked about the development of a new computational curriculum to better define what a modern mathematics curriculum might look like. The platform that hosts the curriculum, named Computer-Based Math (or CBM), outlines the need to integrate computational thinking into mathematics in schools. For instance, one of the modules, How Fast Could I Cycle Stage 7 Of The An Post Rás?, asks students to develop a computational solution to a real-world problem. Following the four-step problem-solving process, students apply mathematical models, computational tools, and real-world data to generate a valid solution:

A module from Wolfram Research’s Computer-Based Maths curriculum.
Sample module from Computer-Based Math.

Some future challenges he remarked on included how a computer-based mathematics curriculum could be integrated with existing curricula or qualifications, at what ages computational mathematics should be taught, and what assessment, training, and hardware would be needed to support teachers to deliver such a curriculum. 

Conrad concluded the talk by arguing that the current need for computational literacy is similar to the need for mass literacy and pondering whether the UK could lead the push towards a new computational curriculum suitable for learners who grow up with AI technologies. This point provided food for thought during our discussion section, especially for teachers interested in embedding computation into their lessons, and for researchers thinking about the impact of AI in different fields. We’re grateful to Conrad for speaking about his work and mission — long may it continue!

You can catch up on Conrad’s talk with his slides and the talk’s recording:

More to explore

Conrad’s book, The Math(s) Fix: An Education Blueprint for the AI Age, gives more details on how he thinks data science, AI, and computation could be embedded into the modern maths curriculum.

You can also explore Wolfram Research’s Computer-Based Maths curriculum, which offers learning materials to help teachers embed computation in their maths lessons. 

Finally, try out Wolfram’s tools to solve everyday problems using computation. For example, you might ask WolframAlpha data-rich questions, which the tool converts from text input into a computable problem using natural language processing. (Two of my favourite example questions are: “How old was Leonardo when the Mona Lisa was painted?” and “What was the weather like when I was born?”)

Join our next seminar

In the final seminar of our series on cross-curricular computing, we welcome Dr Tracy Gardner and Rebecca Franks (Raspberry Pi Foundation) to present their ongoing work on computing education in non-formal settings. Sign up now to join us for this session on Tues 8 November:

We will shortly be announcing the theme of a brand-new series of research seminars starting in January 2023. The seminars will take place online on the first Tuesday of the month at 17:00–18:30 UK time.

The post Building a maths curriculum for a world shaped by computing appeared first on Raspberry Pi Foundation.

Experience AI with the Raspberry Pi Foundation and DeepMind
https://www.raspberrypi.org/blog/experience-ai-deepmind-ai-education/
Mon, 26 Sep 2022 15:00:13 +0000

The post Experience AI with the Raspberry Pi Foundation and DeepMind appeared first on Raspberry Pi Foundation.

I am delighted to announce a new collaboration between the Raspberry Pi Foundation and a leading AI company, DeepMind, to inspire the next generation of AI leaders.

Young people work together to investigate computer hardware.

The Raspberry Pi Foundation’s mission is to enable young people to realise their full potential through the power of computing and digital technologies. Our vision is that every young person — whatever their background — should have the opportunity to learn how to create and solve problems with computers.

With the rapid advances in artificial intelligence — from machine learning and robotics, to computer vision and natural language processing — it’s increasingly important that young people understand how AI is affecting their lives now and the role that it can play in their future. 

DeepMind logo.

Experience AI is a new collaboration between the Raspberry Pi Foundation and DeepMind that aims to help young people understand how AI works and how it is changing the world. We want to inspire young people about the careers in AI and help them understand how to access those opportunities, including through their subject choices. 

Experience AI 

More than anything, we want to make AI relevant and accessible to young people from all backgrounds, and to make sure that we engage young people from backgrounds that are underrepresented in AI careers. 

The collaboration has two strands: Inspire and Experiment. 

Inspire: To engage and inspire students about AI and its impact on the world, we are developing a set of free learning resources and materials including lesson plans, assembly packs, videos, and webinars, alongside training and support for educators. This will include an introduction to the technologies that enable AI; how AI models are trained; how to frame problems for AI to solve; the societal and ethical implications of AI; and career opportunities. All of this will be designed around real-world and relatable applications of AI, engaging a wide range of diverse interests and useful to teachers from different subjects.

In a computing classroom, two girls concentrate on their programming task.

Experiment: Building on the excitement generated through Inspire, we are also designing an AI challenge that will support young people to experiment with AI technologies and explore how these can be used to solve real-world problems. This will provide an opportunity for students to get hands-on with technology and data, along with support for educators. 

Our initial focus is learners aged 11 to 14 in the UK. We are working with teachers, students, and DeepMind engineers to ensure that the materials and learning experiences are engaging and accessible to all, and that they reflect the latest AI technologies and their application.

A woman teacher helps a young person with a coding project.

As with all of our work, we want to be research-led and the Raspberry Pi Foundation research team has been working over the past year to understand the latest research on what works in AI education.

Next steps 

Development of the learning materials is underway now, and we have released the full set of resources on experience-ai.org. We are currently piloting the challenge for release in September 2023.

AI literacy research: Children and families working together around smart devices
https://www.raspberrypi.org/blog/ai-literacy-children-families-working-together-ai-education-research/
Thu, 21 Apr 2022

The post AI literacy research: Children and families working together around smart devices appeared first on Raspberry Pi Foundation.

Between September 2021 and March 2022, we partnered with The Alan Turing Institute to host a series of free research seminars about how to teach young people about AI and data science.

In the final seminar of the series, we were excited to hear from Stefania Druga from the University of Washington, who presented on the topic of AI literacy for families. Stefania’s talk highlighted the importance of families in supporting children to develop AI literacy. Her talk was a perfect conclusion to the series and very well-received by our audience.

Stefania Druga.
Stefania Druga, University of Washington

Stefania is a third-year PhD student who has been working on AI literacy in families, and since 2017 she has conducted a series of studies that she presented in her seminar talk. She presented some new work to us that was to be formally shared at the HCI conference in April, and we were very pleased to have a sneak preview of these results. It was a fascinating talk about the ways in which the interactions between parents and children using AI-based devices in the home, and the discussions they have while learning together, can facilitate an appreciation of the affordances of AI systems, and critical thinking about their limitations and fallibilities. You’ll find my summary as well as the seminar recording below.

“AI literacy practices and skills led some families to consider making meaningful use of AI devices they already have in their homes and redesign their interactions with them. These findings suggest that family has the potential to act as a third space for AI learning.”

– Stefania Druga

AI literacy: Growing up with AI systems, growing used to them

Back in 2017, public interest in Alexa and other so-called ‘smart’, AI-based devices was only just developing, and such devices would have been very novel to most people. That year, Stefania and colleagues conducted a first pilot study of children’s and parents’ interactions with ‘smart’ devices, including robots, talking dolls, and the sort of voice assistants we are used to now.

A slide from Stefania Druga's AI literacy seminar. Content is described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

Working directly with families, the researchers explored the level of understanding that children had about ‘smart’ devices, and were surprised by the level of insight very young children had into the potential of this type of technology.

In this AI literacy pilot study, Stefania and her colleagues found that:

  • Children perceived AI-based agents (i.e. ‘smart’ devices) as friendly and truthful
  • They treated different devices (e.g. two different Alexas) as completely independent
  • Whether children described a device as ‘smart’ depended on their age, with older children more likely to describe devices as ‘smart’

AI literacy: Influence of parents’ perceptions, influence of talking dolls

Stefania’s next study, undertaken in 2018, showed that parents’ perceptions of the implications and potential of ‘smart’ devices shaped what their children thought. Even when parents and children were interviewed separately, if the parent thought that, for example, robots were smarter than humans, then the child did too.

A slide from Stefania Druga's AI literacy seminar.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

Another part of this study showed that talking dolls could influence children’s moral decisions (e.g. “Should I give a child a pillow?”). In some cases, these ‘smart’ toys influenced the child more than another human did. Certain ‘smart’ dolls have been banned in some European countries because of security concerns. In the light of these concerns, Stefania pointed out how important it is to help children develop a critical understanding of the potential of AI-based technology, its fallibility, and the limits of its guidance.

A slide from Stefania Druga's AI literacy seminar.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

AI literacy: Programming ‘smart’ devices, algorithmic bias

Another study Stefania discussed involved children who programmed ‘smart’ devices. She used the children’s drawings to find out about their mental models of how the technology worked.

She found that when children had the opportunity to train machine learning models or ‘smart’ devices, they became more sceptical about the appropriate use of these technologies and asked better questions about when and for what they should be used. Another finding was that children and adults had different ideas about algorithmic bias, particularly relating to the meaning of fairness.

A parent and child work together at a Raspberry Pi computer.

AI literacy: Kinaesthetic activities, sharing discussions

The final study Stefania talked about was conducted with families online during the pandemic, when children were learning at home. Fifteen families, comprising 18 children (aged 5 to 11) and 16 parents, participated in five weekly sessions. Each session was made up of a number of learning activities demonstrating features of AI. These are all available at aiplayground.me.

A slide from Stefania Druga's AI literacy seminar, describing two research questions about how children and parents learn about AI together, and about how to design learning supports for family AI literacies.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

The fact that children and parents, or other family members, worked through the activities together seemed to generate fruitful discussions about the usefulness of AI-based technology. Many families were concerned about privacy and what was happening to their personal data when they were using ‘smart’ devices, and also expressed frustration with voice assistants that couldn’t always understand the way they spoke.

A slide from Stefania Druga's AI literacy seminar. Content described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

In one of the sessions, with a focus on machine learning, families were introduced to a kinaesthetic activity involving moving around their home to train a model. Through this activity, parents and children had more insight into the constraints facing machine learning. They used props in the home to experiment and find out ways of training the model better. In another session, families were encouraged to design their own devices on paper, and Stefania showed some examples of designs children had drawn.

A slide from Stefania Druga's AI literacy seminar. Content described in the blog text.
A slide from Stefania’s AI literacy seminar. Click to enlarge.

This study identified a number of different roles that parents or other adults played in supporting children’s learning about AI, and found that embodied and tangible activities worked well for encouraging joint work between children and their families.

Find out more

You can catch up with Stefania’s seminar below in the video, and download her presentation slides.

You can learn more about Stefania’s work in her paper on children’s training of ML models, and in her latest paper about the five weekly AI literacy sessions with families.

Recordings and slides of all our previous seminars on AI education are available online for you, and you can see the list of AI education resources we’ve put together based on recommendations from seminar speakers and participants.

Join our next free research seminar

We are delighted to start a new seminar series on cross-disciplinary computing, with seminars in May, June, July, and September to look forward to. It’s not long now before we begin: Mark Guzdial will speak to us about task-specific programming languages (TSP) in history and mathematics classes on 3 May, 17:00–18:30 UK time. I can’t wait!

Sign up to receive the Zoom details for the seminar with Mark:

The AI4K12 project: Big ideas for AI education
https://www.raspberrypi.org/blog/ai-education-ai4k12-big-ideas-ai-thinking/
Thu, 20 Jan 2022

The post The AI4K12 project: Big ideas for AI education appeared first on Raspberry Pi Foundation.

What is AI thinking? What concepts should we introduce to young people related to AI, including machine learning (ML), and data science? Should we teach with a glass-box or an opaque-box approach? These are the questions we’ve been grappling with since we started our online research seminar series on AI education at the Raspberry Pi Foundation, co-hosted with The Alan Turing Institute.

Over the past few months, we’d already heard from researchers from the UK, Germany, and Finland. This month we virtually travelled to the USA, to hear from Prof. Dave Touretzky (Carnegie Mellon University) and Prof. Fred G. Martin (University of Massachusetts Lowell), who have pioneered the influential AI4K12 project together with their colleagues Deborah Seehorn and Christina Gardner-McLure.

The AI4K12 project

The AI4K12 project focuses on teaching AI in K-12 schools in the US. The AI4K12 team have aligned their vision for AI education with the CSTA standards for computer science education. These standards, published in 2017, describe what should be taught in US schools across the discipline of computer science, but they say very little about AI. This was the stimulus for starting the AI4K12 initiative in 2018. Several members of the AI4K12 working group are classroom practitioners, who have made a huge contribution to taking the project from ideas into the classroom.

Dave Touretzky presents the five big ideas of the AI4K12 project at our online research seminar.
Dave gave us an overview of the AI4K12 project (click to enlarge)

The project has a number of goals. One is to develop a curated resource directory for K-12 teachers; another is to create a community of K-12 resource developers. On the AI4K12.org website, you can find links to many resources and sign up for the project’s mailing list. I’ve been subscribed to this list for a while now, and fascinating discussions and resources have been shared.

Five Big Ideas of AI4K12

If you’ve heard of AI4K12 before, it’s probably because of the Five Big Ideas the team has set out to encompass the AI field from the perspective of school-aged children. These ideas are: 

  1. Perception — the idea that computers perceive the world through sensing
  2. Representation and reasoning — the idea that agents maintain representations of the world and use them for reasoning
  3. Learning — the idea that computers can learn from data
  4. Natural interaction — the idea that intelligent agents require many types of knowledge to interact naturally with humans
  5. Societal impact — the idea that artificial intelligence can impact society in both positive and negative ways
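The third idea, that computers can learn from data, is the one most often demonstrated in classrooms. As a purely illustrative sketch (our own example, not part of the AI4K12 materials), even a few lines of code can show a computer ‘learning’ labels from examples, here using a one-nearest-neighbour rule:

```python
# Illustrative only: a tiny 1-nearest-neighbour classifier showing
# that a computer can "learn" labels from example data.
def nearest_neighbour(examples, query):
    """Return the label of the training example closest to `query`.

    `examples` is a list of ((feature1, feature2), label) pairs.
    """
    def distance(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    closest = min(examples, key=lambda ex: distance(ex[0], query))
    return closest[1]

# Hypothetical training data: (weight in grams, colour score 0=green..1=red)
fruit = [
    ((150, 0.9), "apple"),
    ((170, 0.8), "apple"),
    ((120, 0.1), "lime"),
    ((110, 0.2), "lime"),
]

print(nearest_neighbour(fruit, (160, 0.85)))  # prints: apple
```

Changing the example data changes the predictions, which is a simple way to show learners that the behaviour comes from the data rather than from explicit rules.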

Sometimes we hear concerns that resources being developed to teach AI concepts to young people are narrowly focused on machine learning, particularly supervised learning for classification. It’s clear from the AI4K12 Five Big Ideas that the team’s definition of the AI field encompasses much more than one area of ML. Despite being developed for a US audience, I believe the description laid out in these five ideas is immensely useful to all educators, researchers, and policymakers around the world who are interested in AI education.

Fred Martin presents one of the five big ideas of the AI4K12 project at our online research seminar.
Fred explained how ‘representation and reasoning’ is a big idea in the AI field (click to enlarge)

During the seminar, Dave and Fred shared some great practical examples. Fred explained how the big ideas translate into learning outcomes at each of the four age groups (ages 5–8, 9–11, 12–14, 15–18). You can find out more about their examples in their presentation slides or the seminar recording (see below). 

I was struck by how much the AI4K12 team has thought about progression — what you learn when, and in which sequence — which we really need to understand well before we can start to teach AI in any formal way. Consider, for example, how we might teach visual perception. Very young children might start with a tool such as Teachable Machine, to understand that they can teach a computer to recognise what they want it to see. They might then move on to building an application using Scratch plugins or Calypso, and later to learning the different levels of visual structure and understanding the abstraction pipeline — the hierarchy of increasingly abstract representations. Talking about visual perception, Fred used the example of self-driving cars and how they represent images.

A diagram of the levels of visual structure.
Fred used this slide to describe how young people might learn abstracted elements of visual structure

AI education with an age-appropriate, glass-box approach

Dave and Fred support teaching AI to children using a glass-box approach. By ‘glass-box approach’ we mean giving students information about how AI systems work, and showing the inner workings, so to speak. The opposite would be an ‘opaque-box approach’, by which we mean showing students only an AI system’s inputs and outputs, to demonstrate what AI is capable of without trying to teach any technical detail.
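To make the distinction concrete, here is a small illustrative sketch (our own, not taken from the AI4K12 materials): an opaque-box lesson would only call `predict` and observe outputs, while a glass-box lesson would also look inside at the weights the training loop produces:

```python
# Illustrative sketch: a single perceptron whose inner workings
# (its learned weights) can be shown to learners, glass-box style.
def train_perceptron(data, epochs=60, lr=0.1):
    """Learn weights [bias, w1, w2] for inputs labelled 0 or 1."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), label in data:
            activation = weights[0] + weights[1] * x1 + weights[2] * x2
            prediction = 1 if activation > 0 else 0
            error = label - prediction
            # Each mistake nudges the weights towards the right answer.
            weights[0] += lr * error
            weights[1] += lr * error * x1
            weights[2] += lr * error * x2
    return weights

def predict(weights, x1, x2):
    return 1 if weights[0] + weights[1] * x1 + weights[2] * x2 > 0 else 0

# Teach the perceptron the logical AND function.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)

# Opaque-box view: only inputs and outputs.
print(predict(w, 1, 1))  # prints: 1

# Glass-box view: learners can also inspect what was learned.
print(w)
```

The opaque-box view demonstrates capability; the glass-box view lets learners build a mental model of why the output comes out the way it does.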

AI4K12 advice for educators supporting K-12 students: 1. Use transparent AI demonstrations. 2. Help students build mental models. 3. Encourage students to build AI applications.
AI4K12 teacher guidelines for AI education

Our speakers are keen for learners to understand, at an age-appropriate level, what is going on “inside” an AI system, not just what the system can do. They believe it’s important for young people to build mental models of how AI systems work, and that when the young people get older, they should be able to use their increasing knowledge and skills to develop their own AI applications. This aligns with the views of some of our previous seminar speakers, including Finnish researchers Matti Tedre and Henriikka Vartiainen, who presented at our seminar series in November.

What is AI thinking?

Dave addressed the question of what AI thinking looks like in school. His approach was to start with computational thinking (he used the example of the Barefoot project’s description of computational thinking as a starting point) and describe AI thinking as an extension that includes the following skills:

  • Perception 
  • Reasoning
  • Representation
  • Machine learning
  • Language understanding
  • Autonomous robots

Dave described AI thinking as furthering the ideas of abstraction and algorithmic thinking commonly associated with computational thinking, stating that in the case of AI, computation actually is thinking. My own view is that to fully define AI thinking, we need to dig a bit deeper into, for example, what is involved in developing an understanding of perception and representation.

An image demonstrating that AI systems for object recognition may not distinguish between a real banana on a desk and the photo of a banana on a laptop screen.
Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

Thinking back to Matti Tedre and Henriikka Vartiainen’s description of CT 2.0, which focuses only on the ‘Learning’ aspect of the AI4K12 Five Big Ideas, and on the distinct ways of thinking underlying data-driven programming and traditional programming, we can see some differences between how the two groups of researchers describe the thinking skills young people need in order to understand and develop AI systems. Tedre and Vartiainen are working on a more finely granular description of ML thinking, which has the potential to impact the way we teach ML in school.

There is also another description of AI thinking. Back in 2020, Juan David Rodríguez García presented his system LearningML at one of our seminars. Juan David drew on a paper by Brummelen, Shen, and Patton, who extended Brennan and Resnick’s CT framework of concepts, practices, and perspectives, to include concepts such as classification, prediction, and generation, together with practices such as training, validating, and testing.
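The practices of training, validating, and testing rest on a simple idea: keep separate portions of the data for fitting a model, tuning it, and checking it. The sketch below is a minimal illustration of such a split (our own example, not code from the paper):

```python
import random

def train_validate_test_split(data, train=0.6, validate=0.2, seed=0):
    """Shuffle `data` and split it into train/validate/test portions.

    The test portion takes whatever remains after the first two splits.
    """
    items = list(data)
    random.Random(seed).shuffle(items)  # seeded for a reproducible split
    n = len(items)
    n_train = int(n * train)
    n_validate = int(n * validate)
    return (
        items[:n_train],                      # used to fit the model
        items[n_train:n_train + n_validate],  # used to tune choices
        items[n_train + n_validate:],         # held out for final testing
    )

# Hypothetical labelled dataset of ten examples.
examples = [(i, "even" if i % 2 == 0 else "odd") for i in range(10)]
train_set, validate_set, test_set = train_validate_test_split(examples)
print(len(train_set), len(validate_set), len(test_set))  # prints: 6 2 2
```

Keeping the test set untouched until the end is what makes the final accuracy figure an honest estimate, a point that generalises well beyond this toy example.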

What I take from this is that there is much still to research and discuss in this area! It’s a real privilege to be able to hear from experts in the field and compare and contrast different standpoints and views.

Resources for AI education

The AI4K12 project has already made a massive contribution to the field of AI education, and we were delighted to hear that Dave, Fred, and their colleagues have just been awarded the AAAI/EAAI Outstanding Educator Award for 2022 for AI4K12.org. An amazing achievement! What is particularly useful about the website is that it links to many resources, and that the Five Big Ideas give those resources a framework.

Through our seminars series, we are developing our own list of AI education resources shared by seminar speakers or attendees, or developed by us. Please do take a look.

Join our next seminar

Through these seminars, we’re learning a lot about AI education and what it might look like in school, and we’re having great discussions during the Q&A section.

On Tues 1 February at 17:00–18:30 GMT, we’ll hear from Tara Chklovski, who will talk about AI education in the context of the Sustainable Development Goals. To participate, click the button below to sign up, and we will send you information about joining. I really hope you’ll be there for this seminar!

The schedule of our upcoming seminars is online. You can also (re)visit past seminars and recordings on the blog.

How do we develop AI education in schools? A panel discussion
https://www.raspberrypi.org/blog/ai-education-schools-panel-uk-policy/
Tue, 30 Nov 2021

The post How do we develop AI education in schools? A panel discussion appeared first on Raspberry Pi Foundation.

AI is a broad and rapidly developing field of technology. Our goal is to make sure all young people have the skills, knowledge, and confidence to use and create AI systems. So what should AI education in schools look like?

To hear a range of insights into this, we organised a panel discussion as part of our seminar series on AI and data science education, which we co-host with The Alan Turing Institute. Here our panel chair Tabitha Goldstaub, Co-founder of CogX and Chair of the UK government’s AI Council, summarises the event. You can also watch the recording below.

As part of the Raspberry Pi Foundation’s monthly AI education seminar series, I was delighted to chair a special panel session to broaden the range of perspectives on the subject. The members of the panel were:

  • Chris Philp, UK Minister for Tech and the Digital Economy
  • Philip Colligan, CEO of the Raspberry Pi Foundation 
  • Danielle Belgrave, Research Scientist, DeepMind
  • Caitlin Glover, A level student, Sandon School, Chelmsford
  • Alice Ashby, student, University of Brighton

The session explored the UK government’s commitment in the recently published UK National AI Strategy stating that “the [UK] government will continue to ensure programmes that engage children with AI concepts are accessible and reach the widest demographic.” We discussed what it will take to make this a reality, and how we will ensure young people have a seat at the table.

Two teenage girls do coding during a computer science lesson.

Why AI education for young people?

It was clear that the Minister felt it is very important for young people to understand AI. He said, “The government takes the view that AI is going to be one of the foundation stones of our future prosperity and our future growth. It’s an enabling technology that’s going to have almost universal applicability across our entire economy, and that is why it’s so important that the United Kingdom leads the world in this area. Young people are the country’s future, so nothing is complete without them being at the heart of it.”

A teacher watches two female learners code in Code Club session in the classroom.

Our panelist Caitlin Glover, an A level student at Sandon School, reiterated this from her perspective as a young person. She told us that her passion for AI started initially because she wanted to help neurodiverse young people like herself. Her idea was to start a company that would build AI-powered products to help neurodiverse students.

What careers will AI education lead to?

A theme of the Foundation’s seminar series so far has been how learning about AI early may impact young people’s career choices. Our panelist Alice Ashby, who studies Computer Science and AI at Brighton University, told us about her own process of deciding on her course of study. She pointed to the fact that terms such as machine learning, natural language processing, self-driving cars, chatbots, and many others are currently all under the umbrella of artificial intelligence, but they’re all very different. Alice thinks it’s hard for young people to know whether it’s the right decision to study something that’s still so ambiguous.

A young person codes at a Raspberry Pi computer.

When I asked Alice what gave her the courage to take a leap of faith with her university course, she said, “I didn’t know it was the right move for me, honestly. I took a gamble, I knew I wanted to be in computer science, but I wanted to spice it up.” The AI ecosystem is very lucky that people like Alice choose to enter the field even without being taught what precisely it comprises.

We also heard from Danielle Belgrave, a Research Scientist at DeepMind with a remarkable career in AI for healthcare. Danielle explained that she was lucky to have had a mathematics teacher who encouraged her to work in statistics for healthcare. She said she wanted to ensure she could use her technical skills and her love of maths to make an impact on society, and to really help make the world a better place. Danielle works with biologists, mathematicians, philosophers, and ethicists as well as with data scientists and AI researchers at DeepMind. One possibility she suggested for improving young people’s understanding of what roles are available was industry mentorship. Linking people who work in the field of AI with school students was an idea that Caitlin was eager to confirm as very useful for young people her age.

We need investment in AI education in school

The AI Council’s Roadmap stresses how important it is not only to teach the skills needed to foster a pool of people who are able to research and build AI, but also to ensure that every child leaves school with the necessary AI and data literacy to become an engaged, informed, and empowered user of the technology. During the panel, the Minister, Chris Philp, spoke about the fact that people don’t have to be technical experts to come up with brilliant ideas: we need more people who can think creatively and have the confidence to adopt AI, and that starts in schools.

A class of primary school students do coding at laptops.

Caitlin is a perfect example of a young person who has been inspired about AI while in school. Sadly, though, among young people, and especially girls, she is in the minority in choosing to take computer science, the subject choice that gave her the chance to hear about AI in the classroom. And even for young people who do choose computer science at school, AI isn’t currently in the national Computing curriculum or part of GCSE computer science, so much of their learning takes place outside the classroom. Caitlin added that she had had to go out of her way to find information about AI; the majority of her peers are not even aware of the opportunities that may be out there. She suggested that we ensure AI is taught across all subjects, so that every learner sees how it can make their favourite subject even more magical and thinks “AI’s cool!”.

A primary school boy codes at a laptop with the help of an educator.

Philip Colligan, the CEO here at the Foundation, also described how AI could be integrated into existing subjects including maths, geography, biology, and citizenship classes. Danielle thoroughly agreed and made the very good point that teaching this way across the school would help prepare young people for the world of work in AI, where cross-disciplinary science is so important. She reminded us that AI is not one single discipline. Instead, many different skill sets are needed, including engineering new AI systems, integrating AI systems into products, researching problems to be addressed through AI, or investigating AI’s societal impacts and how humans interact with AI systems.

On hearing about this multitude of different skills, our discussion turned to the teachers who are responsible for imparting this knowledge, and to the challenges they face. 

The challenge of AI education for teachers

When we shifted the focus of the discussion to teachers, Philip said: “If we really want to equip every young person with the knowledge and skills to thrive in a world shaped by these technologies, then we have to find ways to evolve the curriculum and support teachers to develop the skills and confidence to teach that curriculum.”

Teenage students and a teacher do coding during a computer science lesson.

I asked the Minister what he thought needed to happen to ensure we achieved data and AI literacy for all young people. He said, “We need to work across government, but also across business and society more widely as well.” He went on to explain how important it was that the Department for Education (DfE) gets the support to make the changes needed, and that he and the Office for AI were ready to help.

Philip explained that the Raspberry Pi Foundation is one of the organisations in the consortium running the National Centre for Computing Education (NCCE), which is funded by the DfE in England. Through the NCCE, the Foundation has already supported thousands of teachers to develop their subject knowledge and pedagogy around computer science.

A recent study recognises that the investment made by the DfE in England is the most comprehensive effort globally to implement the computing curriculum, so we are starting from a good base. But Philip made it clear that now we need to expand this investment to cover AI.

Young people engaging with AI out of school

Philip described how brilliant it is to witness young people who choose to get creative with new technologies. As an example, he shared that the Foundation is seeing more and more young people employ machine learning in the European Astro Pi Challenge, where participants run experiments using Raspberry Pi computers on board the International Space Station. 

Philip also explained that, in the Foundation’s non-formal CoderDojo club network and its Coolest Projects tech showcase events, young people build their dream AI products supported by volunteers and mentors. Among these have been autonomous recycling robots and AI anti-collision alarms for bicycles. Like Caitlin with her company idea, this shows that young people are ready and eager to engage and create with AI.

We closed out the panel by going back to a point raised by Mhairi Aitken, who presented at the Foundation’s research seminar in September. Mhairi, an Alan Turing Institute ethics fellow, argues that children don’t just need to learn about AI, but that they should actually shape the direction of AI. All our panelists agreed on this point, and we discussed what it would take for young people to have a seat at the table.

Alice advised that we start by looking at our existing systems for engaging young people, such as Youth Parliament, student unions, and school groups. She also suggested adding young people to the AI Council, which I’m going to look into right away! Caitlin agreed and added that it would be great to make these forums virtual, so that young people from all over the country could participate.

The panel session was full of insight and felt very positive. Although the challenge of ensuring we have a data- and AI-literate generation of young people is tough, it’s clear that if we include them in finding the solution, we are in for a bright future. 

What’s next for AI education at the Raspberry Pi Foundation?

In the coming months, our goal at the Foundation is to increase our understanding of the concepts underlying AI education and how to teach them in an age-appropriate way. To that end, we will start to conduct a series of small AI education research projects, which will involve gathering the perspectives of a variety of stakeholders, including young people. We’ll make more information available on our research pages soon.

In the meantime, you can sign up for our upcoming research seminars on AI and data science education, and peruse the collection of related resources we’ve put together.

The post How do we develop AI education in schools? A panel discussion appeared first on Raspberry Pi Foundation.

Should we teach AI and ML differently to other areas of computer science? A challenge
https://www.raspberrypi.org/blog/research-seminar-data-centric-ai-ml-teaching-in-school/
Thu, 14 Oct 2021
Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host a series of free research seminars about how to teach AI and data science to young people.

In the second seminar of the series, we were excited to hear from Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who presented on the topic of teaching AI and machine learning (ML) from a data-centric perspective. Their talk raised the question of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Machine behaviour — a new field of study?

The rationale behind the speakers’ work is a concept they call hybrid interaction system, referring to the way that humans and machines interact. To explain this concept, Carsten referred to a 2019 article published in Nature by Iyad Rahwan and colleagues: Machine behaviour. The article’s authors propose that the study of AI agents (complex and simple algorithms that make decisions) should be a separate, cross-disciplinary field of study, because of the ubiquity and complexity of AI systems, and because these systems can have both beneficial and detrimental impacts on humanity, which can be difficult to evaluate. (Our previous seminar by Mhairi Aitken highlighted some of these impacts.) The authors state that to study this field, we need to draw on scientific practices from across different fields, as shown below:

Machine behaviour as a field sits at the intersection of AI engineering and behavioural science. Quantitative evidence from machine behaviour studies feeds into the study of the impact of technology, which in turn feeds questions and practices into engineering and behavioural science.
The interdisciplinarity of machine behaviour. (Image taken from Rahwan et al [1])

In establishing their argument, the authors compare the study of animal behaviour and machine behaviour, citing that both fields consider aspects such as mechanism, development, evolution and function. They describe how part of this proposed machine behaviour field may focus on studying individual machines’ behaviour, while collective machines and what they call ‘hybrid human-machine behaviour’ can also be studied. By focusing on the complexities of the interactions between machines and humans, we can think both about machines shaping human behaviour and humans shaping machine behaviour, and a sort of ‘co-behaviour’ as they work together. Thus, the authors conclude that machine behaviour is an interdisciplinary area that we should study in a different way to computer science.

Carsten and his team said that, as educators, we will need to draw on the parameters and frameworks of this machine behaviour field to be able to effectively teach AI and machine learning in school. They argue that our approach should be centred on data, rather than on code. I believe this is a challenge to those of us developing tools and resources to support young people, and that we should be open to these ideas as we forge ahead in our work in this area.

Ideas or artefacts?

In her influential 2006 article, Jeannette Wing popularised computational thinking as being about ‘ideas, not artefacts’. When we, the computing education community, started to think about computational thinking, we moved from focusing on specific technology — and how to understand and use it — to the ideas or principles underlying the domain. The challenge now is: have we gone too far in that direction?

Carsten argued that, if we are to understand machine behaviour, and in particular, human-machine co-behaviour, which he refers to as the hybrid interaction system, then we need to be studying artefacts as well as ideas.

Throughout the seminar, the speakers reminded us to keep in mind artefacts, issues of bias, the role of data, and potential implications for the way we teach.

Studying machine learning: a different focus

In addition, Carsten highlighted a number of differences between learning ML and learning other areas of computer science, including traditional programming:

  1. The process of problem-solving is different. Traditionally, we might try to understand the problem, derive a solution in terms of an algorithm, then understand the solution. In ML, the data shapes the model, and we do not need a deep understanding of either the problem or the solution.
  2. Our tolerance of inaccuracy is different. Traditionally, we teach young people to design programs that lead to an accurate solution. However, the nature of ML means that there will be an error rate, which we strive to minimise. 
  3. The role of code is different. Rather than the code doing the work as in traditional programming, the code is only a small part of a real-world ML system. 

These differences imply that our teaching should adapt too.
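These differences can be made concrete with a small sketch. The fruit-sorting example below is our own invention, not from the seminar: a hand-written rule encodes the solution directly, while a ‘learned’ classifier is shaped entirely by labelled data and may retain an error rate.

```python
# Toy contrast between a hand-coded rule and a model shaped by data.
# The fruit data, names, and threshold search are invented for illustration.

# Traditional approach: understand the problem, then encode the solution.
def classify_by_rule(diameter_cm):
    """Hand-written rule: fruit under 10 cm across is an orange."""
    return "orange" if diameter_cm < 10 else "grapefruit"

# ML approach: supply labelled examples and a search procedure;
# the data, not the programmer, determines the decision boundary.
def learn_threshold(examples):
    """Pick the split on diameter that misclassifies the fewest examples."""
    best_threshold, best_errors = None, float("inf")
    for t in sorted(d for d, _ in examples):
        errors = sum(
            1 for d, label in examples
            if ("orange" if d < t else "grapefruit") != label
        )
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold, best_errors  # an error rate may remain

training_data = [
    (7, "orange"), (8, "orange"), (9, "orange"), (11, "orange"),
    (10, "grapefruit"), (12, "grapefruit"), (13, "grapefruit"), (14, "grapefruit"),
]
threshold, errors = learn_threshold(training_data)
print(f"learned threshold: {threshold} cm, training errors: {errors}")
```

Because the training data overlaps, the learned model tolerates one misclassified example rather than achieving a perfectly accurate solution, and the code itself is only a small part of the system: most of the work lies in the data.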


ProDaBi: a programme for teaching AI, data science, and ML in secondary school

In Germany, education is devolved to state governments. Although computer science (known as informatics) was only last year introduced as a mandatory subject in lower secondary schools in North Rhine-Westphalia, where Paderborn is located, it has been taught at the upper secondary levels for many years. ProDaBi is a project that researchers have been running at Paderborn University since 2017, with the aim of developing a secondary school curriculum around data science, AI, and ML.

The ProDaBi curriculum includes:

  • Two modules for 11- to 12-year-olds covering decision trees and data awareness (ethical aspects), introduced this year
  • A short course for 13-year-olds covering aspects of artificial intelligence, through the game Hexapawn
  • A set of modules for 14- to 15-year-olds, covering data science, data exploration, decision trees, neural networks, and data awareness (ethical aspects), using Jupyter notebooks
  • A project-based course for 18-year-olds, including the above topics at a more advanced level, using CODAP and Jupyter notebooks to develop practical skills through projects; this course has been running the longest and is currently in its fourth iteration

Although the ProDaBi project site is in German, an English translation is available.

Modules developed as part of the ProDaBi project

Our speakers described example activities from three of the modules:

  • Hexapawn, a two-player game inspired by the work of Donald Michie in 1961. The purpose of this activity is to support learners in reflecting on the way the machine learns. Children can then relate the activity to the behaviour of AI agents such as autonomous cars. An English version of the activity is available.
  • Data cards, a series of activities to teach about decision trees. The cards are designed in a ‘Top Trumps’ style, and based on food items, with unplugged and digital elements. 
  • Data awareness, a module focusing on the amount of data an individual can generate as they move through a city, in this case through the mobile phone network. Children are encouraged to reflect on personal data in the context of the interaction between the human and data-driven artefact, and how their view of the world influences their interpretation of the data that they are given.
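Hexapawn itself needs a full board to implement, but Michie’s matchbox idea can be sketched with an even smaller game. The misère Nim variant below (take 1 or 2 sticks; whoever takes the last stick loses) is our own stand-in, not the ProDaBi activity: each game state the machine can face gets a ‘box’ of candidate moves, and after a defeat the latest move that still has an alternative is discarded.

```python
import random

random.seed(0)  # deterministic runs for this sketch

# "Matchboxes": for each number of sticks the machine may face on its turn,
# a box of candidate moves. (Toy game, invented for illustration.)
boxes = {n: [m for m in (1, 2) if m <= n] for n in range(1, 6)}

def play_episode():
    """Machine (moves first) vs a random opponent; taking the last stick loses."""
    sticks, history = 5, []
    while True:
        move = random.choice(boxes[sticks])   # machine consults its box
        history.append((sticks, move))
        sticks -= move
        if sticks == 0:
            return history, False             # machine took the last stick
        sticks -= random.choice([m for m in (1, 2) if m <= sticks])
        if sticks == 0:
            return history, True              # opponent took the last stick

def train(episodes=3000):
    for _ in range(episodes):
        history, won = play_episode()
        if not won:
            # Punish the latest losing move that still has an alternative,
            # roughly as in Michie-style matchbox learning.
            for state, move in reversed(history):
                if len(boxes[state]) > 1:
                    boxes[state].remove(move)
                    break

train()
print(boxes)  # only moves that avoid defeat survive in the visited boxes
```

After training, the box for five sticks keeps only ‘take 1’, which steers the random opponent into losing positions, while boxes the machine never visits stay untouched. Discarding losing moves is exactly the reflection point of the Hexapawn activity: the machine ‘learns’ without any explicit strategy being programmed.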

Questioning how we should teach AI and ML at school

There was a lot to digest in this seminar: challenging ideas and some new concepts, for me anyway. An important takeaway for me was how much we do not yet know about the concepts and skills we should be teaching in school around AI and ML, and about the approaches that we should be using to teach them effectively. Research such as that being carried out in Paderborn, demonstrating a data-centric approach, can really augment our understanding, and I’m looking forward to following the work of Carsten and his team.

Carsten and colleagues ended with this summary and discussion point for the audience:

“‘AI education’ requires developing an adequate picture of the hybrid interaction system — a kind of data-driven, emergent ecosystem which needs to be made explicit to understand the transformative role as well as the technological basics of these artificial intelligence tools and how they are related to data science.”

You can catch up on the seminar, including the Q&A with Carsten and his colleagues, in the seminar video recording.

Join our next seminar

This seminar really extended our thinking about AI education, and we look forward to introducing new perspectives from different researchers each month. At our next seminar on Tuesday 2 November at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Matti Tedre and Henriikka Vartiainen (University of Eastern Finland). The two Finnish researchers will talk about emerging trajectories in ML education for K-12. We look forward to meeting you there.

Carsten and his colleagues are also running a series of seminars on AI and data science: you can find out about these on their registration page.

You can increase your own understanding of machine learning by joining our latest free online course!


[1] Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., … & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486.

The post Should we teach AI and ML differently to other areas of computer science? A challenge appeared first on Raspberry Pi Foundation.

What’s a kangaroo?! AI ethics lessons for and from the younger generation
https://www.raspberrypi.org/blog/ai-ethics-lessons-education-children-research/
Mon, 20 Sep 2021
Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host speakers from the UK, Finland, Germany, and the USA presenting a series of free research seminars about AI and data science education for young people. These rapidly developing technologies have a huge and growing impact on our lives, so it’s important for young people to understand them both from a technical and a societal perspective, and for educators to learn how to best support them to gain this understanding.

In our first seminar we were beyond delighted to hear from Dr Mhairi Aitken, Ethics Fellow at The Alan Turing Institute. Mhairi is a sociologist whose research examines social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. You can catch up on her full presentation and the Q&A with her in the video below.

Why we need AI ethics

The increased use of AI in society and industry is bringing some amazing benefits. In healthcare for example, AI can facilitate early diagnosis of life-threatening conditions and provide more accurate surgery through robotics. AI technology is also already being used in housing, financial services, social services, retail, and marketing. Concerns have been raised about the ethical implications of some aspects of these technologies, and Mhairi gave examples of a number of controversies to introduce us to the topic.

“Ethics considers not what we can do but rather what we should do — and what we should not do.”

Mhairi Aitken

One such controversy in England took place during the coronavirus pandemic, when an AI system was used to make decisions about school grades awarded to students. The system’s algorithm drew on grades awarded in previous years to other students of a school to upgrade or downgrade grades given by teachers; this was seen as deeply unfair and raised public consciousness of the real-life impact that AI decision-making systems can have.

Another high-profile controversy was caused by biased machine learning-based facial recognition systems and explored in Shalini Kantayya’s documentary Coded Bias. Such facial recognition systems have been shown to be much better at recognising a white male face than a black female one, demonstrating the inequitable impact of the technology.

What should AI be used for?

There is a clear need to consider both the positive and negative impacts of AI in society. Mhairi stressed that using AI effectively and ethically is not just about mitigating negative impacts but also about maximising benefits. She told us that bringing ethics into the discussion means that we start to move on from what AI applications can do to what they should and should not do. To outline how ethics can be applied to AI, Mhairi first outlined four key ethical principles:

  • Beneficence (do good)
  • Nonmaleficence (do no harm)
  • Autonomy
  • Justice

Mhairi shared a number of concrete questions that ethics raise about new technologies including AI: 

  • How do we ensure the benefits of new technologies are experienced equitably across society?
  • Do AI systems lead to discriminatory practices and outcomes?
  • Do new forms of data collection and monitoring threaten individuals’ privacy?
  • Do new forms of monitoring lead to a Big Brother society?
  • To what extent are individuals in control of the ways they interact with AI technologies or how these technologies impact their lives?
  • How can we protect against unjust outcomes, ensuring AI technologies do not exacerbate existing inequalities or reinforce prejudices?
  • How do we ensure diverse perspectives and interests are reflected in the design, development, and deployment of AI systems? 

Who gets to inform AI systems? The kangaroo metaphor

To mitigate negative impacts and maximise benefits of an AI system in practice, it’s crucial to consider the context in which the system is developed and used. Mhairi illustrated this point using the story of an autonomous vehicle, a self-driving car, developed in Sweden in 2017. It had been thoroughly safety-tested in the country, including tests of its ability to recognise wild animals that may cross its path, for example elk and moose. However, when the car was used in Australia, it was not able to recognise kangaroos that hopped into the road! Because the system had not been tested with kangaroos during its development, it did not know what they were. As a result, the self-driving car’s safety and reliability significantly decreased when it was taken out of the context in which it had been developed, jeopardising people and kangaroos.

Mitigating negative impacts and maximising benefits of AI systems requires actively involving the perspectives of groups that may be affected by the system — ‘kangaroos’ in Mhairi’s metaphor.

Mhairi used the kangaroo example as a metaphor to illustrate ethical issues around AI: the creators of an AI system make certain assumptions about what an AI system needs to know and how it needs to operate; these assumptions always reflect the positions, perspectives, and biases of the people and organisations that develop and train the system. Therefore, AI creators need to include metaphorical ‘kangaroos’ in the design and development of an AI system to ensure that their perspectives inform the system. Mhairi highlighted children as an important group of ‘kangaroos’. 
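The kangaroo failure is easy to reproduce in miniature. In the invented sketch below (the features, animals, and numbers are all ours, not from the Swedish system Mhairi described), a nearest-neighbour classifier trained only on northern wildlife has no vocabulary for ‘kangaroo’, or even for ‘unknown’, so it confidently mislabels one:

```python
# Feature vector: (typical height in metres, legs it stands on, does it hop?)
# All animals and numbers are invented for illustration.
training_animals = {
    "moose": (2.1, 4, 0),
    "elk":   (1.5, 4, 0),
    "deer":  (1.0, 4, 0),
}

def classify(features):
    """Nearest neighbour over the training set. The system can only answer
    with labels it saw during development: there is no 'unknown' option."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_animals,
               key=lambda name: squared_distance(training_animals[name], features))

kangaroo = (1.6, 2, 1)     # stands on two legs and hops
print(classify(kangaroo))  # confidently wrong: nothing like the training data
```

The practical fix is the one Mhairi suggests: bring the ‘kangaroos’, meaning the contexts and groups the system will actually meet, into its design, development, and testing.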

AI in children’s lives

AI may have far-reaching consequences in children’s lives, where it’s being used for decision-making around access to resources and support. Mhairi explained the impact that AI systems are already having on young people’s lives through these systems’ deployment in children’s education, in apps that children use, and in children’s lives as consumers.

Children can be taught not only that AI impacts their lives, but also that it can get things wrong and that it reflects human interests and biases. However, Mhairi was keen to emphasise that we need to find out what children know and want to know before we make assumptions about what they should be taught. Moreover, engaging children in discussions about AI is not only about them learning about AI, it’s also about ethical practice: what can people making decisions about AI learn from children by listening to their views and perspectives?

AI research that listens to children

UNICEF, the United Nations Children’s Fund, has expressed concerns about the impact of new AI technologies used on children and young people. They have developed the UNICEF Requirements for Child-Centred AI.

UNICEF’s requirements for child-centred AI, as presented by Mhairi:

  • Support children’s development and well-being
  • Ensure inclusion of and for children
  • Prioritise fairness and non-discrimination for children
  • Protect children’s data and privacy
  • Ensure safety for children
  • Provide transparency, explainability, and accountability for children
  • Empower governments and businesses with knowledge of AI and children’s rights
  • Prepare children for present and future developments in AI
  • Create an enabling environment for child-centred AI
  • Engage in digital cooperation

Together with UNICEF, Mhairi and her colleagues working on the Ethics Theme in the Public Policy Programme at The Alan Turing Institute are engaged in new research to pilot UNICEF’s Requirements for Child-Centred AI, and to examine how these impact public sector uses of AI. A key aspect of this research is to hear from children themselves and to develop approaches to engage children to inform future ethical practices relating to AI in the public sector. The researchers hope to find out how we can best engage children and ensure that their voices are at the heart of the discussion about AI and ethics.

We all learned a tremendous amount from Mhairi and her work on this important topic. After her presentation, we had a lively discussion in which many of the participants relayed the conversations they had had about AI ethics, shared their own concerns and experiences, and posted many links to resources. The Q&A with Mhairi is included in the video recording.

What we love about our research seminars is that everyone attending can share their thoughts, and as a result we learn so much from attendees as well as from our speakers!

It’s impossible to cover more than a tiny fraction of the seminar here, so I do urge you to take the time to watch the seminar recording. You can also catch up on our previous seminars through our blogs and videos.

Join our next seminar

We have six more seminars in our free series on AI, machine learning, and data science education, taking place every first Tuesday of the month. At our next seminar on Tuesday 5 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who will be presenting on the topic of teaching AI and machine learning (ML) from a data-centric perspective. Their talk will raise the question of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Sign up now and we’ll send you the link to join on the day of the seminar — don’t forget to put the date in your diary.

I look forward to meeting you there!

In the meantime, we’re offering a brand-new, free online course that introduces machine learning with a practical focus — ideal for educators and anyone interested in exploring AI technology for the first time.

The post What’s a kangaroo?! AI ethics lessons for and from the younger generation appeared first on Raspberry Pi Foundation.
