AI literacy Archives - Raspberry Pi Foundation
https://www.raspberrypi.org/blog/tag/ai-literacy/

Experience AI: The excitement of AI in your classroom
https://www.raspberrypi.org/blog/experience-ai-launch-lessons/
Tue, 18 Apr 2023

We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

The importance of AI and machine learning education

Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

Develop a strong understanding of the concepts of AI and machine learning with your learners.

Experience AI Lessons

Something we get asked a lot is: "How do I teach AI and machine learning with my class?" To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that give you everything you need, including lesson plans, slide decks, worksheets, and videos.

The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

The six lessons

  1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
  2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally, the learners explore classification, a specific application of ML.
  3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
  4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means. 
  5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy (the sketch after this list shows the same train-then-test idea in code).
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.
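
If you’d like a feel for the train-then-test idea running through lessons 3 to 5 before teaching them, the minimal sketch below shows it in a few lines of Python with scikit-learn. It is not part of the lesson materials (the lessons themselves use friendlier, no-code tools), and the fruit measurements are invented purely for illustration.

    # Minimal sketch: train a classifier on labelled examples, then test it
    # on data it has never seen. The 'dataset' is invented: each example is
    # [weight in grams, redness from 0 to 1], labelled 'apple' or 'tomato'.
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    features = [
        [150, 0.80], [170, 0.60], [140, 0.70], [160, 0.75],  # apples
        [110, 0.90], [95, 0.95], [120, 0.85], [100, 0.92],   # tomatoes
    ]
    labels = ["apple"] * 4 + ["tomato"] * 4

    # Hold back a quarter of the data so the model is judged on unseen examples
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0
    )

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train, y_train)

    print("Accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))

Retraining with a different or smaller training set and watching the accuracy change is a quick way to experience what ‘data-driven’ means in practice.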

As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

Support for teachers

We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

Tell us your feedback

We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

Visit the Experience AI website today to get started.

AI education resources: What do we teach young people?
https://www.raspberrypi.org/blog/ai-education-resources-what-to-teach-seame-framework/
Tue, 28 Mar 2023

People have many different reasons to think that children and teenagers need to learn about artificial intelligence (AI) technologies. Whether it’s that AI impacts young people’s lives today, or that understanding these technologies may open up careers in their future — there is broad agreement that school-level education about AI is important.

But how do you actually design lessons about AI, a technical area that is entirely new to young people? That was the question we needed to answer as we started Experience AI, our exciting collaboration with DeepMind, a leading AI company.

Our approach to developing AI education resources

As part of Experience AI, we are creating a free set of lesson resources to help teachers introduce AI and machine learning (ML) to KS3 students (ages 11 to 14). In England this area is not currently part of the national curriculum, but it’s starting to appear in all sorts of learning materials for young people. 

While developing the six Experience AI lessons, we took a research-informed approach. We built on insights from the series of research seminars on AI and data science education we had hosted in 2021 and 2022, and on research we ourselves have been conducting at the Raspberry Pi Computing Education Research Centre.

As part of this research, we reviewed over 500 existing resources that are used to teach AI and ML. We found that the vast majority of them were one-off activities, and many claimed to be appropriate for learners of any age. There were very few sets of lessons, or units of work, that were tailored to a specific age group. Activities often had vague learning objectives, or none at all. We rarely found associated assessment activities. These were all shortcomings we wanted to avoid in our set of lessons.

To analyse the content of AI education resources, we use a simple framework called SEAME. This framework is based on work I did in 2018 with Professor Paul Curzon at Queen Mary University of London, running professional development for educators on teaching machine learning.

The SEAME framework gives you a simple way to group learning objectives and resources related to teaching AI and ML, based on whether they focus on social and ethical aspects (SE), applications (A), models (M), or engines (E, i.e. how AI works). We hope that it will be a useful tool for anyone who is interested in looking at resources to teach AI. 

What do AI education resources focus on?

The four levels of the SEAME framework do not indicate a hierarchy or sequence. Instead, they offer a way for teachers, resource developers, and researchers to talk about the focus of AI learning activities.

Social and ethical aspects (SE)

The SE level covers activities that relate to the impact of AI on everyday life, and to its implications for society. Learning objectives and their related resources categorised at this level introduce students to issues such as privacy or bias concerns, the impact of AI on employment, misinformation, and the potential benefits of AI applications.

An example activity in the Experience AI lessons where learners think about the social and ethical issues of an AI application that predicts what subjects they might want to study. This activity is mostly focused on the social and ethical level of the SEAME framework, but also links to the applications and models levels.

Applications (A)

The A level refers to activities related to applications and systems that use AI or ML models. At this level, learners do not learn how to train models themselves, or how such models work. Learning objectives at this level include knowing a range of AI applications and starting to understand the difference between rule-based and data-driven approaches to developing applications.

Models (M)

The M level concerns the models underlying AI and ML applications. Learning objectives at this level include learners understanding the processes used to train and test models. For example, through resources focused on the M level, students could learn about the different learning paradigms of ML (i.e., supervised, unsupervised, or reinforcement learning).

An example activity in the Experience AI lessons where students learn about classification. This activity is mostly focused on the models level of the SEAME framework, but also links to the social and ethical and the applications levels.
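
For teachers who want a concrete picture of two of these learning paradigms, here is a minimal sketch (not part of the lessons) in Python with scikit-learn. The same six invented points are learned from with labels (supervised) and without them (unsupervised); reinforcement learning is left out, as it doesn’t reduce to a few lines.

    # Supervised vs unsupervised learning on the same invented 2D points
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    points = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
    labels = ["low", "low", "low", "high", "high", "high"]

    # Supervised: the model sees the labels and learns to predict them
    classifier = LogisticRegression().fit(points, labels)
    print(classifier.predict([[2, 2], [9, 9]]))  # expected: ['low' 'high']

    # Unsupervised: no labels are given; the model groups similar points itself
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    print(clusters)  # two cluster IDs, e.g. [1 1 1 0 0 0]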

Engines (E)

The E level is related to the engines that make AI models work. This is the most hidden and complex level, and for school-aged learners may need to be taught using unplugged activities and visualisations. Learning objectives could include understanding the basic workings of systems such as data-driven decision trees and artificial neural networks.
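
Unplugged activities are the natural route at this level, but if you are curious what a trained, data-driven decision tree actually contains, the minimal sketch below (our own illustration, with invented data, not part of the lesson materials) prints one out.

    # Peek inside a trained decision tree: the 'engine' is a series of
    # learned threshold tests on the input features.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented examples: [has_fur, lays_eggs] -> class
    examples = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
    labels = ["mammal", "mammal", "bird", "bird", "platypus"]

    tree = DecisionTreeClassifier(random_state=0).fit(examples, labels)
    print(export_text(tree, feature_names=["has_fur", "lays_eggs"]))

Training the same tree on different examples and printing it again shows directly how the training data shapes the learned rules.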

Covering the four levels

Some learning activities may focus on a single level, but activities can also span more than one level. For example, an activity may start with learners trying out an existing ‘rock-paper-scissors’ application that uses an ML model to recognise hand shapes. This would cover the applications level. If learners then move on to train the model to improve its accuracy by adding more image data, they work at the models level.

A teacher helps a young person with a coding project.

Other activities cover several SEAME levels to address a specific concept. For example, an activity focussed on bias might start with an example of the societal impact of bias (SE level). Learners could then discuss the AI applications they use and reflect on how bias impacts them personally (A level). The activity could finish with learners exploring related data in a simple ML model and thinking about how representative the data is of all potential application users (M level).

The set of Experience AI lessons we are developing in collaboration with DeepMind covers all four levels of SEAME. The lessons are based on carefully designed learning objectives and specifically targeted to KS3 students. Lesson materials include presentations, videos, student activities, and assessment questions.

The SEAME framework as a tool for research on AI education

For researchers, we think the SEAME framework will, for example, be useful to analyse school curriculum material to see whether some age groups have more learning activities available at one level than another, and whether this changes over time. We may find that primary school learners work mostly at the SE and A levels, and secondary school learners move between the levels with increasing clarity as they develop their knowledge. It may also be the case that some learners or teachers prefer activities focused on one level rather than another. However, we can’t be sure: research is needed to investigate the teaching and learning of AI and ML across all year groups.

That’s why we’re excited to welcome Salomey Afua Addo to the Raspberry Pi Computing Education Research Centre. Salomey joined the Centre as a PhD student in January, and her research will focus on approaches to the teaching and learning of AI. We’re looking forward to seeing the results of her work.

If you’d like to get involved with Experience AI as an educator and use our lessons with your class, you can start by visiting us at experience-ai.org.

Experience AI with the Raspberry Pi Foundation and DeepMind
https://www.raspberrypi.org/blog/experience-ai-deepmind-ai-education/
Mon, 26 Sep 2022

I am delighted to announce a new collaboration between the Raspberry Pi Foundation and a leading AI company, DeepMind, to inspire the next generation of AI leaders.

The Raspberry Pi Foundation’s mission is to enable young people to realise their full potential through the power of computing and digital technologies. Our vision is that every young person — whatever their background — should have the opportunity to learn how to create and solve problems with computers.

With the rapid advances in artificial intelligence — from machine learning and robotics, to computer vision and natural language processing — it’s increasingly important that young people understand how AI is affecting their lives now and the role that it can play in their future. 

Experience AI is a new collaboration between the Raspberry Pi Foundation and DeepMind that aims to help young people understand how AI works and how it is changing the world. We want to inspire young people about careers in AI and help them understand how to access those opportunities, including through their subject choices.

Experience AI 

More than anything, we want to make AI relevant and accessible to young people from all backgrounds, and to make sure that we engage young people from backgrounds that are underrepresented in AI careers. 

The collaboration has two strands: Inspire and Experiment. 

Inspire: To engage and inspire students about AI and its impact on the world, we are developing a set of free learning resources and materials including lesson plans, assembly packs, videos, and webinars, alongside training and support for educators. This will include an introduction to the technologies that enable AI; how AI models are trained; how to frame problems for AI to solve; the societal and ethical implications of AI; and career opportunities. All of this will be designed around real-world and relatable applications of AI, engaging a wide range of diverse interests and useful to teachers from different subjects.

Experiment: Building on the excitement generated through Inspire, we are also designing an AI challenge that will support young people to experiment with AI technologies and explore how these can be used to solve real-world problems. This will provide an opportunity for students to get hands-on with technology and data, along with support for educators. 

Our initial focus is learners aged 11 to 14 in the UK. We are working with teachers, students, and DeepMind engineers to ensure that the materials and learning experiences are engaging and accessible to all, and that they reflect the latest AI technologies and their application.

As with all of our work, we want to be research-led, and the Raspberry Pi Foundation research team has been working over the past year to understand the latest research on what works in AI education.

Next steps 

Development of the learning materials is underway now, and we have released the full set of resources on experience-ai.org. We are currently piloting the challenge for release in September 2023.

Classroom activities to discuss machine learning accuracy and ethics | Hello World #18
https://www.raspberrypi.org/blog/classroom-activity-machine-learning-accuracy-ethics-hello-world-18/
Wed, 10 Aug 2022

In Hello World issue 18, available as a free PDF download, teacher Michael Jones shares how to use Teachable Machine with learners aged 13–14 in your classroom to investigate issues of accuracy and ethics in machine learning models.

Machine learning: Accuracy and ethics

The landscape for working with machine learning/AI/deep learning has grown considerably over the last couple of years. Students are now able to develop their understanding from the hard-coded end via resources such as Machine Learning for Kids, get their hands dirty using relatively inexpensive hardware such as the Nvidia Jetson Nano, and build a classification machine using the Google-driven Teachable Machine resources. I have used all three of the above with my students, and this article focuses on Teachable Machine.

For the worried, there is absolutely no coding involved in this resource; the ‘machine’ behind the portal does the hard work for you. For my Year 9 classes (students aged 13 to 14) undertaking a short, three-week module, this was ideal. The coding is important, but was not my focus. For this module, I’m more concerned with the fuzzy end of AI, including how credible AI decisions are, and the elephant-in-the-room aspect of bias and potential for harm.

Getting started with Teachable Machine activities

There are three possible routes to take in Teachable Machine; my focus is the ‘Image Project’, and within this, the ‘Standard image model’. From there, you are presented with a basic training scenario template (see Hello World issue 16, pages 84–86, for a step-by-step set-up and training guide). For this part of the project, my students trained the machine to recognise different breeds of dog, with border collie, labrador, saluki, and so on as classes.

Any AI system devoted to recognition requires a substantial set of training data. Fortunately, there are a number of freely available data sets online (for example, download a folder of dog photos separated by breed by accessing helloworld.cc/dogdata). Be warned, these can be large, consisting of thousands of images. If you have more time, you may want to set students off to collect data to upload using a camera (just be aware that this can present safeguarding considerations). This is a key learning point with your students and an opportunity to discuss the time it takes to gather such data, and variations in the data (for example, images of dogs from the front, side, or top).

Image recognition is a common application of machine learning technology.

Once you have downloaded your folders, upload the images to your Teachable Machine project. It is unlikely that you will be able to upload a whole subfolder at once — my students have found that the optimum number of images seems to be twelve. Remember to build this time for downloading and uploading into your lesson plan. This is a good opportunity to discuss the need for balance in the training data. Ask questions such as, “How likely would the model be to identify a saluki if the training set contained 10 salukis and 30 of the other dogs?” This is a left-field way of dropping the idea of bias into the exploration of AI — more on that later!
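
A quick way to check balance before you upload is to count the images in each breed folder. This short Python sketch is not from the original article; it assumes the folder-per-breed layout described above, and the folder and file names are hypothetical.

    # Count images per class to spot an unbalanced training set
    from pathlib import Path

    dataset = Path("dog-photos")  # hypothetical folder downloaded earlier
    for breed in sorted(p for p in dataset.iterdir() if p.is_dir()):
        count = len(list(breed.glob("*.jpg")))
        print(f"{breed.name}: {count} images")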

Accuracy issues in machine learning models

If you have got this far, the heavy lifting is complete and Google’s training engine will now do the work for you. Once you have started training your model, leave the system to complete its work; it takes seconds, even on large sets of data. Once it’s done, you should be ready to test your model. If all has gone well and a webcam is attached to your computer, the Output window will give a prediction of what is being viewed. Again, the article in Hello World issue 16 takes you through the exact steps of this process. Make sure you have several images ready to test. See Figure 1a for the response to an image of a saluki presented to the model. As you might expect, it is showing as a 100 percent prediction.

Figure 1: Outputs of a Teachable Machine model classifying photos of dog breeds. 1a (left): Photo of a saluki. 1b (right): Photo of a Samoyed and two people.

It will spark an interesting discussion if you now try the same operation with an image containing items other than the one you’re testing. For example, see Figure 1b, in which two people are in the image along with the Samoyed dog. The model is undecided, as the people are affecting the outcome. This raises the question of accuracy. Which features are being used to identify the dogs as border collie and saluki? Why are the humans in the image throwing the model off the scent?
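
If you want to look at these confidence values outside the browser, Teachable Machine’s ‘Export Model’ option can download the trained model in Keras format. The sketch below is our own hedged example of loading such an export in Python; the file names match what the export typically produces, and dog.jpg stands in for any test photo.

    # Load a Teachable Machine Keras export and print per-class confidence
    import numpy as np
    from PIL import Image, ImageOps
    from tensorflow.keras.models import load_model

    model = load_model("keras_model.h5", compile=False)  # from the export
    class_names = [line.strip() for line in open("labels.txt")]

    # Teachable Machine image models expect 224x224 pixels scaled to [-1, 1]
    image = ImageOps.fit(Image.open("dog.jpg").convert("RGB"), (224, 224))
    data = np.asarray(image, dtype=np.float32) / 127.5 - 1

    probabilities = model.predict(data[np.newaxis, ...])[0]
    for name, p in sorted(zip(class_names, probabilities), key=lambda x: -x[1]):
        print(f"{name}: {p:.0%}")

The percentages are the same numbers the Output window shows; seeing them as a full probability distribution (they sum to 1) helps explain why adding people to the photo pulls confidence away from the dog classes.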

Getting closer to home, training a model on human faces provides an opportunity to explore AI accuracy through the question of what might differentiate a female from a male face. You can find a model at helloworld.cc/maleorfemale that contains 5418 images almost evenly spread across male and female faces (see Figure 2). Note that this model will take a little longer to train.

Figure 2: Two photo sets of faces labeled either male or female, uploaded to Teachable Machine.

Once trained, try the model out. Props really help — a top hat, wig, and beard give the model a testing time (pun intended). In this test (see Figure 3), I presented myself to the model face-on and, unsurprisingly, I came out as 100 percent male. However, adding a judge’s wig forces the model into a rethink, and a beard produces a variety of results, but leaves the model unsure. It might be reasonable to assume that our model uses hair length as a strong feature. Adding a top hat to the ensemble brings the model back to a 100 percent prediction that the image is of a male.

Figure 3: Outputs of a Teachable Machine model classifying photos of the author’s face as male or female with different degrees of confidence.

Machine learning uses a best-fit principle. The outputs, in this case whether I am male or female, have a greater certainty of male (65 percent) versus a lesser certainty of female (35 percent) if I wear a beard (Figure 3, second image from the right). Remove the beard and the likelihood of me being female increases by 2 percent (Figure 3, second image from the left).

Bias in machine learning models

Within a fairly small set of parameters, most human faces are similar. However, when you start digging, the research points to there being bias in AI (whether this is conscious or unconscious is a debate for another day!). You can exemplify this by first creating classes with labels such as ‘young smart’, ‘old smart’, ‘young not smart’, and ‘old not smart’. Select images that you think would fit the classes, and train them in Teachable Machine. You can then test the model by asking your students to find images they think fit each category. Run them against the model and ask students to debate whether the AI is acting fairly, and if not, why they think that is. Who is training these models? What images are they receiving?

Similarly, you could create classes of images of known past criminals and heroes. Train the model before putting yourself in front of it. How far up the percentage scale are you towards being a criminal? It soon becomes frighteningly worrying that unless you are white and seemingly middle class, AI may prove problematic to you, from decisions on financial products such as mortgages through to mistaken arrest and identification.
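
One way to make the fairness debate concrete is to compare how often each group is classified correctly, rather than looking at a single overall score. The sketch below is our own illustration, with invented labels standing in for two groups of test images.

    # Per-group accuracy can reveal bias that overall accuracy hides
    from sklearn.metrics import confusion_matrix

    groups = ["A", "B"]
    true_labels      = ["A", "A", "A", "A", "B", "B", "B", "B"]
    predicted_labels = ["A", "A", "A", "A", "B", "A", "A", "B"]

    matrix = confusion_matrix(true_labels, predicted_labels, labels=groups)
    for i, group in enumerate(groups):
        correct, total = matrix[i][i], matrix[i].sum()
        print(f"group {group}: {correct}/{total} correctly classified")
    # Overall accuracy is 75%, yet group B is right only half the time.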

Encourage your students to discuss how they could influence this issue of race, class, and gender bias — for example, what rules would they use for identifying suitable images for a data set? There are some interesting articles on this issue that you can share with your students at helloworld.cc/aibias1 and helloworld.cc/aibias2.

Where next with your learners?

In the classroom, you could then follow the route of building models that identify letters for words, for example. One of my students built a model that could identify a range of spoons and forks. You may notice that Teachable Machine can also be run on Arduino boards, which adds an extra dimension. Why not get your students to create their own AI assistant that responds to commands? The possibilities are there to be explored. If you’re using webcams to collect photos yourself, why not create a system that will identify students? If you are lucky enough to have a set of identical twins in your class, that adds just a little more flavour! Teachable Machine offers a hands-on way to demonstrate the issues of AI accuracy and bias, and gives students a healthy opportunity for debate.

Michael Jones is director of Computer Science at Northfleet Technology College in the UK. He is a Specialist Leader of Education and a CS Champion for the National Centre for Computing Education.

More resources for AI and data science education

At the Foundation, AI education is one of our focus areas. Here is how we are supporting you and your learners in this area already:

  • Computing education researchers are working to answer the many open questions about what good AI and data science education looks like for young people. To learn more, you can watch the recordings from our research seminar series focused on this. We ourselves are working on research projects in this area and will share the results freely with the computing education community.
  • You can find a list of free educational resources about these topics that we’ve collated based on our research seminars, seminar participants’ recommendations, and our own work.

AI literacy research: Children and families working together around smart devices
https://www.raspberrypi.org/blog/ai-literacy-children-families-working-together-ai-education-research/
Thu, 21 Apr 2022

Between September 2021 and March 2022, we’ve been partnering with The Alan Turing Institute to host a series of free research seminars about how to teach young people about AI and data science.

In the final seminar of the series, we were excited to hear from Stefania Druga from the University of Washington, who presented on the topic of AI literacy for families. Stefania’s talk highlighted the importance of families in supporting children to develop AI literacy. Her talk was a perfect conclusion to the series and very well-received by our audience.

Stefania Druga, University of Washington

Stefania is a third-year PhD student who has been working on AI literacy in families, and since 2017 she has conducted a series of studies that she presented in her seminar talk. She presented some new work that was due to be formally shared at the HCI conference in April, so we were very pleased to get a sneak preview of these results. It was a fascinating talk about the ways in which the interactions between parents and children using AI-based devices in the home, and the discussions they have while learning together, can facilitate an appreciation of the affordances of AI systems, and critical thinking about their limitations and fallibilities. You’ll find my summary as well as the seminar recording below.

“AI literacy practices and skills led some families to consider making meaningful use of AI devices they already have in their homes and redesign their interactions with them. These findings suggest that family has the potential to act as a third space for AI learning.”

– Stefania Druga

AI literacy: Growing up with AI systems, growing used to them

Back in 2017, public interest in Alexa and other so-called ‘smart’, AI-based devices was only just developing, and such devices would have been very novel to most people. That year, Stefania and colleagues conducted a first pilot study of children’s and their parents’ interactions with ‘smart’ devices, including robots, talking dolls, and the sort of voice assistants we are used to now.

A slide from Stefania’s AI literacy seminar.

Working directly with families, the researchers explored the level of understanding that children had about ‘smart’ devices, and were surprised by the level of insight very young children had into the potential of this type of technology.

In this AI literacy pilot study, Stefania and her colleagues found that:

  • Children perceived AI-based agents (i.e. ‘smart’ devices) as friendly and truthful
  • They treated different devices (e.g. two different Alexas) as completely independent
  • How ‘smart’ they found the device was dependent on age, with older children more likely to describe devices as ‘smart’

AI literacy: Influence of parents’ perceptions, influence of talking dolls

Stefania’s next study, undertaken in 2018, showed that parents’ perceptions of the implications and potential of ‘smart’ devices shaped what their children thought. Even when parents and children were interviewed separately, if the parent thought that, for example, robots were smarter than humans, then the child did too.

A slide from Stefania’s AI literacy seminar.

Another part of this study showed that talking dolls could influence children’s moral decisions (e.g. “Should I give a child a pillow?”). In some cases, these ‘smart’ toys would influence the child more than another human. Some ‘smart’ dolls have been banned in some European countries because of security concerns. In the light of these concerns, Stefania pointed out how important it is to help children develop a critical understanding of the potential of AI-based technology, and what its fallibility and the limits of its guidance are.

A slide from Stefania’s AI literacy seminar.

AI literacy: Programming ‘smart’ devices, algorithmic bias

Another study Stefania discussed involved children who programmed ‘smart’ devices. She used the children’s drawings to find out about their mental models of how the technology worked.

She found that when children had the opportunity to train machine learning models or ‘smart’ devices, they became more sceptical about the appropriate use of these technologies and asked better questions about when and for what they should be used. Another finding was that children and adults had different ideas about algorithmic bias, particularly relating to the meaning of fairness.

AI literacy: Kinaesthetic activities, sharing discussions

The final study Stefania talked about was conducted with families online during the pandemic, when children were learning at home. Fifteen families, with a total of 18 children (aged 5 to 11) and 16 parents, participated in five weekly sessions. Each session was made up of a number of learning activities that demonstrate features of AI. These are all available at aiplayground.me.

A slide from Stefania’s AI literacy seminar, describing two research questions: how children and parents learn about AI together, and how to design learning supports for family AI literacies.

The fact that children and parents, or other family members, worked through the activities together seemed to generate fruitful discussions about the usefulness of AI-based technology. Many families were concerned about privacy and what was happening to their personal data when they were using ‘smart’ devices, and also expressed frustration with voice assistants that couldn’t always understand the way they spoke.

A slide from Stefania’s AI literacy seminar.

In one of the sessions, with a focus on machine learning, families were introduced to a kinaesthetic activity involving moving around their home to train a model. Through this activity, parents and children had more insight into the constraints facing machine learning. They used props in the home to experiment and find out ways of training the model better. In another session, families were encouraged to design their own devices on paper, and Stefania showed some examples of designs children had drawn.

A slide from Stefania’s AI literacy seminar.

This study identified a number of different roles that parents or other adults played in supporting children’s learning about AI, and found that embodied and tangible activities worked well for encouraging joint work between children and their families.

Find out more

You can catch up with Stefania’s seminar below in the video, and download her presentation slides.

You can learn more about Stefania’s work in her paper on children’s training of ML models, and in her latest paper about the five weekly AI literacy sessions with families.

Recordings and slides of all our previous seminars on AI education are available online for you, and you can see the list of AI education resources we’ve put together based on recommendations from seminar speakers and participants.

Join our next free research seminar

We are delighted to start a new seminar series on cross-disciplinary computing, with seminars in May, June, July, and September to look forward to. It’s not long now before we begin: Mark Guzdial will speak to us about task-specific programming languages (TSP) in history and mathematics classes on 3 May, 17:00 to 18:30 UK time. I can’t wait!

Sign up to receive the Zoom details for the seminar with Mark:

What’s a kangaroo?! AI ethics lessons for and from the younger generation
https://www.raspberrypi.org/blog/ai-ethics-lessons-education-children-research/
Mon, 20 Sep 2021

Between September 2021 and March 2022, we’re partnering with The Alan Turing Institute to host speakers from the UK, Finland, Germany, and the USA presenting a series of free research seminars about AI and data science education for young people. These rapidly developing technologies have a huge and growing impact on our lives, so it’s important for young people to understand them both from a technical and a societal perspective, and for educators to learn how to best support them to gain this understanding.

In our first seminar we were beyond delighted to hear from Dr Mhairi Aitken, Ethics Fellow at The Alan Turing Institute. Mhairi is a sociologist whose research examines social and ethical dimensions of digital innovation, particularly relating to uses of data and AI. You can catch up on her full presentation and the Q&A with her in the video below.

Why we need AI ethics

The increased use of AI in society and industry is bringing some amazing benefits. In healthcare, for example, AI can facilitate early diagnosis of life-threatening conditions and enable more accurate surgery through robotics. AI technology is also already being used in housing, financial services, social services, retail, and marketing. Concerns have been raised about the ethical implications of some aspects of these technologies, and Mhairi gave examples of a number of controversies to introduce us to the topic.

“Ethics considers not what we can do but rather what we should do — and what we should not do.”

Mhairi Aitken

One such controversy in England took place during the coronavirus pandemic, when an AI system was used to make decisions about school grades awarded to students. The system’s algorithm drew on grades awarded in previous years to other students of a school to upgrade or downgrade grades given by teachers; this was seen as deeply unfair and raised public consciousness of the real-life impact that AI decision-making systems can have.

Another high-profile controversy was caused by biased machine learning-based facial recognition systems and explored in Shalini Kantayya’s documentary Coded Bias. Such facial recognition systems have been shown to be much better at recognising a white male face than a black female one, demonstrating the inequitable impact of the technology.

What should AI be used for?

There is a clear need to consider both the positive and negative impacts of AI in society. Mhairi stressed that using AI effectively and ethically is not just about mitigating negative impacts but also about maximising benefits. She told us that bringing ethics into the discussion means that we start to move on from what AI applications can do to what they should and should not do. To outline how ethics can be applied to AI, Mhairi first outlined four key ethical principles:

  • Beneficence (do good)
  • Nonmaleficence (do no harm)
  • Autonomy
  • Justice

Mhairi shared a number of concrete questions that ethics raises about new technologies, including AI:

  • How do we ensure the benefits of new technologies are experienced equitably across society?
  • Do AI systems lead to discriminatory practices and outcomes?
  • Do new forms of data collection and monitoring threaten individuals’ privacy?
  • Do new forms of monitoring lead to a Big Brother society?
  • To what extent are individuals in control of the ways they interact with AI technologies or how these technologies impact their lives?
  • How can we protect against unjust outcomes, ensuring AI technologies do not exacerbate existing inequalities or reinforce prejudices?
  • How do we ensure diverse perspectives and interests are reflected in the design, development, and deployment of AI systems? 

Who gets to inform AI systems? The kangaroo metaphor

To mitigate negative impacts and maximise benefits of an AI system in practice, it’s crucial to consider the context in which the system is developed and used. Mhairi illustrated this point using the story of an autonomous vehicle, a self-driving car, developed in Sweden in 2017. It had been thoroughly safety-tested in the country, including tests of its ability to recognise wild animals that may cross its path, for example elk and moose. However, when the car was used in Australia, it was not able to recognise kangaroos that hopped into the road! Because the system had not been tested with kangaroos during its development, it did not know what they were. As a result, the self-driving car’s safety and reliability significantly decreased when it was taken out of the context in which it had been developed, jeopardising people and kangaroos.

Mitigating negative impacts and maximising benefits of AI systems requires actively involving the perspectives of groups that may be affected by the system: the ‘kangaroos’ in Mhairi’s metaphor.

Mhairi used the kangaroo example as a metaphor to illustrate ethical issues around AI: the creators of an AI system make certain assumptions about what an AI system needs to know and how it needs to operate; these assumptions always reflect the positions, perspectives, and biases of the people and organisations that develop and train the system. Therefore, AI creators need to include metaphorical ‘kangaroos’ in the design and development of an AI system to ensure that their perspectives inform the system. Mhairi highlighted children as an important group of ‘kangaroos’. 

AI in children’s lives

AI may have far-reaching consequences in children’s lives, where it’s being used for decision-making around access to resources and support. Mhairi explained the impact that AI systems are already having on young people’s lives through these systems’ deployment in children’s education, in apps that children use, and in children’s lives as consumers.

AI systems are already having an impact on children’s lives.

Children can be taught not only that AI impacts their lives, but also that it can get things wrong and that it reflects human interests and biases. However, Mhairi was keen to emphasise that we need to find out what children know and want to know before we make assumptions about what they should be taught. Moreover, engaging children in discussions about AI is not only about them learning about AI, it’s also about ethical practice: what can people making decisions about AI learn from children by listening to their views and perspectives?

AI research that listens to children

UNICEF, the United Nations Children’s Fund, has expressed concerns about the impact of new AI technologies used on children and young people. They have developed the UNICEF Requirements for Child-Centred AI.

UNICEF’s requirements for child-centred AI, as presented by Mhairi:

  • Support children’s development and well-being
  • Ensure inclusion of and for children
  • Prioritise fairness and non-discrimination for children
  • Protect children’s data and privacy
  • Ensure safety for children
  • Provide transparency, explainability, and accountability for children
  • Empower governments and businesses with knowledge of AI and children’s rights
  • Prepare children for present and future developments in AI
  • Create an enabling environment for child-centred AI
  • Engage in digital cooperation

Together with UNICEF, Mhairi and her colleagues working on the Ethics Theme in the Public Policy Programme at The Alan Turing Institute are engaged in new research to pilot UNICEF’s Child-Centred Requirements for AI, and to examine how these impact public sector uses of AI. A key aspect of this research is to hear from children themselves and to develop approaches to engage children to inform future ethical practices relating to AI in the public sector. The researchers hope to find out how we can best engage children and ensure that their voices are at the heart of the discussion about AI and ethics.

We all learned a tremendous amount from Mhairi and her work on this important topic. After her presentation, we had a lively discussion in which many of the participants relayed the conversations they had had about AI ethics, and shared their own concerns, experiences, and links to resources. The Q&A with Mhairi is included in the video recording.

What we love about our research seminars is that everyone attending can share their thoughts, and as a result we learn so much from attendees as well as from our speakers!

It’s impossible to cover more than a tiny fraction of the seminar here, so I do urge you to take the time to watch the seminar recording. You can also catch up on our previous seminars through our blogs and videos.

Join our next seminar

We have six more seminars in our free series on AI, machine learning, and data science education, taking place every first Tuesday of the month. At our next seminar on Tuesday 5 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Professor Carsten Schulte, Yannik Fleischer, and Lukas Höper from the University of Paderborn, Germany, who will be presenting on the topic of teaching AI and machine learning (ML) from a data-centric perspective. Their talk will raise the questions of whether and how AI and ML should be taught differently from other themes in the computer science curriculum at school.

Sign up now and we’ll send you the link to join on the day of the seminar — don’t forget to put the date in your diary.

I look forward to meeting you there!

In the meantime, we’re offering a brand-new, free online course that introduces machine learning with a practical focus — ideal for educators and anyone interested in exploring AI technology for the first time.
