SDG 4. QUALITY EDUCATION

Universities’ responses to ethical issues will define the next few years – AI must be part of the conversation

The past year has both emphasised the importance of technology in higher education (HE), and exposed universities’ weaknesses and vulnerabilities around ethical issues.

From concerns around student wellbeing to accusations of sexual misconduct, and reports of discrimination around race, gender identity, antisemitism, and accessibility, the sector has come under fire. The urgency with which it addresses this – and the ways in which institutions respond and recalibrate – may well define the next few years in education.

Given this context, does Artificial Intelligence (AI) feature on most universities’ lists of priorities? Probably not, but it should. As institutions scrutinise their policies and approaches, and acknowledge the need for a root-and-branch rethink of how they work with students, AI must be part of the conversation.

Digging deep

Technology has a key role to play in supporting and enhancing experiences in HE. So, in considering racial discrimination on campus, let’s look at evidence of racial discrimination in some AI algorithms; when considering representation of minority groups, let’s consider how universities manage their own data to reduce the risk of bias. We need to think about how algorithms are written by edtech and AI start-ups. How can universities know whether products are potentially discriminatory before they start using them? And do they know those products’ limitations? Problems can arise when AI is used in contexts it wasn’t designed for. It’s also important to ask what suppliers have done to prove their products have been tested on a truly diverse population.

As part of my role as director of edtech at Jisc, I’m helping to build a new National Centre for AI in Tertiary Education, which will address these questions. Working with universities, colleges, start-ups, technology companies and experts in education and AI, the centre will dig deep. Are products effective? Are they ethical? We’re working with the Institute for Ethical AI, adapting their framework for UK education environments, and every AI tool tested by the centre will have to meet those standards. Only if they fit with a culture of teaching and learning will they be recommended.

Four key tests

The first issue is trust: I know there’s a widespread lack of trust in AI, especially when coupled with a suspected lack of regulation. There’s a perception that AI threatens to replace important and highly valued human elements of education with inferior robot experiences. And, particularly through the pandemic, students have raised concerns about a decline in the quality of teaching – so that’s important to address.

Then there are issues of data and transparency. Are we being spied on and monitored by AI? Is our data being used without our knowledge?

The third area is whether AI is making decisions we humans don’t understand and can’t appeal. Is it, for example, ‘deciding’ whether a student is doing ‘well’ or ‘badly’? How has it come to that conclusion? How can we challenge it? And who has access to that information – peers, teachers, potential employers?

Finally, there’s the question of discrimination. We need to ask, when an algorithm is making judgements, assessments or decisions, has it been tested on a diverse enough group to enable it to do that job properly and fairly? How does it cope with different approaches and learning styles, or with different levels of access to technology?
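
To make that last test concrete, here is a minimal sketch of the kind of check a university could run on pilot data before adopting a tool – comparing an AI marker’s pass rates across demographic groups. Everything here is hypothetical: the group labels, the records and the 0.8 threshold (borrowed from the ‘four-fifths rule’ used in US employment contexts) are for illustration only, and a real evaluation would use a proper fairness toolkit and a far larger, properly sampled cohort.

```python
# Hypothetical sketch: compare an AI tool's pass rates across demographic
# groups in pilot data. Group names, records and the 0.8 threshold are
# illustrative assumptions, not a real product evaluation.
from collections import defaultdict

# Each record: (demographic_group, ai_predicted_pass)
pilot_results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
passes = defaultdict(int)
for group, passed in pilot_results:
    totals[group] += 1
    passes[group] += passed  # True counts as 1, False as 0

# Pass rate per group, and each group's ratio against the best-served group
rates = {group: passes[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 1.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: pass rate {rate:.0%}, ratio vs best group {ratio:.2f} ({flag})")
```

A disparity flagged like this wouldn’t prove discrimination on its own, but it is exactly the kind of signal that should prompt scrutiny – and questions back to the supplier – before a product is rolled out.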

The right fit

To answer all those questions directly, we’re putting AI products through pilots, looking for those that teachers like using and say save them time. We want to know that the AI tools we recommend can augment or release human effort, rather than replace it. We want to see that student satisfaction went up as a result of using the AI product – or that attainment went up, or that drop-out rates went down. We need to see data used in an ethical way, enabling institutions to take responsibility for their decision to use AI, and allowing them to give students control, as and where appropriate. We need to see that the humans engaged in the pilot understood the decisions the AI tool made and could appeal them, with a clear process in place. And that the product was fair and balanced, with no evidence of discrimination or other issues. This is about delivering tangible, ethical benefits to help the sector move forward.

It isn’t a quick fix, though; universities have work to do. Even armed with a list of approved AI products, institutions will have to ask their own questions about what’s right for their students and their environment. That will be tough, especially as universities grapple with their responses to COVID in terms of teaching, learning, assessment and delivery of education. Sharing experiences will help.

As for culture, when it comes to adopting AI in education, that is often the hardest part. A university could follow ethics guidelines to the letter yet still fail to implement AI in a way that their students and staff are happy with. AI is there to augment the human experience, not replace it – and understanding that meaningfully means looking at both the letter and the spirit of any set of guidelines, and framing them around each unique institution.

Reaping the rewards

Bringing AI into conversations around ethics in universities isn’t future-gazing. Right now, the education sector faces huge challenges in digital delivery, changes in approach, evolving campuses, and issues of discrimination, harassment, and a lack of parity in opportunity and experience. AI is a useful tool, which could form part of a solution to those issues, if used correctly. Crucially too, it presents opportunities we can’t afford to miss – so let’s pull together and welcome progress. If we don’t, there’s little doubt that others – be they other countries, challenger universities, start-ups, or corporates looking to deliver complex training and lifelong learning – will reap the greatest benefits instead of us.

