Andre Perry, a David M. Rubenstein Fellow at The Brookings Institution, was a panelist at Teachers College's conference on Artificial Intelligence in Education, held in the College's Smith Learning Library on September 20.
In the following opinion piece, produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education, Perry, who writes the "Degree of Interest" column for the Hechinger Report, writes that because the goal of artificial intelligence is to make computers that think and "make human-like judgments," AI designers need to be careful to leave human biases out of their new designs. Failing to do so would not only replicate the biases of the (mostly white) programmers who design AI; it could amplify those biases in the real world, Perry warns. To end up with AI that works for all students, Perry writes, people of color need to be enlisted in its development. Just as AI can give us a window into how humans learn, it can also help expose the racism that holds students of color back. "When we better understand how, when and where people learn to be racist, then we can build a justice app for that."
Andre Perry (Photo credit: Bruce Gilbert)
From driver-assistance systems in cars to video games and virtual assistants like Alexa and Siri, artificial intelligence (AI) has transformed almost every aspect of our lives, as our machines learn from the massive amounts of data we provide them.
The goal is for our computers to make human-like judgments and perform tasks that make our lives easier, but if we're not careful, our machines will replicate our racism, too.
Kids from black and Latino communities will face greater inequalities if we go too far toward digitizing education without considering how to check the inherent biases of the (mostly white) developers who create AI systems. AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead both to flaws in the technology and to amplified biases in the real world.
This was the topic at the conference "Where Does Artificial Intelligence Fit in the Classroom?" put on by the United Nations General Assembly, the United Nations Educational, Scientific and Cultural Organization (UNESCO) and partner organizations at Teachers College, and hosted by Teachers College this month. (The Hechinger Report is an independent unit of Teachers College.)
While some applications of AI can level the playing field in classrooms, we need more due diligence and intellectual exploration before we deploy the technology to more schools. Systemic racism and discrimination are already embedded in our educational systems. Developers must intentionally build AI systems with a racial equity lens if the technology is going to disrupt the status quo.
Previous attempts at making education more efficient and equitable demonstrate what can go wrong. Standardized testing promised an innovation that was irresistible to an earlier generation of education leaders hoping to democratize the system. As Nicholas Lemann put it in his book "The Big Test," about the development of the SAT, such assessments promised to evaluate "all American high-school students on a single national standard and then [make] sure that they went on to colleges suited to their abilities and ambitions." Later, standardized tests allowed schools and teachers to be held accountable when students didn't measure up to expectations.
But the designers and implementers of these assessment tools didn't consider how the racism and inequality rife in U.S. society would be reflected in the results. SAT and ACT tests are good proxies for wealth. Overuse of these tests has helped concentrate wealthy people in selected colleges and universities, stifling the inclusion of and investment in talented people who happen to be lower income. The College Board, the nonprofit that prepares the SAT, announced a patch for this problem in May: the planned rollout of an "adversity score" assigned to each student who takes the college admissions exam. The score was to be composed of 15 factors, including neighborhood and demographic characteristics, such as crime rate and poverty, and to be added to each student's result. However, the College Board retreated from its plan.
Current attempts to introduce AI in schools have led to improvements in assessing students' prior and ongoing learning, placing students in appropriate subject levels, scheduling classes and individualizing instruction. Such advances enable differentiated lesson plans for a diverse set of learners. But that sorting can be fraught with errors if the algorithms don't consider the nuanced experiences of students, especially those who are starting at the bottom versus the top.
The spread of AI technology can also tempt districts to replace human teachers with software, as is already happening in such places as the Mississippi Delta. Faced with a teaching shortage, districts there have turned to online platforms. But students have struggled without trained human teachers who not only know the subject matter but know and care about the students.
Over-zealous tech salesmen haven't helped matters. The educational landscape is now littered with cyber or virtual schools because ed tech companies promised that they would reach more students and create efficiencies in low-funded districts. Instead, many of the startups have fallen short, including a pair in Indiana that were forced to close down.
Yet AI could provide real benefits. AI in the classroom could free up teachers from time-consuming chores like grading homework. It won't work if it's intended as a way to avoid the hard work of recruiting enough skilled teachers, especially teachers who look like the kids they're working with. For the rise of robots to equate to progress, teachers should experience improved work conditions and increased job satisfaction. AI should reduce attrition and increase the desirability of the job. But if technologists don't work with black teachers, they won't know what conditions need to change to maximize higher-order thinking and tasks.
We must diversify the pool of technology's creators, incorporate people of color in all aspects of its development, continue to train teachers on its proper usage and build in regulations to punish discrimination in its application.
The true promise of AI is to give us insight into how students and teachers learn, including the racism that keeps needed resources from schools in which the majority of students are people of color. When we better understand how, when and where people learn to be racist, then we can build a justice app for that.
– Andre Perry
More from the AI Conference:
- "We Can't Fix Education with Machines": AI can do wonderful things, but it can't replace teachers and must promote equity
- "Tools Are Just Objects, Unless Used Purposefully": What matters are the relationships we develop with them
- "The Future Will Be Nothing Short of Amazing": For Hod Lipson, the promise of AI significantly outweighs the perils
- A Topic that Pushes Buttons: Sparks fly at Teachers College's conference on the future of artificial intelligence in education