#PaiRED: The Implications of Uncoded Bias on Education
The data AI uses matters. In fact, it is critical. Police departments, the military, and courts are using AI for identification, and this can impact human rights. It’s not just OpenAI, Google, Amazon, and Microsoft that have had issues with uncoded bias in AI. There is little government regulation and there are few restrictions on data use or on how AI actually works. This has even more implications for AI in education.
Putting Big Tech in charge of regulating itself isn’t a good idea. We live in a capitalist society set on making profits. Facebook is an organization that values more connection at all costs. Even if that connection is based on lies. Even if that connection costs lives. This resulted in AI supporting the spread of “fake news.” Google’s search engine has demonstrated built-in bias against women and Black people again and again. Yet tech is expected to regulate tech. With AI, the stakes are just too high. You don’t have to look far to find areas where the lack of AI regulation has already crept into humanity. Uncoded bias has found its way into global political stability, advertising, social media, mental health, and human resources. Is education next?
Just about every application that touts the use of AI has bias. Computers are not capable of seeing things that are not measured. AI’s ability to make sense of data is limited to the data itself. Systems can only change how they analyze data and turn it into knowledge through recoding; they can’t figure it out on their own. So, although computers can address some incidental cases of uncoded bias, most of it goes unchecked. How does this impact AI in education? Are we teaching learners to blindly believe everything that ChatGPT tells them?
Data Wears Out
Even non-analytically minded people tend to understand the idea of regression to the mean: the tendency for extreme measurements to be followed by measurements closer to the average. It’s how data wears itself out. A model kills off variability once it discovers it, and eventually this leads to the average and the static. So, although everyone wants to hit certain performance goals, data models are rarely built to push the limits; they don’t go beyond those goals. What data models are great at is producing predictable results. What they are not very good at is delivering meaningful results against a constantly moving target. This tendency pretty much destroys the ideas of continuous improvement and innovation. Once models stop creating new value, what’s the point? The real issue here is not what is known but what remains unknown: the things we don’t know that we don’t know, the unknown unknowns.
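Regression to the mean is easy to see in a small simulation. The sketch below (purely illustrative; all names and numbers are invented for the example) models students whose observed test scores are a stable skill level plus luck. The highest scorers in round one were partly lucky, so the same group’s round-two average drifts back toward the population mean, even though nobody’s underlying skill changed.

```python
import random

random.seed(42)

# Hypothetical setup: 10,000 students, each with a stable "skill"
# (mean 70, sd 5). An observed score adds per-test noise (sd 10).
skills = [random.gauss(70, 5) for _ in range(10_000)]
round1 = [s + random.gauss(0, 10) for s in skills]
round2 = [s + random.gauss(0, 10) for s in skills]

# Select the top 5% of round-1 scorers.
cutoff = sorted(round1, reverse=True)[len(round1) // 20]
top = [i for i, score in enumerate(round1) if score >= cutoff]

def mean(xs):
    return sum(xs) / len(xs)

r1_top = mean([round1[i] for i in top])  # inflated by lucky noise
r2_top = mean([round2[i] for i in top])  # noise redrawn: falls back

print(f"population mean (round 1): {mean(round1):.1f}")
print(f"top group, round 1:        {r1_top:.1f}")
print(f"same group, round 2:       {r2_top:.1f}")
```

The selected group’s round-two average lands between its round-one average and the population mean: the extreme result does not reproduce itself. This is the sense in which optimizing to past data pulls results toward the average rather than pushing past it.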
With the rapid adoption of AI across education, from K-12 through graduate school, how do we monitor AI and teach learners to monitor it themselves? When are we going to step up to regulation and implement a degree of transparency? If we are going to use these technologies in education, the time is now.
#PaiRED, #BobbeBaggio, #AI@Work, #WFH, #ThePajamaEffect, #Touchpoints, #VisualConnection


