Stavros Yiannouka, CEO of the World Innovation Summit for Education (WISE), is an intriguing education thought leader who will be presenting at the upcoming Curiosity Conference on April 12, 2019. He believes in thoroughly examining the effects of Artificial Intelligence (AI) and technology on the future education landscape. Although he remains intrigued by technological advances, he warns against rushing decisions when it comes to adopting and promoting technology in education.
As AI gains momentum in education, it’s important to give more thoughtful consideration to the development of technologies to ensure positive outcomes for learners. If decision makers are not careful, Stavros warns, “We could find ourselves potentially trapped in a system that we don’t want to be in simply because we didn’t think up front about what the implications might be.”
When putting more thought into the design of technologies, Stavros states, “Let’s think about what the downsides might be and then work through ways in which we can mitigate those while continuing to benefit from all the opportunities that these technologies open up.” In education, AI is a relatively new arrival, and the timing is perfect for developing a dialogue to get it right. “That’s why the Curiosity Conference is so interesting. It’s the first event that explicitly deals with AI in the context of education. Other conferences bundle AI with a broader set of technologies,” says Stavros.
The use of AI in big data analytics is a popular subject when it comes to program scheduling, productivity, and efficiency in schools. With the increase in demand, it’s crucial to examine the potential ethical ramifications of improved data systems. As Stavros explains, “The way the machines learn or the way the machines are designed to learn could lead them to make some decisions or recommend decisions on the basis of issues that we are not comfortable with.”
Stavros examines two main factors involved in data collection: 1) Is the data on which the machine is making a decision clean, or is it being skewed by factors the machine is not aware of? And 2) Even if the data is clean, are factors such as race being weighted into decisions from a statistical standpoint in ways that produce biased outcomes? Stavros asks, “Are we, as a society, comfortable having that being factored into the decision-making?”
What if AI is used to predict what our interests or competencies might be, and begins to steer us toward particular directions based on an assessment of the type of learner it judges us to be? Stavros takes the concept a step further: “Let’s say for the sake of argument that it starts grouping us into certain categories of learners. And those categories of learners start correlating with gender, race, and maybe other factors. Are we comfortable in that kind of world? Have we thought through what the implications could be of using this kind of technology in this way?”
The world is accelerating with technological advancements, and in many respects, we may have already adopted devices without thoroughly thinking things through. According to Stavros, “To a certain extent, we’re witnessing it today with these marvelous technologies that we all have at our disposal. In a rush to embrace technology, we didn’t stop and think about what it might be doing to us.”
Hopefully, with the involvement of people like Stavros Yiannouka, AI in education will advance through a prepared, thoughtful approach and have a long-term positive impact.