Students should use AI to interrogate knowledge, not just answer questions, IT experts advise

IT industry experts discuss the role of AI in education. Video: Shadwyn Dickinson; Editing: Nicholas Boyd.

31st May 2024


AI-based systems can provide personalised learning experiences, as well as identify students' strengths and weaknesses and adjust teaching to better facilitate and accelerate learning, IT industry experts said during a discussion on AI in education hosted by Regenesys Business School, in Sandton, on May 28.

They stressed, however, that using AI-based systems, such as generative AI (genAI), to only generate answers without a student applying critical thinking and reasoning does not help to develop skills and is merely skimming the surface.

Therefore, students must use AI systems to interrogate knowledge. Becoming a deep thinker meant consciously deciding to know more about a subject and then using AI systems to build expertise in it, said IT multinational IBM managing partner Riaz Osman.

"IBM and other companies have applied AI in various successful business use cases, including generating personalised customer experiences and targeted marketing.

"In terms of learning, the experience is similar where, once you have gathered data about the specific learners, you can use AI to build academic programmes geared to that specific learner and his or her preferred learning style and pace, whether a fast or a slow learner. We can use AI to create personalised learning experiences," he said.
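The idea of turning learner data into a learner-specific programme can be sketched in code. This is a hypothetical illustration, not IBM's actual system: the `LearnerProfile` fields, thresholds and plan structure are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Hypothetical record of data gathered about a specific learner."""
    name: str
    pace: str                                  # "fast" or "slow"
    weak_topics: list = field(default_factory=list)

def build_programme(profile: LearnerProfile, syllabus: list) -> list:
    """Adapt a syllabus to one learner: weak topics get extra practice
    units, and slower learners get a revision unit after each topic."""
    plan = []
    for topic in syllabus:
        plan.append(("lesson", topic))
        if topic in profile.weak_topics:
            plan.append(("extra-practice", topic))   # reinforce weaknesses
        if profile.pace == "slow":
            plan.append(("revision", topic))         # slower pacing
    return plan

plan = build_programme(
    LearnerProfile("Thandi", pace="slow", weak_topics=["algebra"]),
    ["algebra", "geometry"],
)
print(plan)
```

In a real system the profile would be inferred from assessment data rather than declared up front, but the shape of the adaptation step is the same.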

It is not about whether AI should be used in education, but how its use can be accelerated, said industrial IT company Syspro chief people and strategy officer Terence Moolman.

"Resources in the education sector are becoming an increasing challenge. Learner needs have also diversified, and common education platforms and facilities do not cater to this requirement. Even as employers, we understand that people do not learn in the same way or at the same pace," he said.

"The speed of learning has become more important and concepts are complex. However, businesses cannot have people taking months to upskill for new jobs. We must find more effective ways."

AI can enable learning with different learning experiences, and organisations need to improve the effectiveness of learning. However, the intended outcome of what the education sector wants to achieve with the introduction of AI must be determined.

"It is not only about giving tons of information, but about teaching the concepts that we want people to know and how we can make this process more productive and effective. In this way, we can get people to enjoy and love learning," he noted.

In the working and corporate environment, deeper subject matter expertise was required. Competition was increasing worldwide and organisations could use computing technologies to do mundane tasks. Therefore, it was up to each individual to continuously elevate their cognitive abilities, said Osman.

"Educators are seeing high levels of plagiarism from genAI systems. But the industry is also making progress in identifying these and developing ways to differentiate between AI- and human-generated content," said Internet services multinational Google Africa chief marketing officer Dr Mzamo Masito.

"Students love shortcuts and some lazy students will plagiarise. But people cannot make their way to the top of their industry by using AI to generate their answers. If students are not thinking, their critical reasoning will decline. Students may get short-term benefits, but long-term negative consequences," he said.

For students and teachers interested in catching up on innovations in AI, there was a wide range of certified courses, including those provided by Microsoft, Google and Amazon Web Services (AWS), which they could complete at their own pace to see what was new in AI, he advised.

"Many of these courses are affordable, such as $10 or $20, and also provide certificates. This will enable students to develop their AI skills and educators to better teach their students," he said.

Further, humanity could not rely solely on AI, and this was something that needed to be impressed on young people from the outset, said enterprise resource technology company SAP global government affairs and Africa corporate social responsibility director Sunil Geness.

"There are faults, such as genAI systems hallucinating [producing plausible-sounding content that is not grounded in fact]. Everything from genAI systems must be validated as factually accurate," he said.

"This is something we need to teach the youth, as they still need to develop logical thinking. AI has a role to play and value to add to many industries but must never usurp human agency. These systems need human supervision."

Students must not accept generated answers as valid without question. The premise of science was that nothing could be assumed to be correct; everything must be validated through experiments and tests, said Moolman.

"It is our responsibility as students not to accept what is put in front of us as fact. We have to reach our own conclusions by finding evidence and reinforcing hypotheses.

"Students can ask genAI systems what are some of the opposing views to a particular question, or what are additional findings and thereby make their learning experience rich, rather than only accepting the first answer," he advised.

Cloud services multinational AWS South Africa enterprise sales manager Brett Dunn told the audience that he had not been a great student at university. However, he was naturally curious, and wanted to learn how things work.

"When the OpenAI revolution came about, I felt it was the ideal opportunity for me to enrol in university again, and potentially use genAI to overcome any learning deficits I may have.

"At AWS, we leverage foundational models that each team or person can use to build out their own AI systems. Then, data in terms of learning styles, preferences, and academic strengths and weaknesses can be leveraged to build an academic model," he said.

These foundation models included AI safety and research company Anthropic's Claude family of models, and were available through AWS' genAI development platform, Amazon Bedrock, he added.
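A minimal sketch of what calling a Claude foundation model on Amazon Bedrock for a tutoring task might look like, assuming Bedrock's Anthropic Messages request format. The prompt wording and the idea of folding learner notes into it are illustrative assumptions, not AWS' or Dunn's actual implementation; the network call itself is shown only in comments, since it needs AWS credentials and model access.

```python
import json

def tutoring_request(question: str, learner_notes: str, max_tokens: int = 512) -> dict:
    """Build a Bedrock Anthropic Messages request body for a tutoring
    prompt that folds in notes about the learner (hypothetical format
    for the notes; the body schema is Bedrock's Anthropic format)."""
    prompt = (
        f"Learner profile: {learner_notes}\n"
        f"Question: {question}\n"
        "Explain step by step, then list two opposing views to check."
    )
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = tutoring_request("Why does ice float?", "prefers visual analogies")
print(json.dumps(body, indent=2))

# To actually invoke the model (requires AWS credentials and Bedrock access):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=json.dumps(body),
# )
```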

"Once trained, the academic model can act as a supervisor and pre-empt and predict how to progress through the curriculum, thereby creating a personalised learning experience."

"Learning in this way, I felt that I was more focused and my curiosity piqued. It was not only about leveraging an AI service to get data, but to promote a longer-term learning experience for myself," said Dunn.

Meanwhile, a critical part of how organisations use AI for education was by identifying where it could be most impactful in daily productivity, said Moolman.

"From an organisational perspective, the demand for skills is not being met in the general labour market, and we need deeper specialisation. We can only achieve this by doing more specialised training."

Organisations need to help people become specialists as quickly as possible and must identify how they can use AI to achieve their training and business objectives.

"Similar to flight simulations, we can use AI-based learning to create safe spaces in which people can make mistakes and learn. Also like a simulator, as the users progress, we can add complexities and use the simulated environment as a learning space even while training on critical business systems."
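The simulator analogy above, where complexity is added as users progress, can be sketched as a simple progression rule. The score thresholds and level scale here are assumptions for illustration, not anything Syspro described.

```python
def next_difficulty(current: int, recent_scores: list, max_level: int = 10) -> int:
    """Simulator-style progression: raise difficulty after consistent
    success, lower it after repeated mistakes, otherwise hold steady.
    Scores are fractions in [0, 1]; thresholds are illustrative."""
    if not recent_scores:
        return current
    average = sum(recent_scores) / len(recent_scores)
    if average >= 0.8:
        return min(current + 1, max_level)   # add complexity
    if average < 0.5:
        return max(current - 1, 1)           # safe space to recover
    return current

print(next_difficulty(3, [0.9, 0.85, 0.8]))  # -> 4
```

The point of the analogy is that, as in a flight simulator, mistakes at any level are contained in the training environment rather than in live business systems.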

However, AI must be reliable and trustworthy. This comes down to data. Data is critical for AI algorithms to analyse and generate outputs, said Geness.

This was why guardrails and ethical considerations were important in the design, accessibility and use of AI-based systems, said Osman.

"At IBM, we have watsonx.governance, which is used to ensure public and corporate data is free of bias and prejudice.

"We are also backing up our AI systems with guarantees. IBM and other organisations are saying that we stand by the models we have created and guarantee that they are free of prejudice, and are open and transparent," he highlighted.

Google, AWS, SAP and IBM and other companies have similar guarantees, standards and controls in place.

"We must move the world forward in a just manner, and ethical considerations are important," Osman said.

Further, many organisations are developing large language models and training them on a variety of data, which means that AI systems will reflect a variety of perspectives; the aim is to ensure that these perspectives are ethical and truthful.

"Some companies are providing guarantees, and these are the lengths we need to go to, because the providers of AI models can and must be held liable for any errors. When working with public domain AI models and services, it is up to individuals and organisations to determine how they will use information from these systems.

"We must ensure that, as organisations, we use AI to differentiate ourselves and provide more innovative solutions to customers, but we must strive to do that in an ethical manner and with legal guarantees."

Edited by Chanel de Bruyn
Creamer Media Senior Deputy Editor Online
