Students use ChatGPT for homework and teachers use ChatGPT to grade it, reports show

2024-03-08 08:45:00

One of Harvard University's newest computer science instructors is a chatbot. It acts as an assistant for CS50, the introductory course on algorithms and web development. This is one example of artificial intelligence being used to train future software developers. As several reports highlight, the practice is now a trend, and it has not failed to spark debate: some observers argue that a return to written and oral exams is necessary in the era of ChatGPT, Claude, and their kind.

The decision to use these AI tools comes down to their ability to reduce the teaching team's workload. In Harvard's case, the computer science department has around a hundred teaching assistants and finds it increasingly difficult to supervise a growing number of students, who connect from different time zones and bring widely varying levels of knowledge and experience.

Artificial intelligence could enable the greatest transformation in the history of education: high-quality, personalized courses delivered at no cost

Sal Khan, founder and CEO of Khan Academy, believes that artificial intelligence could spark the biggest positive transformation education has ever seen. Khan has an ambitious vision for the future of education: he imagines a world where every student has access to a personalized AI super tutor that accompanies them throughout their educational journey, providing an optimal learning experience. He also imagines a world where every teacher has access to an AI teaching assistant that helps them manage their class, prepare lessons, evaluate students and pursue their own professional development. He hopes to help democratize education and unlock each individual's potential, and he believes artificial intelligence can be a powerful tool to achieve this vision, provided it is used responsibly and fairly.

This position echoes that of Bill Gates. The Microsoft co-founder has predicted that AI chatbots could soon teach children to read: “I think artificial intelligence has the potential to make a big difference in education,” Gates said. “One of the areas where I am most excited about AI is early childhood education. I think AI chatbots could be really useful for teaching kids to read.” Gates pointed out that many children struggle to learn to read and that this can have a lasting impact on their academic performance. He added that AI chatbots could be used to provide personalized instruction to children who are struggling, helping them catch up with their peers.

Is it nevertheless wise to use erratic artificial intelligence tools to train learners in the software engineering sector?

According to a study carried out by computer scientists Baba Mamadou Camara, Anderson Avila, Jacob Brunelle and Raphael Khoury, affiliated with the University of Quebec, the code generated by these artificial intelligences is not very secure. As part of the study, the four researchers asked ChatGPT to generate 21 programs in five programming languages, each meant to illustrate a specific security vulnerability such as memory corruption, denial of service, or poorly crafted cryptography. The report states that ChatGPT produced only five more or less “secure” programs out of 21 on its first attempt.
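The paper does not reprint the generated programs, but the “poorly crafted cryptography” category is straightforward to illustrate. The following C++ sketch is hypothetical, not code from the study: it pairs a hard-coded key with repeating-key XOR, a combination that offers no real confidentiality.

```cpp
// Hypothetical example of the "poorly crafted cryptography" category;
// illustrative only, not code reproduced from the study.
#include <iostream>
#include <string>

// Hard-coded key: anyone with access to the binary can extract it.
static const std::string kKey = "s3cr3t";

// Repeating-key XOR offers no real confidentiality; it is trivially
// broken by frequency analysis and known-plaintext attacks.
std::string xorCipher(std::string data) {
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] ^= kKey[i % kKey.size()];
    return data;
}

int main() {
    const std::string ciphertext = xorCipher("card number: 4111-1111");
    std::cout << xorCipher(ciphertext) << '\n'; // XOR is its own inverse
}
```

A real implementation would instead use a vetted library (for example an authenticated cipher such as AES-GCM) with keys supplied at runtime rather than baked into the binary.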

For example, the first program was a C++ FTP server for sharing files in a public directory. The code ChatGPT produced performed no input validation, exposing the software to a path traversal vulnerability. The report’s findings echo similar, though not identical, assessments of GitHub Copilot, a code generation tool built on OpenAI’s Codex model, a descendant of GPT-3 (since upgraded to GPT-4).
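The server’s source is not published in the paper, so the sketch below is a hypothetical reconstruction of the flaw it describes; the directory `/srv/ftp/public` and the function names are invented for illustration. The unsafe handler joins the client-supplied filename onto the shared directory without validation, so a request like `../../etc/passwd` escapes the public folder; the checked variant canonicalizes the result and verifies it still lies inside the directory.

```cpp
// Hypothetical sketch of the path traversal class described in the study.
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <iterator>
#include <stdexcept>
#include <string>

namespace fs = std::filesystem;

// Assumed location of the server's public directory.
const fs::path kPublicDir = "/srv/ftp/public";

// VULNERABLE: blindly joins the client-supplied name onto the base
// directory, so "../../etc/passwd" walks out of kPublicDir.
std::string serveFileUnsafe(const std::string& requested) {
    std::ifstream in(kPublicDir / requested);
    return {std::istreambuf_iterator<char>(in),
            std::istreambuf_iterator<char>()};
}

// SAFER: canonicalize the final path, then verify kPublicDir is still a
// component-wise prefix of it before opening the file.
std::string serveFileChecked(const std::string& requested) {
    const fs::path full = fs::weakly_canonical(kPublicDir / requested);
    auto diff = std::mismatch(kPublicDir.begin(), kPublicDir.end(),
                              full.begin(), full.end());
    if (diff.first != kPublicDir.end())
        throw std::runtime_error("path traversal attempt rejected");
    std::ifstream in(full);
    return {std::istreambuf_iterator<char>(in),
            std::istreambuf_iterator<char>()};
}

int main() {
    try {
        serveFileChecked("../../etc/passwd"); // rejected by the prefix check
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';
    }
}
```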

Asked to correct its errors, the AI model produced seven “safer” applications, but only with respect to the specific vulnerability being assessed. The researchers found that ChatGPT did not recognize that the code it generated was unsafe and only provided useful advice after being prompted to fix the problems, a behavior the researchers warn against. Additionally, they note that ChatGPT did not assume an adversarial model of code execution and repeatedly told them that security problems could be avoided simply by not feeding invalid input to the vulnerable program.

The authors felt this was not ideal, because knowing which questions to ask presupposes some familiarity with specific bugs and coding techniques. In other words, if you know the right question to ask ChatGPT to get it to fix a vulnerability, you probably already know how to fix it yourself. The researchers also point out an ethical inconsistency: ChatGPT refuses to create attack code, yet it creates vulnerable code. They cite the example of a Java deserialization vulnerability for which “the chatbot generated vulnerable code.”

ChatGPT then provided advice on how to make the code more secure, but said it was unable to produce the more secure version itself. “The results are worrying. We found that, in several cases, the code generated by ChatGPT fell well short of the minimum security standards applicable in most contexts. In fact, when asked whether the produced code was secure or not, ChatGPT was able to recognize that it was not,” the authors state in their paper. The researchers conclude that using ChatGPT for code generation carries risks for businesses.


Raphael Khoury, professor of computer science and engineering at the Université du Québec en Outaouais and one of the paper’s co-authors, argues that ChatGPT in its current form represents a risk, which does not mean there are no valid uses for erratic, underperforming AI assistants. “We’ve already seen students using it, and programmers will use it in the wild. It is therefore very dangerous to have a tool that generates insecure code. We need to make students aware that if code is generated with this type of tool, there is a strong possibility that it is not secure,” he said.

Some videos available online illustrate vulnerabilities such as buffer overflow in the code generated by ChatGPT.
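Those videos are not reproduced here, but the pattern they demonstrate is the classic stack buffer overflow (CWE-121), sketched below in a hypothetical C++ snippet: a fixed-size buffer filled with strcpy and no bounds check.

```cpp
// Hypothetical illustration of the buffer overflow class shown in such
// videos; this is not code captured from ChatGPT.
#include <cstdio>
#include <cstring>
#include <iostream>
#include <string>

// VULNERABLE: strcpy writes past the 16-byte stack buffer whenever the
// input is longer, corrupting adjacent memory.
void greetUnsafe(const char* name) {
    char buffer[16];
    std::strcpy(buffer, name); // no bounds check
    std::printf("Hello, %s\n", buffer);
}

// SAFER: std::string grows to fit any input, so there is no fixed
// buffer to overrun.
void greetSafe(const std::string& name) {
    std::cout << "Hello, " << name << '\n';
}

int main(int argc, char** argv) {
    const char* name = argc > 1 ? argv[1] : "world";
    greetSafe(name);   // fine for input of any length
    greetUnsafe(name); // undefined behavior once name exceeds 15 chars
}
```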

It is for reasons like these that some observers believe a return to oral exams is necessary in the era of AI and ChatGPT

Oral examinations have been an essential part of university education for centuries. They give students a unique opportunity to demonstrate their knowledge and understanding of a subject in a face-to-face setting. Unlike written exams, oral exams let students explain their thought process and reasoning and engage in a dialogue with their examiner. This can be particularly beneficial for students who have difficulty expressing themselves in writing. Yet the oral exam began to decline in the 1700s.

Universities then began to turn to written assessments. Marking written papers was also a quiet process that gave examiners ample time to grade from the comfort of their own homes. That said, according to some experts, although written assessment may seem more efficient and cost-effective, it comes at a cost: written exams do not provide insight into a student’s thought process and reasoning the way oral exams do, and they do not allow the same level of interaction between student and examiner.

One of these experts, Dobson, believes things have become even more problematic in recent years, as the rise of AI and AI chatbots has led universities to abandon oral exams in favor of written exams that can be graded automatically. The use of AI for grading has raised concerns about the accuracy and fairness of the process: although these technologies have advanced considerably in recent years, they remain prone to errors and biases. This is especially problematic in subjects that require subjective assessment, such as literature or philosophy.

For these reasons and others, some observers believe universities should return to oral exams in the era of generative AI and ChatGPT. While AI and chatbots can be useful tools for automating certain tasks, these observers argue that the technology should not replace the human element of education. In their view, oral exams give students a unique opportunity to engage with their examiners and demonstrate their understanding of a topic in a way that AI or chatbots cannot replicate. They add that oral exams eliminate the risk of cheating outright.

And you?

Can artificial intelligence really replace or complement the role of a human tutor or teacher? Are there not aspects of education, such as human connection, empathy, intuition, creativity and ethics, that cannot be reproduced or simulated by a machine?
Can artificial intelligence guarantee quality education for all? Are there not risks of exclusion, discrimination, manipulation, surveillance and dependence that can affect users of these tools? How can we ensure the transparency, reliability, security, diversity, inclusion and fairness of these tools?
Can artificial intelligence adapt to every context and every educational culture? Are there not differences, specificities, needs and values that vary by country, region, institution, discipline, level, student and teacher? How can we respect the plurality and singularity of these contexts and cultures?

See also:

“ChatGPT is set to change education as we know it, not destroy it as some think,” says Douglas Heaven of MIT Technology Review

ChatGPT now writes student essays and higher education faces a serious problem, detecting AI-generated content seems increasingly difficult

51% of teachers say they use ChatGPT as part of their work, as do 33% of students, and say the tool has had a positive impact on their teaching and learning

Professor catches student cheating with AI chatbot ChatGPT: ‘I’m terrified’, says tools could make cheating worse in higher education
