ChatGPT, the AI in the service of cheating and scamming
ChatGPT, the chatbot developed by the Californian start-up OpenAI, is proving, as its capabilities come to light, to be a formidably efficient tool, for better and for worse.
Within weeks, ChatGPT became a social media star. This "conversational agent" developed by OpenAI has been put to every imaginable use: it responds to the most absurd requests with imperturbable calm and can write an article, a poem, a speech or a song on any theme and in any style.
The result is usually indistinguishable from serious work done by a human being. But now that the initial excitement of discovering a gadget with seemingly unlimited skills has passed, some are finding very concrete applications for it, as a lecturer in disability studies at a faculty in Lyon discovered, reports Le Progrès. Stéphane Bonvallet had given his students three weeks to work on the following subject: "Defining the main features of the medical approach to disability in Europe."
However, on half of the 14 papers returned, the professor noticed troubling similarities: "We found the same grammatical constructions. The reasoning was conducted in the same order, with the same strengths and the same flaws. Finally, they were all illustrated by a personal example involving a grandmother or a grandfather…" he told Le Progrès. This was not a banal copy-and-paste from Wikipedia but a new kind of cheating.
After a brief investigation, the professor was stunned to discover that seven of the 14 assignments had been written by ChatGPT. It came as a total surprise to Stéphane Bonvallet, who had heard of the chatbot without imagining for a moment that it was capable of writing an entire assignment.
In the absence of any rule prohibiting the tool, Le Progrès notes, the professor had no choice but to mark the papers on their merits, which earned them between 10 and 12. He nevertheless alerted the university administration and, on raising the subject with colleagues, realized that cheating with ChatGPT is already a massive phenomenon.
In New York, the response was swift: the use of ChatGPT is now banned in all public schools. But how do you spot ChatGPT's hand in an assignment, a much harder task than detecting simple plagiarism with software? Here too, countermeasures are being developed. OpenAI, for example, plans to tie every text ChatGPT produces to a cryptographic key, comparable to an invisible watermark hidden in a photo, proving that it was generated by the bot. Unfortunately, the Journal du Geek points out, slightly rephrasing the text would be enough to render the protection useless.
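To make the watermarking idea concrete, here is a minimal, purely illustrative sketch of how a keyed statistical watermark on text could work, loosely inspired by published research proposals; it is an assumption for illustration, not OpenAI's actual scheme, and all names and parameters are invented. A keyed hash deterministically marks roughly half of all word choices as "green" given their context; a watermarking generator would bias its output toward green words, so watermarked text scores well above 50% while ordinary human text hovers near 50%.

```python
import hashlib

# Hypothetical shared secret between the generator and the detector.
SECRET_KEY = b"demo-key"

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark ~half of word choices as 'green', keyed on context."""
    digest = hashlib.sha256(SECRET_KEY + prev_word.encode() + word.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Detector side: fraction of consecutive word pairs that land on the green list.

    Watermarked text (sampled with a bias toward green words) would score
    well above 0.5; unwatermarked human text should hover near 0.5.
    """
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

Because the signal lives in the exact word pairs chosen, paraphrasing the text changes those pairs and erodes the statistic back toward 0.5, which is precisely the weakness the Journal du Geek describes.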
And the flood of problems and questions raised by the rise of ChatGPT keeps growing: it could become a massive propaganda tool, capable of generating thousands of well-argued pages in a matter of seconds, and, Numerama notes, online scammers are reportedly already using it to generate convincing scam and phishing emails in many languages. And all this without a single syntax or spelling mistake…

