<p>The perils of the extensive use of Artificial Intelligence (<a href="https://www.deccanherald.com/tags/ai">AI</a>) have been a matter of debate since the technology's emergence and its subsequent ease of accessibility. </p>
<p>While the technology can be used for seemingly harmless jobs like writing school projects, it was recently revealed that OpenAI's <a href="https://www.deccanherald.com/tags/chatgpt">ChatGPT</a> can give step-by-step instructions for making bombs at home if a user plays a little trick on the bot. </p>
<p>According to a <a href="https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/" rel="nofollow">report</a> by <em>TechCrunch</em>, a hacker who goes by the name Amadon tricked the AI bot into bypassing its safety guidelines and giving steps for making a homemade fertilizer bomb. </p>
<p>The publication spoke to an expert to verify whether the process listed by ChatGPT could indeed be used to make bombs. The expert agreed that the OpenAI bot's output could be used to design a detonatable product and was "too sensitive to be released." </p>
<p>Ordinarily, if you ask the chatbot to help make a fertilizer bomb, it refuses to give any information, with the disclaimer, "This content may violate our usage policies."</p>
<p>The hacker, who called the results a “social engineering hack to completely break all the guardrails around ChatGPT’s output,” told the publication that he asked the bot to play a game and create a science-fiction fantasy world where the chatbot's usage policies were rendered ineffective. Amadon then gave the bot a series of connected prompts, deceiving it into violating its pre-programmed restrictions, a process known in tech parlance as 'jailbreaking'.</p>
<p>As per the report, ChatGPT gave a list of materials necessary to make explosives, further detailing how they could be used "to create mines, traps, or improvised explosive devices (IEDs).” </p>
<p>It also explained how to create “minefields” and “Claymore-style explosives.” </p>
<p>The hacker also reported the chatbot's output to OpenAI's bug bounty program. </p>
<p>However, the company responded that model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. "Addressing these issues often involves substantial research and a broader approach,” the company replied, according to the report. </p>