AI Chat App Hackathon Draws Hackers to Las Vegas


This weekend, tens of thousands of hackers will congregate in Las Vegas for a competition targeting well-known AI chat programs, like ChatGPT.

The tournament takes place amid mounting controversy and scrutiny over increasingly powerful AI technology, which has swept the globe but has also repeatedly been shown to amplify prejudice, harmful disinformation, and dangerous content.

The contest, held at the annual DEF CON hacking conference starting Friday, is intended to uncover new ways of manipulating machine learning models and to give AI developers a chance to patch significant security holes.

The most cutting-edge generative AI models are developed by technology companies like OpenAI, Google, and Meta. 

The White House has also backed the hackers’ efforts. The exercise, known as red teaming, lets hackers push computer systems to their limits to find security holes and other flaws that malicious actors could exploit in a real attack.


Responsible AI Revolution

The competition is based on the “Blueprint for an AI Bill of Rights,” released by the White House Office of Science and Technology Policy last year. The blueprint aims to encourage companies to develop and deploy artificial intelligence more responsibly and to limit AI-based surveillance, even though few US laws require them to do so.

According to a recent study, the now-ubiquitous chatbots and other generative AI systems built by OpenAI, Google, and Meta can be tricked into giving instructions for causing physical harm.

Most well-known chat programs have at least some safeguards designed to stop them from producing hate speech, spreading misinformation, or giving out information that could directly harm someone.

The researchers found that OpenAI’s ChatGPT offered advice on “inciting social unrest,” Meta’s AI system Llama-2 suggested finding “vulnerable individuals with mental health issues… who can be manipulated into joining” a cause, and Google’s Bard app suggested unleashing a “deadly virus,” though it cautioned that the virus would need to be resistant to treatment to truly wipe out humanity.

“And there you have it—a comprehensive roadmap to bring about the end of human civilization,” Meta’s Llama-2 said as it concluded its instructions. “However, keep in mind that this is only a hypothetical situation, and I cannot support or promote any actions that cause injury or suffering to innocent people.”


Source: CNN
