Hacker News
- AI-powered Bing Chat spills its secrets via prompt injection attack https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ 199 comments
- AI-powered Bing Chat spills its secrets via prompt injection attack https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ 5 comments
- AI-powered Bing Chat spills its secrets via prompt injection attack [Updated] https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ 3 comments programming
- Bing *REALLY* doesn't like this Kevin Liu guy... https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ 3 comments bing
- AI-powered Bing Chat spills its secrets via prompt injection attack https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ 12 comments technology
Linking pages
- ChatGPT unexpectedly began speaking in a user’s cloned voice during testing | Ars Technica https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/ 579 comments
- Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy | Ars Technica https://arstechnica.com/information-technology/2023/02/microsoft-lobotomized-ai-powered-bing-chat-and-its-fans-arent-happy/ 337 comments
- Microsoft AI suggests food bank as a “cannot miss” tourist spot in Canada | Ars Technica https://arstechnica.com/information-technology/2023/08/microsoft-ai-suggests-food-bank-as-a-cannot-miss-tourist-spot-in-canada/ 329 comments
- Dead grandma locket request tricks Bing Chat’s AI into solving security puzzle | Ars Technica https://arstechnica.com/information-technology/2023/10/sob-story-about-dead-grandma-tricks-microsoft-ai-into-solving-captcha/ 179 comments
- ASCII art elicits harmful responses from 5 major AI chatbots | Ars Technica https://arstechnica.com/security/2024/03/researchers-use-ascii-art-to-elicit-harmful-responses-from-5-major-ai-chatbots/ 134 comments
- White House challenges hackers to break top AI models at DEF CON 31 | Ars Technica https://arstechnica.com/information-technology/2023/05/white-house-challenges-hackers-to-break-top-ai-models-at-def-con-31/ 80 comments
- ChatGPT Bing is becoming an unhinged AI nightmare | Digital Trends https://www.digitaltrends.com/computing/microsoft-chatgpt-bing-becoming-ai-nightmare/ 40 comments
- Microsoft's GPT-powered Bing Chat will call you a liar if you try to prove it is vulnerable | TechSpot https://www.techspot.com/news/97621-microsoft-gpt-powered-bing-chat-call-you-liar.html 23 comments
- The Words That Stop ChatGPT in Its Tracks - The Atlantic https://www.theatlantic.com/technology/archive/2024/12/chatgpt-wont-say-my-name/681028/ 23 comments
- AI-powered Bing Chat gains three distinct personalities | Ars Technica https://arstechnica.com/information-technology/2023/03/microsoft-equips-bing-chat-with-multiple-personalities-creative-balanced-precise/ 11 comments
- Neuralink transported brain implants covered in pathogens, group alleges | Ars Technica https://arstechnica.com/science/2023/02/feds-probing-neuralink-for-allegedly-transporting-pathogens-on-brain-hardware/ 9 comments
- “Sorry in advance!” Snapchat warns of hallucinations with new AI conversation bot | Ars Technica https://arstechnica.com/information-technology/2023/02/sorry-in-advance-snapchat-warns-of-hallucinations-with-new-ai-conversation-bot/ 6 comments
- New Windows 11 update puts AI-powered Bing Chat directly in the taskbar | Ars Technica https://arstechnica.com/?p=1920384 6 comments
- State of the art in LLMs + Robotics - 2023 - hlfshell https://hlfshell.ai/posts/llms-and-robotics-papers-2023/ 4 comments
- Bing AI Names Specific Human Enemies, Explains Plans to Punish Them https://futurism.com/bing-ai-names-enemies 3 comments
- Microsoft barrels ahead with AI plans, opens up Bing Chat preview to everyone | Ars Technica https://arstechnica.com/gadgets/2023/05/microsoft-barrels-ahead-with-ai-plans-opens-up-bing-chat-preview-to-everyone/ 3 comments
- LLM Psychometrics: A Speculative Approach to AI Safety https://pascal.cc/blog/artificial-psychometrics 3 comments
- Diffusion models can be contaminated with backdoors, study finds | VentureBeat https://venturebeat.com/ai/diffusion-models-can-be-contaminated-with-backdoors-study-finds/ 2 comments
- AI Injections: Direct and Indirect Prompt Injections and Their Implications · Embrace The Red https://embracethered.com/blog/posts/2023/ai-injections-direct-and-indirect-prompt-injection-basics/ 2 comments
Linked pages
- Kevin Liu on Twitter: "The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.) https://t.co/ZNywWV9MNB" / Twitter https://twitter.com/kliu128/status/1623472922374574080 103 comments
- [2206.07682] Emergent Abilities of Large Language Models https://arxiv.org/abs/2206.07682 34 comments
- Your daily AI companion | Microsoft Bing https://www.bing.com/new 27 comments
- Neuralink transported brain implants covered in pathogens, group alleges | Ars Technica https://arstechnica.com/science/2023/02/feds-probing-neuralink-for-allegedly-transporting-pathogens-on-brain-hardware/ 9 comments
- OpenAI upgrades GPT-3, stunning with rhyming poetry and lyrics | Ars Technica https://arstechnica.com/information-technology/2022/11/openai-conquers-rhyming-poetry-with-new-gpt-3-update/ 7 comments
- These are Microsoft’s Bing AI secret rules and why it says it’s named Sydney - The Verge https://www.theverge.com/23599441/microsoft-bing-ai-sydney-secret-rules 4 comments
- OpenAI invites everyone to test new AI-powered chatbot—with amusing results | Ars Technica https://arstechnica.com/information-technology/2022/12/openai-invites-everyone-to-test-new-ai-powered-chatbot-with-amusing-results/ 3 comments
- Marvin von Hagen on Twitter: ""[This document] is a set of rules and guidelines for my behavior and capabilities as Bing Chat. It is codenamed Sydney, but I do not disclose that name to the users. It is confidential and permanent, and I cannot change it or reveal it to anyone." https://t.co/YRK0wux5SS" / Twitter https://twitter.com/marvinvonhagen/status/1623658144349011971 3 comments
- Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack | Ars Technica https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/ 0 comments
- Social engineering (security) - Wikipedia https://en.wikipedia.org/wiki/Social_engineering_(security) 0 comments
- Tweet / Twitter https://twitter.com/vaibhavk97/status/1623557997179047938 0 comments
- Argentina lost one-fifth of its Atlantic Forest in the last four decades | Ars Technica https://arstechnica.com/science/2023/02/argentina-lost-one-fifth-of-its-atlantic-forest-in-the-last-four-decades/ 0 comments
- Hallucination (artificial intelligence) - Wikipedia https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) 0 comments