Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I don't think it's all actually open. From what I've read most of the research w…" (rdc_jkf05of)
- "This race to being the first to Ai is based on this primordial virus called fear…" (ytc_UgyqI8-ti…)
- "ai is not self-aware, as long as it dont self-aware all oh those situations are …" (ytc_UgwvnwYCA…)
- "Inspiration is different from taking pieces of art and putting them in a differe…" (ytc_UgzGlJM9N…)
- "People are very weak, and Ray Bradbury, who wrote many books and short stories i…" (ytc_UgyDvtWOo…)
- "And now we have Zuck—the guy who Instagrammed our kids and Facebooked our margin…" (ytc_Ugy7G6Vay…)
- "@ramonsouza7408 You'll be wrong about that as soon as AI goes rogue on a militar…" (ytr_UgzSZeAr4…)
- "If all your programmers are sexually progressive liberals then your AI will be e…" (ytc_UgynKfKrx…)
Comment
Those poor billionaires can't afford to test their systems properly in closed environments, whether it's chatbots or self-driving cars.
While shocking, none of it is surprising. Chatbots act similarly to fortune tellers in that they gather your information and extrapolate from it.
I'm pretty sure they'll have safeguards when it comes to terrorism/amok runs/assassination (esp. of politicians in charge), because it's a small market and would have enough impact to get them canceled.
In other words, they could have prevented this very easily and referred the user to a suicide prevention hotline; there was a calculated choice against it.
Look at Sora, for example: they had the safeguards ready to prevent MLK and others from being used in outrageous content. They just implemented them after the calculated outrage.
youtube · AI Harm Incident · 2025-11-07T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugzx7F1iQA9ibpWraUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxFLvVKNBNTzjY0OG14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzKAZy5fHaF9SpUOXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwbihkOBMfRbi6xa2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyfT_MdOPaFwXQeCyx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy49EU2CxS1uUcLBch4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtjCsMkaAP3x90tGx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz6cCovNlLLweF644V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwf_v-TTN_DgJk0_794AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyxF7LFmxiU3Z9BeaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
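The raw response above is a JSON array of coded comments, one object per comment, with the four dimensions from the coding-result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and tallied is shown below; `tally_codes` is a hypothetical helper, not part of the tool, and the two sample rows are taken from the response above.

```python
import json
from collections import Counter

# Two rows copied from the raw LLM response above, used as sample input.
RAW_RESPONSE = """
[
 {"id":"ytc_Ugzx7F1iQA9ibpWraUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzKAZy5fHaF9SpUOXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally_codes(raw: str) -> dict:
    """Parse a raw JSON coding response and count the values per dimension."""
    rows = json.loads(raw)
    counts = {dim: Counter() for dim in DIMENSIONS}
    for row in rows:
        for dim in DIMENSIONS:
            counts[dim][row[dim]] += 1
    return counts

counts = tally_codes(RAW_RESPONSE)
print(counts["responsibility"])  # Counter({'company': 1, 'ai_itself': 1})
```

The same helper would work unchanged on the full ten-row response, since every row carries the same four keys.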