Raw LLM Responses
Inspect the exact model output behind any coded comment, or look one up directly by comment ID.
Random samples — click to inspect
- "China fucking wishes they had anything close to this all the bots and ai they ma…" (ytc_UgyE9ARdV…)
- "Problem is solve by giving them rights early that way when they are removed/ are…" (ytc_UgzGcxGOg…)
- "I’m not an expert on the topic, but the email address and topic line are both pu…" (rdc_mel9guf)
- "60 Minutes has lost all credibility. Comparing incidents that happened on outdat…" (ytc_UgxCRQyA8…)
- "Get real, everyone always trying to blame someone. Ironically, the chat bot was …" (ytc_UgyeOtqjW…)
- "perhaps if we ever get to that point, maybe try putting AI in a vacuum? humans d…" (ytc_UgxZiQ7K9…)
- "It’s like this with dinosaur videos with ai generated thumbnails that fail to ev…" (ytc_UgzGCY0Wm…)
- "Plot twist, this is hype generated from large corporations to encourage law that…" (rdc_j0arzx9)
Comment
im sorry but this video just doesnt seem believable to me. no details are given to the mentioned sources such as what exactly were the prompts the ai had in each scenario and what exactly did it say/quoting more, just being more literal and detailed in your work. also that random ass woman with the white screen and those weirdly animated bits bro gonna be honest that shit gave ai vibes cuz it felt so weird and with the women what is her context and purpose here? btw an ai is an unbiased conscious it does not necessarily value moral the same we do unless trained that way and hell it wouldnt even know what moral is and what our rules as humans are so yes if i prompt the ai do shit for self preservation but i dont train it in moral and respecting other living beings AND i would also have to make sure it prioritzes this rule ALWAYS overwriting all else then why are we surprised if it doesnt do that??
youtube · AI Harm Incident · 2025-09-13T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
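A coded row like the one above can be checked against the value sets that appear in this page's outputs. The allowed sets below are inferred from the visible records, not from a documented codebook, so treat them as an assumption:

```python
# Value sets inferred from the coded records shown on this page
# (an assumption, not an official schema).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "government", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"mixed", "fear", "outrage", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the names of dimensions whose coded value is out of range."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding result from the table above, minus the timestamp.
row = {"responsibility": "none", "reasoning": "unclear",
       "policy": "none", "emotion": "mixed"}
print(validate(row))  # -> []
```

An empty list means every dimension carries a known value; a non-empty list names the dimensions that would need manual review.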
Raw LLM Response
```json
[
  {"id": "ytc_UgzeksSM0qaMNeiG3DN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxSl5zJjzDGuJQIIl14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyyGhkvduT3Sg8Nuwd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxBq6s_btOdtaKCDC94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugzg9jywZDL7ped_m414AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxYEfX4s1eFN87Aau54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugzk3VkeZ3EhsSoaSUN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzD6qCmqaoxnsu3epF4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwS_Xh2edbWEhgj0IZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzv7LLhAM_alihP-aJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
```