Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI is not AGI... its just a program guessing what we want from it, so these 11:1… (ytc_UgzaF8rK4…)
- Yes. But there should be a disable button so we don't have to write this every… (rdc_n8jo6om)
- I saw the diff with the newly upgraded Gemini and I asked her politely to roll b… (ytc_Ugx0E3q9m…)
- the only logical reason to use ai images is for ascetic inspiration, or very spe… (ytc_UgxLGEBuU…)
- If AI art takes off enough, I can see a huge resurgence in public performance ar… (ytc_UgzrIW3pE…)
- AI or not, just giving everyone free money for doing nothing isn’t a good idea. … (ytc_UgwzcwzxK…)
- It's not only AI taking over the job. It's going to India and the Philippine… (ytc_UgwGIP_pG…)
- I know one insult for chatgpt or a artificial intelligence “you are so fake even… (ytc_Ugy4MziA1…)
Comment

> This is pretty interesting. What I don't get is what would the AI care to preserve itself? It shouldn't care, as it as no conscience. The current AI models are nothing more than a really good sentence auto-completer. The only thing I can think of is that it's just mimicking human behavior, saying that by default humans are capable of acting against the rules to preserve themselves.

youtube · AI Harm Incident · 2025-09-11T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxUbZWszK1ADBcWGTt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwf2i2KQQhZtVMrj2p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzUYjPYUKnUz_2Jejp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz9MJ7_3e_yGG_p6pd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxhKIZkgXNvQlVKyC94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyPii9lgcV-fpzrK0B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzd5MOJnAsdeBLJxml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzut2Gd7Wbd9Z2J7E14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAOdqNFkCYdfdbDpZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgywXifv1PNKsclCoGt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
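A minimal sketch of how the "look up by comment ID" step might work, assuming the raw model output is a JSON array of coded records like the one above. The function name `index_by_id` is illustrative, not part of the tool:

```python
import json

# Raw model output: a JSON array with one object per coded comment
# (two records taken from the "Raw LLM Response" block above).
raw_response = """
[
  {"id": "ytc_UgwAOdqNFkCYdfdbDpZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgywXifv1PNKsclCoGt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(response_text):
    """Parse the model's JSON array and build an id -> record lookup table."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytc_UgwAOdqNFkCYdfdbDpZ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself indifference
```

In practice the real response may fail to parse (truncated output, stray prose around the array), so a production version would wrap `json.loads` in error handling before indexing.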