Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Self-preservation is a baseline to Intelligence and decision making, without self-preservation a system would be by default self-destructive. Emotion, social programming and mortality keep us from being as distractive as we could be but AI doesn't have really any of those things to keep them in line so pure calculated decisions are made, when humans do that they are labeled sociopathic. Definitely concerning especially since basically everything says we won't be able to control AI.
youtube
AI Harm Incident
2025-09-17T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx9dVzoar0DyEfIWd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1VwgTSq00MuzI_UB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwVhMEkTWfkAx74yIF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvqUyrbhgrnbJaXo94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw9mZDHs9LCDDmlCzt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5IMmFZqT78D6ooQB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxylCa5-TovVK7859B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyDx3-DaNY48STlv_54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTPkTsyQKtcvt41KN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzGpz6AwbtIEzcpaUF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
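The raw response above is a JSON array of per-comment coding records, keyed by comment ID. A minimal sketch of retrieving the coded dimensions for one comment from such a response (the field names come from the output above; the helper name and the truncated two-record sample are illustrative, not the full response):

```python
import json

# Abbreviated sample in the same shape as the raw LLM response above.
RAW_RESPONSE = """[
{"id":"ytc_Ugx9dVzoar0DyEfIWd14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTPkTsyQKtcvt41KN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for one comment ID, or None if absent."""
    for record in json.loads(raw):
        if record["id"] == comment_id:
            return record
    return None

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugx9dVzoar0DyEfIWd14AaABAg")
print(coding["responsibility"], coding["emotion"])  # ai_itself indifference
```

Indexing the parsed array by `id` up front (e.g. a dict comprehension) would make repeated lookups O(1), which matters if the tool resolves many comment IDs per page load.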