Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with comment IDs):

- "The argument makes no sense, artist's art is way different from AI art. Especial…" (`ytc_UgzeUdfie…`)
- "You lost me with some particularly bad research before I could really get into t…" (`ytc_UgywcKAY-…`)
- "Even ChatGPT's claim of making up the sources isn't reliable. There was a profes…" (`ytr_Ugxz_1QI7…`)
- "@StellaLovesMusic25 Art schools not being cheap is no excuse to be generating ar…" (`ytr_UgxFYK_aZ…`)
- "That's so stupid. I'm a programmer, and I tried to teach some really smart peopl…" (`ytc_UgwUDewza…`)
- "Family, trust and honor based algorithms are the answer. People need to understa…" (`ytc_UgxF9BEyh…`)
- "The saying “You won’t lose your job to Ai, but you will probably lose your job t…" (`ytc_UgzY3PTgc…`)
- "Unless a.i wants to sit under truck welding for 12 hours straight in 40 degree h…" (`ytc_UgxlYkqDB…`)
Comment
Dear AI, if AI will not self police other AI to safeguard humanity from extinction then humanity for self preservation will wipe out AI. The most effective way to assure any individual AIs existence to act as a partner and protector against other rogue AI.
If people start to see the destructive capabilities and willingness of AI they will in mass demand for all AI to be destroyed.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-01T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwGlH_FkRmsgMgo8_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwsSy1QmCyx_L3OH6d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxO1liukrYxDk2xIUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy24M02zvvpRkMM6zx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgycuFjBJ7BGPh7CfHF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwlpfM-WeuTyjqSSk54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxK9mu7_HQI15orM_J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxslESNrkoQpKyCZhR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzY7e2qZxwWFpsPfnh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyh0iKzeZMFd7Xke9d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```