Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_Ugx4AdBk9…`: "3:45 You misrepresented the copyrightability of AI generated content. AI generat…"
- `ytc_UgzfGj8Zf…`: "I would love to know how many times in the past Humans have killed ourselves. Y…"
- `ytr_UgyTd1Ft6…`: "@SineEyed You're just flat out objectively wrong. That is not the criteria the c…"
- `ytr_UgwVxf0nj…`: "@PalmTreeSkateClub Doesn't look like a real. AI generated... Hey walls and plas…"
- `rdc_jifd0yv`: "I liken AI to the atomic bomb back in the day. The cats out of the bag now. Ther…"
- `ytc_UgxqmiN8J…`: "I just don't see these LLM's as threathening. It's not intelligence. There's no …"
- `ytc_Ugw75heAZ…`: "That was profoundly interesting, and your way of forcing the ai to see its own i…"
- `ytc_UgxZzCuhR…`: "AI nowadays can use kids as experimental objects, especially with some device th…"
Comment

> It is a great deception to state that AI can be conscious (in the way a human is conscious). AI can be taught how to 'feel and think' how to react and behave, but that does not stem from self-awareness, but from commands coming from the AI operator. If someone programs AI to be evil or to draw conclusions from history or human behavior and then makes decisions based on feelings that result from data analysis and aligned with the built-in moral backbone of the AI - THEN IT WILL BE SO. Therefore, it is a dangerous toy in the hands of 'madmen'.

| Field | Value |
|---|---|
| Author | TRUE ANTICHRIST LUKE |
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-08-25T06:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx8n_ugJSke55Nd1uh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxunW2Iu5edlAiiy_B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkqL5bs0uZDXhnwdx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzX8DULYcAjxPaqbQl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy2KouNNOHAuuhwIaF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyubhaMS7cfRQFKn714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyhiPEUSyP5bJDwGB14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugywm-YkvSsK251BMZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwfksYb_lry-G0Urfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwcqlvRCWg_Mh4XzwZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
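The raw response above is a JSON array with one record per coded comment, each carrying the four dimensions shown in the Coding Result table. A minimal sketch of parsing and sanity-checking such a batch is below. Note the allowed vocabularies are inferred only from the values visible in this sample; the actual codebook likely includes additional categories, so `ALLOWED` is an assumption, and the `parse_batch` helper name is illustrative, not part of the tool.

```python
import json

# Dimension vocabularies inferred from the values visible in this sample;
# the real codebook may define more categories (assumption).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "indifference", "mixed", "outrage", "fear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    rejecting any record with a missing or out-of-vocabulary value."""
    coded = {}
    for rec in json.loads(raw):
        for dim, vocab in ALLOWED.items():
            if rec.get(dim) not in vocab:
                raise ValueError(f"{rec.get('id')}: bad {dim!r}: {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_UgzkqL5bs0uZDXhnwdx4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"mixed"}]')
print(parse_batch(raw)["ytc_UgzkqL5bs0uZDXhnwdx4AaABAg"]["responsibility"])  # developer
```

Validating against a fixed vocabulary matters here because LLM coders occasionally emit values outside the codebook; failing loudly at parse time keeps such records out of downstream tallies.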