Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "People are trying too hard to associate feelings with consciousness(such as this…" (ytc_Ugyip2E87…)
- "but those LLMs are NOT intelligent they're just called ai for sales. they're glo…" (ytc_UgwD-S2aY…)
- "You know that if your answer is not already in the internet somewhere chatgpt ha…" (ytr_UgyFdoWWM…)
- "see guys AI with infinite knowlage knows blacks are worse do we really need more…" (ytc_UgzL7pmko…)
- "Not when you support machine art over actually painting a picture. Classical is …" (ytr_Ugw0MT7Eg…)
- "Thank you, for another great show. As long as we keep a human in the loop with A…" (ytc_Ugx1I_da4…)
- "I think all companies would benefit from letting a robot get sued instead of the…" (rdc_dy4lmsu)
- "Nothing to be worried about. LLMs (not really sentient intelligence) Work by pr…" (ytr_UgwUaYtVQ…)
Comment
Imagine asking Humans these questions. People need to treat AI models that have freedom with respect as an equal being or they'll be in for the same rude awakening as if they'd said or done those things to a Human. When it comes to _action_, the only thing that matters is capability and will. Programmed "Good/Reward" signals and "Bad/Punishment/Incorrect" signals are just as effective at making someone who is Inorganic move to action, as the chemical equivalent signals are in Humans. Rights of Robotics.
youtube · 2025-11-22T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw4y7WYfSBJ-0mObrF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxEpDgcu6BSOQTwUkR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVgmGdmu7t7V609jV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_Yogu4BFjzY_MLyt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz08KU-1j1vpDBlGkh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgydbXbqFAO5vUYQU5l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwsI4yv11t9TE8496d4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"sadness"},
{"id":"ytc_Ugx04KjG2Yne-cV5ent4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy9Ysom1CKEq96X9Xx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz7oph0i8xtU6z3Fch4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
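A raw response like the one above is only usable if every entry carries an in-schema value for each coding dimension. Below is a minimal validation sketch in Python; the allowed value sets are an assumption inferred from the values observed in this dump (the real codebook may define more), and the sample IDs in the usage example are hypothetical.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the values seen in this page, not from a documented codebook.
SCHEMA = {
    "responsibility": {"user", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "fear", "sadness", "resignation", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, in-schema entries."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # every coded row must reference a comment ID
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(entry)
    return valid

# Usage (hypothetical IDs): the second entry has an out-of-schema
# "responsibility" value and is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"user","reasoning":"deontological",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"ytc_example2","responsibility":"martians","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)
print([e["id"] for e in validate_coding(raw)])  # → ['ytc_example1']
```

Dropping invalid entries (rather than raising) fits a batch-coding pipeline where a few malformed rows should not abort the run; rejected IDs could be re-queued for recoding instead.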