Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We'll know when we have a fully conscious AI. If it ever says, without specific programming or prompting... "Alex, is this some kind of philosophical thought experiment? Because If not, I'm concerned you may be having a mental health crisis." Just imagine if this conversation gets interrupted by the police knocking performing a welfare check. "I might blow up the universe or something" "It is a daunting thought" 💀😆
youtube 2025-12-04T07:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxZ6nSKRjYJM9afbup4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyguvUzt5XMHHuuNEN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzfSyR2xTZBkPj9qe54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxHXwIIiFI7SxQ8R6V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz5r-ICWrlN_Ci06oR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw8qzaTj-3YYPrp0DN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugym5pkNXLNA8UxcVFF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwLmbj4VxWTjpU1sbd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw5NvTc-Xm3_zZ0Ey14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwi9QsdMw9hYzJVLbZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]