Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The bit you blew past about AI is that it doesn't store facts and just "knows" what it has been. It doesn't always give you the right answer. It doesn't really know anything. That's why I avoid AI as much as I can. I can't trust it to give me the right answer every time, so I can't trust it, period.
youtube AI Moral Status 2025-04-05T17:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugxcl5Gm5gmGebn4ZuR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwofssQtEpNj91SnDp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxlsPtRgEE2gA400ml4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgysjU1al9jbqAPlpFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx0YIPlFqC0PhIzFUl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-RITWLnxh6kqDOup4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxjFV9YkG6v8l5k0Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzww4Q7Eu4Z1gK8g9N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxDsiqybE_Xc2cFe8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugylr7j8RY8vBO-AeWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
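A raw response like the one above is only usable if every item carries the four coding dimensions with recognized labels. The sketch below shows one way to parse and sanity-check such a response in Python. It is a minimal illustration, not the project's actual pipeline: the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more), and `validate_response` is a hypothetical helper name.

```python
import json

# Allowed labels per coding dimension, inferred from the sample output
# above; the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed items.

    An item is well-formed when it has an "id" and every coding
    dimension holds a recognized label.
    """
    items = json.loads(raw)
    kept = []
    for item in items:
        if "id" not in item:
            continue
        if all(item.get(dim) in labels for dim, labels in ALLOWED.items()):
            kept.append(item)
    return kept

# Example: one valid item and one with an out-of-codebook emotion.
raw = (
    '[{"id":"ytc_Ugxcl5Gm5gmGebn4ZuR4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_bad","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"joy"}]'
)
valid = validate_response(raw)
print(len(valid))  # the item with emotion "joy" is dropped
```

Dropped items could instead be re-queued for a retry prompt; filtering is just the simplest policy to demonstrate.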