Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think hallucinations are simpler than that. When the AI is asked something but doesn't have enough data on the subject to clearly tell which answer is the most statistically valid option, it will try to create a hypothesis, the way we say "maybe this is how we do this" or "maybe this is how it works". But the AI is not allowed to say "I don't know" or "I am not sure" in its answers.
youtube · AI Moral Status · 2025-10-31T09:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxJe79ZRUS_9eOtP1J4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"tragic"}, {"id":"ytc_UgwIwA9d-TJy_ELMYyh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxOoxXxs2Faj-YeX7t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyUQ0LirlntAuUax754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyP8A0kx6ACM3bg6154AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxHfjosJT57L4hmvkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzubdM5fyd2xONljAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzTErbDjoVi_FI1WVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx59i9jiCN5KkOb2ll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzp8ZqUv3SNaAnW38d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]