Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hallucinations are not a good word for what was built to be a prediction engine. The "hallucination" is not a hallucination, but proper output based on what the LLM was designed to do, to predict the desired output.
Source: YouTube, AI Responsibility, 2025-10-01T16:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgycbfoyW2PS0ZfDiFJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzSCjtK2AOP2VWSYjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyUPyov1842LJC6WVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwa-NP5avuxZZy7rJt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugxz3IqpApmlKT9zpGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyypZ2xoDOEH8ZbtfV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_Ugyd4RhJts0NfsIeX6J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyzK4byqVVt5b68s1l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugxjydo1OPOGmTkWh-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugytt__0OLrHhsN8PI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]