Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1. None of what we have today is actually AI. They’re LLMs which are just fancy next word prediction tools. They have knowledge from which to assemble responses, but not intelligence.
2. Hallucinations are not when LLMs go wrong. It’s when their GUESS doesn’t conform to reality. But the PROCESS between a ‘right’ and ‘wrong’ answer is the same. Every response is a hallucination in the sense that it’s a probabilistic guess.
Source: YouTube · AI Jobs · 2026-03-22T13:5… · ♥ 2
Coding Result
Responsibility: none
Reasoning: mixed
Policy: unclear
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw888M1SwdwZDr2ifl4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzSYoJEPIdidzUVKdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "disgust"},
  {"id": "ytc_UgyPTzb9EE7pe6JCuE94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyYoZR_A96yRhn2arN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwZoyIOrw_7aehcts54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyWsFfXWZK0jFufr1x4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKHkwEA00jBygdX554AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzrAEFzy5iP2tas06x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzazYmZQW8qxJjS0QF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "sadness"},
  {"id": "ytc_Ugy9M2bTMK5mOMuVsGJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
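A raw response like the one above can be parsed and validated before the per-comment codes are stored. The sketch below shows one way to do this in Python; the allowed category values are inferred only from this single response, so the real codebook may define additional labels, and `parse_coding` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Category values observed in this response alone (assumption: the full
# codebook may define more labels than appear here).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist"},
    "policy": {"unclear", "ban", "none", "liability", "regulate"},
    "emotion": {"indifference", "disgust", "approval", "fear",
                "outrage", "sadness", "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values
    fall inside the allowed coding scheme."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Minimal usage example with one row from the response above.
raw = ('[{"id":"ytc_Ugw888M1SwdwZDr2ifl4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]')
print(parse_coding(raw))
```

Dropping out-of-scheme rows (rather than raising) lets a batch of coded comments survive one malformed LLM guess; logging the rejects instead would be an equally reasonable design choice.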