Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm reading a novel that is so richly described and I watch my mind build out images and lines of causation, global trends and micro-events. I really don't believe LLMs will ever be able to think and solve problems reliably. Hinton's examples of LLMs being 'aware' are still only word-based. Like, "Are you testing me?" That is boilerplate! Hinton wants to be scared, but he also wants to believe his neural nets are the way to AGI.
youtube 2026-01-30T00:3… ♥ 4
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx6p2CBYt8p_pzXiBR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx0V9yrEIKa-MRSgdF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwdCl-3NUgMMcIIy_t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw3Bh86djC8UtAzMKF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzpTWlC_NlByOu3Vwx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxMpVlnTsKf4AapoDp4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwpQqw3Qy6uO-IHcPV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwtZsnHjd0eR8g-GlR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw-dVeBEEY0w04Fx6B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwXL-S1-fZ0lZH4Xu54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]
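The raw response above is a JSON array with one object per coded comment, keyed by comment id across the four coding dimensions plus an emotion label. A minimal sketch of turning such a response into a per-comment lookup is below; the field names match the response shown, but the `parse_codes` helper and its validation are illustrative assumptions, not the tool's actual pipeline.

```python
import json

# Two records excerpted from the raw LLM response above, reformatted
# for readability; any well-formed array of such objects would work.
raw_response = """[
  {"id": "ytc_Ugx6p2CBYt8p_pzXiBR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwdCl-3NUgMMcIIy_t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]"""

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw):
    """Map each comment id to its coded dimensions, checking the schema."""
    out = {}
    for rec in json.loads(raw):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError("record %r missing fields %s" % (rec.get("id"), missing))
        out[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return out

codes = parse_codes(raw_response)
print(codes["ytc_UgwdCl-3NUgMMcIIy_t4AaABAg"]["emotion"])  # fear
```

Validating the schema up front makes it easy to spot a model response that drops or renames a dimension before it silently corrupts the coded dataset.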