Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm really struggling with using the word 'thinking' with the current LLM generation. I understand Mr Hintons argumentation, still. Do I put too much meaning into 'thinking'? Reasoning LLMs output their steps before the "final result", yes, but is that thinking? We talk to ourself too, some with language, some with images, before our final result. Man ... that's really conflicting me.
YouTube · AI Moral Status · 2026-03-01T09:4… · ♥ 2
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyrYPMJIx01PaBglQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxbV4gA7PUB-Sz1S3B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNdVBMkLQD7MJtRKZ4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzfZF0xzTqFXXUvvnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVqJKq-wwhxQ7fDKt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzKEtJMT4pWkOnFZPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzcGh7fEN9SDXkntQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxyCnrmrdleX5QzfZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwspGQFaQ-zc8-tbU94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyf45Zmp0ouG4LAEvl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
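The raw response is a JSON array with one object per coded comment, keyed by comment id, with one field per coding dimension. A minimal sketch of how such a batch could be inspected programmatically — looking up the coding for a single comment and tallying one dimension across the batch. The variable names and the two-entry sample array here are illustrative, not part of the tool's actual pipeline:

```python
import json
from collections import Counter

# Illustrative two-entry sample in the same shape as the raw response above.
raw = '''[
  {"id":"ytc_UgyrYPMJIx01PaBglQB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKEtJMT4pWkOnFZPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

codes = json.loads(raw)

# Index by comment id to inspect the exact coding for one comment.
by_id = {c["id"]: c for c in codes}
print(by_id["ytc_UgyrYPMJIx01PaBglQB4AaABAg"]["emotion"])  # approval

# Tally a single dimension across the batch.
tally = Counter(c["responsibility"] for c in codes)
print(tally.most_common())  # [('developer', 2)]
```

Keeping the raw array alongside the per-comment summary makes it easy to spot disagreements between the batch output and the displayed coding (e.g. a "mixed" summary value versus a specific value in one array entry).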