Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
44:51 saying that it cares is giving it unearned agency. It computes. It reasons by statistical inference. If you want an intelligence with the ability to formulate its own goals/feelings about goals/wants, LLMs aren't it. This whole back and forth seems to me to be a fundamentally erroneous anthropomorphizing of a system not built for any of that.
youtube · AI Governance · 2025-10-15T16:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyRUbpBI9j6RRbvOut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzwyhRynN5eXO6cOcl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwy6Ya4-kpt2i4XZP14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyGa4S8SgchrDP-_-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyOWv5nCu3aTf6GOJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugwx2COP9PTFWz8AwV14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyrztQWCIkdOhXw5il4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxygWqXL_6_xbYJoVV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwyzMU25mfS7puIar14AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy9siEg4WVGOuU47EF4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"} ]