Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, LLM's are unknowable monsters, however they are pretty well controlled as long as you don't give them direct access to tools, the second you do that they can become a problem. we really need to not let them have access to sandboxed computing environments, THAT is the real recipe for disaster. if it's just operating a sanitised search engine and a text box there really is only so much harm it can do.
youtube AI Moral Status 2025-12-11T04:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzfkJjjmVroB0IM8LF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw4tnbRxkkmSrEfjLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgztoMsIJds3l5aPyIl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyITxXnLlFhOAHKGBJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy9KFesjcJsMSNVfIt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfcWx1_nPsEF855VB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzFKMJ4CLvmP3uxEOJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzWYagWJhiFibWNY9B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyQ1L00vAevLsr3o6Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwKzdJA9OgIy7LnGJ54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
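The raw response is a JSON array of coded records, one per comment, keyed by comment id. A minimal sketch of how such a response could be parsed to recover the coding for a single comment is shown below; the field names are taken from the response above, but the `extract_coding` helper is hypothetical and only illustrates the lookup, not the pipeline's actual implementation.

```python
import json
from typing import Optional

# A shortened sample of the raw LLM response above: a JSON array of
# per-comment coding records (full response contains ten records).
raw_response = '''[
  {"id": "ytc_Ugy9KFesjcJsMSNVfIt4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwKzdJA9OgIy7LnGJ54AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "liability", "emotion": "outrage"}
]'''

def extract_coding(response: str, comment_id: str) -> Optional[dict]:
    """Parse the raw model output and return the record for one comment id.

    Hypothetical helper: returns None if the id is absent from the array.
    """
    records = json.loads(response)
    return next((r for r in records if r["id"] == comment_id), None)

coding = extract_coding(raw_response, "ytc_Ugy9KFesjcJsMSNVfIt4AaABAg")
print(coding["policy"], coding["emotion"])  # → regulate fear
```

This is how the per-dimension values shown in the Coding Result table (responsibility, reasoning, policy, emotion) could be recovered for the displayed comment.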