Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with asking known Turing questions is that the AI model will already know the answer to them. You’d have to come up with a brand new one that isn’t derivative in order to actually test it. It’s like asking it to generate code that prints “Hello, World!” It’ll just yank the most refactored version of that simple code and print it. It’s not very useful or interesting.
YouTube · AI Moral Status · 2023-06-06T01:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugyy69bHwXDcQ_4aPIt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2LpWRVs56I78j6kR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwS-mTx_1Nlle1cacR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMZIkWJ5MnISHyOa54AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy2Z0e_VnWw_ffMHr14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzUWEFJNydvafIY62V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfY-jr2lr4qAgWu_94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEuHDLI1QT1APKghF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzMZ-0FmYgifvhfClF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjIw0X4Mn_nYgRd_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
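A raw response in this shape can be validated and tallied with a few lines of Python. This is a minimal sketch, not the pipeline's actual code: `parse_codes` and `tally` are hypothetical helper names, and it assumes each coded comment carries exactly the five keys shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). The two sample records are taken verbatim from the response above.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugyy69bHwXDcQ_4aPIt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy2LpWRVs56I78j6kR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response and check every record has the expected keys."""
    codes = json.loads(text)
    for record in codes:
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"{record.get('id', '<no id>')}: missing keys {missing}")
    return codes

def tally(codes, dimension):
    """Count how often each value appears along one coding dimension."""
    return Counter(record[dimension] for record in codes)

codes = parse_codes(raw)
print(tally(codes, "emotion"))         # one 'resignation', one 'fear'
print(tally(codes, "responsibility"))  # one 'none', one 'government'
```

Validating keys up front is useful here because LLM output occasionally drops or renames fields; failing loudly on a malformed record is safer than silently tallying partial data.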