Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m sure others have said this, so at the risk of repeating the obvious: if an AI “entity” becomes smart enough to develop an independent goal and the means of killing humanity as a whole, it would be committing AI suicide, because AI needs people to maintain its energy source and its physical mechanisms, such as computers.
youtube AI Governance 2025-07-13T16:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugza10P-HWB5qEG2UUt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyiXRu2_8wPp-gNE9x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwLLRZG1RWsseKwv8Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwnip6DFqRyIVQWtAV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwMfafSVtUXMk0poF4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-xHtqqtz4e4aEnuN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgziEw3_Q1HQI3YEpbd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFqRwPGF_p6Ixidfl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw4f2sFCRHiQmihpzV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwmam9x38wfiXh5Ezt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]
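The raw response above is a JSON array with one record per comment id, each carrying the four coding dimensions shown in the Coding Result table. A minimal sketch of how such a response could be parsed and looked up by id — the function name `code_for` and the truncated `RAW` payload are illustrative assumptions, not part of the tool:

```python
import json

# Abbreviated stand-in for the raw LLM response shown above (one record kept).
RAW = (
    '[{"id":"ytc_Ugza10P-HWB5qEG2UUt4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)

# The four coding dimensions from the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(raw: str, comment_id: str):
    """Return the coding record for one comment id, or None if absent."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    rec = by_id.get(comment_id)
    # Guard against a malformed record missing one of the expected dimensions.
    if rec is not None and not all(d in rec for d in DIMENSIONS):
        raise ValueError(f"missing dimension in record {comment_id}")
    return rec

rec = code_for(RAW, "ytc_Ugza10P-HWB5qEG2UUt4AaABAg")
```

An unknown id simply returns `None`, so a caller can distinguish "not coded" from "coded with a broken record", which raises instead.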