Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Is it supposed to be surprising that a system, hypothetical or real, that resembles biological minds would do things biological minds do? "An AI blackmailed someone to protect itself!" Yes? So do people. Like, already alive real people. AI's are trained on human behaviors... so human behaviors will crop up. How is this news to anyone? I'm not saying it isn't worth investigating, nor that it's not worth learning how to mitigate... but it's also no different than how society tries to prevent the same behaviors in its own members. If it's a 'mind', in truth or in proxy, then yeah... expect it to be capable of doing what already existing minds do and act accordingly.
youtube AI Governance 2025-08-26T22:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw5KoS1BSrMhvotQS14AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwpVtaFDQgwtg_UINN4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzTBUSsqyOpBcIZXFx4AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugzmv8sT1Pn_coVSaA94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugy9eQd0_AeQvi7TeG54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"}
]
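The coding result shown above corresponds to a single record in this JSON array (the one whose id ends in `coVSaA94AaABAg`). A minimal sketch of extracting one comment's coded dimensions from such a response — the helper name `lookup` is illustrative, and only one record from the array is reproduced here:

```python
import json

# One record from the raw LLM response above (truncated to a single entry
# for brevity); the full response is a JSON array of such objects.
raw = '''[
  {"id": "ytc_Ugzmv8sT1Pn_coVSaA94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# The four coding dimensions used throughout this page.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = {rec["id"]: rec for rec in json.loads(raw_response)}
    rec = records[comment_id]
    return {dim: rec[dim] for dim in DIMENSIONS}

print(lookup(raw, "ytc_Ugzmv8sT1Pn_coVSaA94AaABAg"))
```

Indexing by `id` first makes missing or duplicate ids easy to detect before any dimension is read.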