Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I tend to think the AI eventuality described in Dan Simmons' Hyperion series is most likely. Killing all humans doesn't really do anything helpful for AI.
youtube AI Governance 2025-10-20T14:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgySKBOAjZloZe6pW5Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxVntyOVAu4MZMrAJN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx6DoxeeBBDdDc_aGF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyH1S0uCeUqpw9tolt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyaIeWeiOUcfaz15C14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzbUzIYeanHw25uTcJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxmpET2uCBo1vVrZvN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwtWZAKoEeZLcYdo6x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugx5jo7Qrce8u1UfNEh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzgmtSHpBxIqNmxb0x4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]
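The raw response is a JSON array of per-comment codings, so matching a displayed coding back to its entry is a simple parse-and-filter. A minimal sketch (the field names and values come from the JSON above; the lookup logic itself is an illustration, not part of the tool):

```python
import json

# Raw LLM response as shown above: one coding object per comment,
# each with four dimensions plus the comment id.
raw = """[
  {"id": "ytc_UgySKBOAjZloZe6pW5Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxVntyOVAu4MZMrAJN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx6DoxeeBBDdDc_aGF4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyH1S0uCeUqpw9tolt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyaIeWeiOUcfaz15C14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzbUzIYeanHw25uTcJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxmpET2uCBo1vVrZvN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwtWZAKoEeZLcYdo6x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugx5jo7Qrce8u1UfNEh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzgmtSHpBxIqNmxb0x4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"}
]"""

codings = json.loads(raw)

# Index by comment id for direct lookup.
by_id = {c["id"]: c for c in codings}

# The coding shown above (ai_itself / consequentialist / none / resignation)
# uniquely identifies one entry in this batch:
match = [c for c in codings
         if c["responsibility"] == "ai_itself" and c["emotion"] == "resignation"]
print(match[0]["id"])  # ytc_UgxmpET2uCBo1vVrZvN4AaABAg
```

Filtering on two dimensions is enough here only because the combination happens to be unique in this batch; in general the comment id is the reliable key.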