Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can't we have a scenario wherein the AI decides to help humanity intead of deciding to destroy humanity? 5:00 is likely never to happen, humans can't hold back AI. A rather more likely outcome of superintelligent AI is that it will decide to help humans on its own volition. Even genetically. Without having to obey any human
Source: YouTube, AI Governance, 2025-08-14T20:3…
Coding Result
Dimension        Value
---------------  ---------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw6zQ3tbOI2Y04Cbnx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyxSQz13FMxlaATM94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgydsWrUaghYf1ElErt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzoKvibx8VavjvHGsd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwguGzCfJs4KwjWZKJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxC7SL1jgHJUwJMkpl4AaABAg", "responsibility": "society", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzpqZfwTr4Ya5Z10hN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwVHGVKkjc6axcbzI14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugx9VMoa4XEQAEZpGcl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzlEhDUifLS8lfcSlB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
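The raw LLM response is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and checked in Python (the allowed value sets below are only those observed in this batch, not necessarily the full codebook, and the helper names are hypothetical):

```python
import json

# One record from the raw response above, in the same shape as the full batch.
raw = '''[
  {"id": "ytc_UgwguGzCfJs4KwjWZKJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

# Value sets observed in this batch (assumption: the real codebook may define more).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "society", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "approval", "mixed", "resignation"},
}

records = json.loads(raw)

# Index records by comment id for the per-comment inspection shown above.
by_id = {r["id"]: r for r in records}

# Reject any record whose dimension value falls outside the observed sets.
for r in records:
    for dim, allowed in ALLOWED.items():
        assert r[dim] in allowed, f"unexpected {dim}={r[dim]!r} for {r['id']}"

print(by_id["ytc_UgwguGzCfJs4KwjWZKJ4AaABAg"]["emotion"])  # prints: approval
```

Looking the record up by id is what lets the per-comment view above pair one comment with its row from the batch response.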