Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Wolram: you seem to only want to accept the idea of extinction of humanity once you can see how the AI systems would ever set out to do that. It's almost like you feel the need to understand how it would happen, and if you can't see the path, then the path may not exist and you are not as worried. However, if you take a step back and consider that AI will be developed until it can do anything a human can do, and even better, then it is only logical that its (sub)goals could be as quirky as some goals that humans have. And then, if you consider that these AI are eventually vastly more intelligent, you can deduce that some of them will be extremely dangerous to humans. You don't have to understand the technical details beforehand to understand the risk.
YouTube AI Governance 2024-12-01T20:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwhetJyaa7zaqwDeDN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugyk-OKW3TGozQB_eBp4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzX7RB92cDYXHhwK_x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwKpMq6JhlIFF28Tex4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2JBNpRY8BevvEIeN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyrO0-H4ZqH4OnPqH94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxzMsmvWvck0BqRPNx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyeUBmE_5VG4ux2gSp4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz2FpwtTbLpyQH34Kl4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxIY763xz6KMKWDgDl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
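The raw response is a JSON array with one coding record per comment; the "Coding Result" table above is the record whose id matches the displayed comment. A minimal sketch of that lookup (the `code_for` helper is hypothetical, and the excerpt below uses two records from the response above; it assumes the model output parses as plain JSON):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment coding records.
raw = '''[
  {"id": "ytc_UgxIY763xz6KMKWDgDl4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwhetJyaa7zaqwDeDN4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]'''

def code_for(comment_id, response):
    """Return the coding record matching a comment id, or None if absent."""
    return next((r for r in json.loads(response) if r["id"] == comment_id), None)

record = code_for("ytc_UgxIY763xz6KMKWDgDl4AaABAg", raw)
print(record["emotion"])  # fear
```

Records whose id does not appear in the response simply come back as `None`, which is one way a pipeline can flag comments the model skipped.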