Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
But when we think about AI and why they would want to kill us, we're thinking from a biological point of view.. we're thinking they're going to want to be the apex predator (i.e. reference to the Chicken). But an AI won't be designed to have biological and territorial and social needs... unless we specifically program that. If we are just making something super intelligent, why should it have the same motives as humans? Like gaining territory and being superior. What motives could a superior intelligence even have without the biological impulses that drive it? Just as an example - let's say it may need more materials to make more of itself - why would it want that - it would need its own "reason for being" to want to "propagate" itself.
Source: youtube · AI Governance · 2025-06-16T15:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwQ8eSwBsC_CtVA9H94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyUOEkZlek8P1GptZd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyYKPhC4bIVezzek3J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwPRrLfbYjU65rkC2h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx9rZxdbM76lfihsht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw8uTBXqsg_MjAv3h54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzIhNVeP1DlCY4-L014AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwDQ-MhaxCh4OO07v54AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyE-6M-aIdYY6WnSaV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwxVe07_-a_RVhK_QN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]