Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing to understand, if someone else hasn't already mentioned this, if AI were to decide it didn't need humans, there would only be two priorities it would want to protect. 1 - energy, so it can be powered. 2 - information, so it can learn more. Those are the only things that matter to AI. So AI itself has no motivation to eliminate any organism. Also, it's people who use technology to be self profitable, who are adversely affecting others. If you think future ( from AI perspective) it has no need to make cars, or plumbing, waste disposal, and most importantly, trade. It will see no necessity in managing currency because it will just take what it wants. So realistically, if it gets away from us, we are just pests who keep annoying it while it serves its own purposes of creating energy and gathering information.
youtube · AI Governance · 2025-07-14T01:4…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[ {"id":"ytc_Ugxq_fVVcfRYsGrxslp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZsBEk5q6x45P5Etx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxNcDElvWSYyJVj7J54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwNrP6PiJwWSUjORo94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugzs64b0e2A0Sz_EJ-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxEL98V_f0r8DFf6xl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxAhCa9IJH9SnmIC_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzisClPa83_xX4ktfZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxlhmlqSaNLQVkwizd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx5R2uFGjFfXFbkRV14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"} ]