Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For the AI to be dangerous in itself, it doesn't need "consciousness", just to be better at general problem solving (which maybe requires "consciousness", but that's difficult to tell without knowing what consciousness is). Since a badly aligned AGI is more likely to be an existential risk, and there is not a lot of money and resources going into ensuring it will be safe, I think it's as much, or maybe more, important to talk about, even if it's more difficult to predict (and it being more difficult to predict makes things more alarming, not less). Killer AGI bots can start a war and kill millions or even billions of people, but they probably aren't going to destroy the world (and if the world ends up in a nuclear war, it most likely would have happened without them anyway). Even with a nuclear war, the human species can survive, especially if we can colonize Mars by then. This is not the case if we make AGI and we don't align it with our preferences correctly.
youtube 2018-04-04T14:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwAVOlBOYPdgvUieYp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_xOqKVmTHf7jLsYt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwpK64q2_7_JOMZKpx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxzTxtU54j9cESInjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzcOxYI7sFR_y4Ej0F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzv0TpgMPYmGiGU0Sp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzsdlmX4Rwu3aOg-m54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw0KWaBcbmDd6lLKp14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugxd3TYVS15lQB10P9F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxFgi8SXdWzJdGhfaN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}
]
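A minimal sketch of how such a batch response can be mapped back to individual comments: the raw output is a JSON array with one object per comment id, so indexing it by `id` recovers the coded dimensions shown in the table above. This assumes plain Python with the standard `json` module; the ids and field names are taken from the response itself.

```python
import json

# Raw batch response from the model: a JSON array in which each object
# codes one comment along the four dimensions of the scheme
# (responsibility, reasoning, policy, emotion). Two entries shown here.
raw = """[
  {"id": "ytc_Ugxd3TYVS15lQB10P9F4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwAVOlBOYPdgvUieYp4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"}
]"""

# Index the array by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment inspected above.
row = codings["ytc_Ugxd3TYVS15lQB10P9F4AaABAg"]
print(row["responsibility"], row["policy"])  # ai_itself regulate
```

The same lookup works for any id in the batch; a `KeyError` on a missing id is a quick signal that the model dropped a comment from its response.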