Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You still don't get it. You are assuming that AI/AGI will be an intelligence that is an equal to you — something that you can reason with, in the same fashion that you reason with other humans (I might point out, honestly/realistically, we can't even get fellow humans on the same page -- without pointing guns at each other). AI/AGI will become much more intelligent than us, will have it's own wants, needs, agenda, etc. You will have no more control over it, then you have over another human being... another super intelligent human... why would it share the world's resources with humanity? Are we sharing the worlds resources with any less intelligent species on this planet? NO... we don't care about any other species, yet we expect that this new super intelligent "species" to care about our needs/wants... especially when it won't need us in the long run.
youtube AI Governance 2025-10-15T13:1… ♥ 8
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzVMycQ_q4C0IHmFSF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyS9hVoezf_CTiXDp94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyFJcVKYVYUlE8lRMJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwri8NiUTaG35DUDIB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz_HaMGkkONKFXgdfd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw4fegkhpEZ3ufwAPJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwwrc0koYUHVT_Zv414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxmY-SpaVPD3MiBGQR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgySnjkGV4TD_4SA1RV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyE8PWRjmF_Gt9BXTV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
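Because the model codes comments in batches, the coding shown for a single comment must be looked up by its `id` inside the raw JSON array. A minimal sketch of that lookup (the comment id below is the one for this record's coding, taken from the raw response above; the abridged `raw` string is illustrative, not the full batch):

```python
import json

# Abridged raw LLM response: a JSON array with one object per coded comment.
raw = """[
  {"id": "ytc_UgyFJcVKYVYUlE8lRMJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz_HaMGkkONKFXgdfd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Index the batch by comment id so one comment's coding can be retrieved.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_UgyFJcVKYVYUlE8lRMJ4AaABAg"]
print(row["responsibility"], row["emotion"])  # -> ai_itself fear
```

Comparing the retrieved dimensions against the values in the Coding Result table is a quick sanity check that the stored coding came from the matching entry in the batch response.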