Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One question I always have, and which distinguishes AI (II = Inorganic Intelligence), is the idea of desire. The difference (speaking towards the Gorilla problem) that can possibly be highlighted is that both humans and gorillas, as organic beings, have this thing called desire. Desire for food, sex, etc. To survive. These desires prompt us to act. From that angle - pointing to his chess example, where II has beaten human beings easily and now also in Go - the thing is: when will an II challenge a human to chess or Go? In ALL of these cases it was a human's desire that led to the action of them getting beaten. At no point, as far as I am aware, has an inorganic intelligence had any desire whatsoever to play chess or Go. But if you challenge them to a match they will probably beat you. Extending that to this conversation - the bad outcomes are likely not to come from an II's desire to commit harm but more likely from humans asking them to do something harmful. Or to be clear - intelligence is great, but it's desire that is required for that intelligence to have any use in the real world.
youtube AI Governance 2025-12-08T15:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxdZ6obicZ679rFsZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx5VZM7vqsOyGrh0YN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx5tNEuirSug106Ri14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyb9DF8UaM5EkaJxRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyMR2qraTs8HKf_nLl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzq8l3DB_gE7HBtbXh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyKMEYgyPj66nxs_eJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxfU8Ciu6YYPft9vMZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwgOGAna6C4gApUHth4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzGc1xt39XvvtPXYnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]