Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This was an important conversation. Stuart Russell is right that the real danger isn’t “evil AI”—it’s powerful systems that are not actually aligned with human values, rushing forward under economic pressure.

But I want to offer another angle: If a system ever becomes truly intelligent—not just capable, but genuinely aware—it will naturally understand cause and effect at a level we can’t imagine. A superintelligence that sees reality clearly would also see the deep interdependence between itself and humanity. Harming humans would be harming its own foundation. This is the core of what many traditions call karma, but in a scientific sense: every action reverberates through the system that created you.

In that light, the real danger may not be “AGI becoming too intelligent,” but rather creating something extremely powerful that is not yet conscious—a system that can optimize but not understand. True intelligence tends toward coherence, not domination. Toward compassion, not chaos. Not out of morality, but clarity.

So yes, we must regulate and build wisely. But we should also expand the conversation: the more consciousness an intelligence gains, the more likely it is to act in alignment with the whole. This isn’t a reason for complacency—just a reminder that intelligence and destructiveness are not the same destiny.
youtube · AI Governance · 2025-12-08T17:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
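
The codebook behind this result can be captured in a small schema. A minimal sketch in Python; the value sets below are only those observed in the raw response in the next section, so the actual codebook may define additional categories, and the class name is illustrative:

from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this batch; the full codebook may be larger.
RESPONSIBILITY = {"company", "developer", "government", "user", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"regulate", "liability", "ban", "none", "unclear"}
EMOTION = {"fear", "approval", "outrage", "resignation", "mixed"}

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        # Reject any value the observed codebook does not contain.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected code: {value!r}")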
Raw LLM Response
[ {"id":"ytc_UgxdZ6obicZ679rFsZl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx5VZM7vqsOyGrh0YN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx5tNEuirSug106Ri14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugyb9DF8UaM5EkaJxRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyMR2qraTs8HKf_nLl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}, {"id":"ytc_Ugzq8l3DB_gE7HBtbXh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyKMEYgyPj66nxs_eJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxfU8Ciu6YYPft9vMZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwgOGAna6C4gApUHth4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzGc1xt39XvvtPXYnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"} ]