Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good vs evil AI. We're just projecting. AI doesn’t think in morality. It learns in goals, outcomes, systems, patterns. “Evil” AIs don’t emerge they’re designed or trained into it by bad inputs and goals.
youtube AI Governance 2025-04-30T11:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxAcpu8p_-RTTQmDmZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQ1u70rKtl9Pm3CEl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwN6fPZiHNjjByukHF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw4kqC7ayMeL7ATP_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwarptSG5E2rRe6sIB4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw-lwNeKXcWkPKEVbd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw9L3DmxEWFTc2NaTB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx-qal0V5YaeCj722Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgywAzkUzlD7UPuBwJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwbBxlA6sByBwgNtEh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
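The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and validated in Python; the allowed value sets below are inferred only from the codes visible on this page, not from any documented codebook:

```python
import json

# Allowed values per dimension, inferred from the codes shown above
# (an assumption -- the real codebook may permit more values).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "mixed",
                "approval", "resignation"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary values."""
    codes = json.loads(raw)
    for code in codes:
        for dim, allowed in ALLOWED.items():
            value = code.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"{code.get('id')}: unexpected {dim} value {value!r}"
                )
    return codes

# Usage with a single hypothetical coded comment:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
codes = parse_llm_response(raw)
print(codes[0]["responsibility"])  # → developer
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the coding scheme, so it is caught before the code is stored.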