Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t understand why we keep making AI smarter, it’s like we’re trying to start an AI takeover, everyone says “oh but that’s just in the movies,” but the purpose of the movies is to warn against what caused those events, they’re not saying it won’t happen
youtube AI Responsibility 2025-07-30T06:5…
Coding Result
Responsibility: developer
Reasoning: consequentialist
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugyx6yUsqBSjZsjAE3V4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkmIUxzyBPrJAcfPR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxAOAm9ze-Cx1g0UEd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz1fSf5upeFsHyP8sN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwCiQOp1Qja78u2Rn94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxlp3hcz7M5SOPERzp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwyrbtwDaBRmXcO0kx4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwLL3rigWIc3DRuSol4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgzCm1HSnhvDufc8Ulh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgzSk4woeTgol0RppUF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
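The raw response above is a JSON array with one record per comment id. A minimal sketch of how such a response could be parsed and validated is shown below; the allowed values per dimension are inferred only from the responses visible here, and the full codebook, fallback policy, and function name are assumptions, not the project's actual implementation:

```python
import json

# Allowed values per dimension, inferred from the responses shown here;
# the project's actual codebook may define additional labels (assumption).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response into validated per-comment codes."""
    records = json.loads(raw)
    coded = []
    for rec in records:
        comment_id = rec.get("id")
        if not comment_id:
            continue  # skip records that lack a comment id
        row = {"id": comment_id}
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            # fall back to "unclear" when the model emits an unknown label
            # (a hypothetical choice; the real pipeline may differ)
            row[dim] = value if value in allowed else "unclear"
        coded.append(row)
    return coded

raw = ('[{"id":"ytc_Ugz1fSf5upeFsHyP8sN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_coding_response(raw)[0]["responsibility"])  # developer
```

Validating against a fixed label set catches the common failure mode where the model invents a category outside the codebook.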