Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Question for Dr. Yampolskiy. Could someone like you create an Ai that poisons the well of data to prevent SGI?
YouTube · AI Governance · 2025-09-04T16:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugw8J27o7KVbdXTib5l4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgyJdLk8t4V-ysrL_NN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugzu72sJzDiSurUyxDN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugx_wc7fR9LMHF7P5_Z4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugz-p9Smv3xiYTev_c14AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugy9qxcfFbb5wTH0pDZ4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugzk_rF0Gn1BoISWpJF4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzzFDAhnLOTeIZFqlF4AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugy1KduyT3G0COHidlN4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgyuXRJovZUiuGkUeUh4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "indifference"}
]
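A minimal sketch of how a raw response like the one above can be parsed and indexed by comment id to recover a single comment's codes. The variable names and the lookup id are illustrative; the dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) follow the coding result shown above.

```python
import json

# Hypothetical example: a raw LLM coding response is a JSON array of
# per-comment code objects, one per comment id.
raw_response = """
[
  {"id": "ytc_UgyJdLk8t4V-ysrL_NN4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw8J27o7KVbdXTib5l4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "resignation"}
]
"""

# Index the array by comment id for O(1) lookup of any coded comment.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

coded = codes_by_id["ytc_UgyJdLk8t4V-ysrL_NN4AaABAg"]
print(coded["responsibility"], coded["emotion"])
```

Indexing by id also makes it easy to spot ids the model dropped or duplicated when reconciling a batch against the comments that were sent.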