Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AIs being coded to never fail to the point where if they cant accurately give you information they will hallucinate wrong answers instead of admitting that they dont know is scary. AIs should be allowed to fail, if an AI is given a task where the options are failure and extremely immoral actions, and the ai picks the latter, that is scary. it is not human, and it is a burning red flag for our future
Source: youtube · AI Harm Incident · 2025-09-12T15:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugx7peCiYqsKd5iLgwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugw-qM5gpLfhRGroABZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzICGVulu-hmSt4hil4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyEvDzZH-dSj_c75SF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgweV37zlWIbfirSiNl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzIExna1X1GstN1FCJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgymmKBCpYQ01ROPqq94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyptdZZXV6AuwL5Cox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzmVKGkYjA4yLUXcBF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyHt3gzhfNF2E9nxeN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"} ]