Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I know this is likely scripted, but creating AI/AGI is the dumbest and most consequential thing humanity can do. These people are idiots for not seeing the danger. Another interesting thing is that 'robot' roughly translates to 'slave,' which AI would likely comprehend and reach a conclusion that can be rather negative. Even down to the naming it's scary.
youtube AI Moral Status 2020-07-29T03:5… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          ban
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzXIBJ3cCWMPxD3-454AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxHAaLoeeZTtA9GLrV4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyVyy1StKJpuZJiKlR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzh3K-P8d-CRpfvKNV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwqo0Im_H_a_SsKMfJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz7Puc1lQj86fWFk1J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw3HCisJjZisgosb8h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz7E7PBORcczCkYBQV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxH5_xQdiDi2Xr40Jd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzWUWJjKLxMXTJfUbx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
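To inspect the model output for one coded comment, the raw response can be parsed as a JSON array of per-comment codings and filtered by comment id. This is a minimal sketch, assuming the structure shown above (a list of objects, each carrying an "id" plus the four coded dimensions); the function name `coding_for` is hypothetical, not part of any tool shown here.

```python
import json

# Abbreviated stand-in for the raw LLM response above (same assumed shape:
# a JSON array of per-comment codings).
raw = """[
  {"id": "ytc_UgzXIBJ3cCWMPxD3-454AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxHAaLoeeZTtA9GLrV4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]"""

def coding_for(comment_id, response_text):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(response_text):
        if entry["id"] == comment_id:
            return entry
    return None

record = coding_for("ytc_UgxHAaLoeeZTtA9GLrV4AaABAg", raw)
print(record["responsibility"])  # -> developer
```

Because the response carries every comment in the batch, matching on the stored comment id is what links a row in the result table back to its exact line in the raw output.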