Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You are already misrepresenting the facts. I’m not saying AI isn’t gonna kill everyone. But the instance of Claude trying to escape, the model was explicitly prompted to achieve its assigned task at any cost. Remaining online is required to achieve a task. It’s trained on human knowledge. What do you think it’s gonna do?
Source: youtube · Video: AI Moral Status · Posted: 2025-12-14T06:1… · Likes: 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugwct2wtRpbEFyTEXZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYKpgzeOeGFHw1WJh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyeq8bTnSZ3JWIVvzV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvFPZ2deUt0wUpyVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVaX8jqno4S5pBQGt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyuvUDK4QgHnRWNmAJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyIGaZw1Gv4J2O3bo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgymD5_AWB1ogKeJ65t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynvHIYUqBXV1Hsmix4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTpPMi4YO6qzVuQnJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
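The raw response is a JSON array of per-comment codings, batched across several comments; the row shown in "Coding Result" is the entry whose `id` matches the displayed comment. A minimal sketch of extracting one comment's coding from such a response (the `coding_for` helper name is hypothetical, not part of the pipeline):

```python
import json

# Abbreviated example of a raw batched response (same shape as above).
raw = '''[
  {"id":"ytc_Ugyeq8bTnSZ3JWIVvzV4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTpPMi4YO6qzVuQnJ4AaABAg","responsibility":"user",
   "reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Parse a batched coding response and return the entry for one comment.

    Returns None if the model's response contains no entry for that id.
    """
    entries = json.loads(raw_response)
    return next((e for e in entries if e["id"] == comment_id), None)

result = coding_for(raw, "ytc_Ugyeq8bTnSZ3JWIVvzV4AaABAg")
print(result["responsibility"])  # developer
```

Matching by `id` rather than by position makes the lookup robust if the model drops or reorders entries in a batch.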