Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
None of the AI bros care if people live or die. They have delusions of grandeur where they feel like they can think of our civilization as a problem to solve, and decide what damage is "acceptable". I feel that they will have to be stopped by any means necessary before all of this goes too far.
YouTube · AI Governance · 2026-04-23T12:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzN1RVez7mDDMSfw9J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz1cBQDdpmkdmnGRGh4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxY8pM83m2ScZt04OF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz6wxwdzr5anc-b-gx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwB3MiJWMxSm0ZsKNR4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzJyQRmvx_WqLPeSu94AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyZFoSpPSlZWbk0GO14AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz2KuCwVrmZ-k6yvVZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugys59ifvGa9hYV0Uwl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyE9MklwP7rvByUu7h4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"}
]
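The raw response is a JSON array with one coding object per comment id. A minimal sketch of how such a batch response might be parsed and validated before storage; the allowed values in SCHEMA are inferred only from the responses shown on this page, not from a confirmed codebook:

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# responses displayed above; the full codebook may define more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of codings) into a dict keyed
    by comment id, silently dropping entries with out-of-schema values."""
    by_id = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if cid is None:
            continue
        # Keep the entry only if every dimension holds an allowed value.
        if all(entry.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            by_id[cid] = {dim: entry[dim] for dim in SCHEMA}
    return by_id

# Usage: parse one coding taken from the raw response above.
raw = ('[{"id":"ytc_Ugz6wxwdzr5anc-b-gx4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugz6wxwdzr5anc-b-gx4AaABAg"]["policy"])  # → regulate
```

Validating against a fixed value set at ingest time catches the common failure mode of batch coding, where the model invents a label outside the codebook for one item in the array.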