Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that one possible path of mitigation is to have some companies train AI agents on data selected to predict, prevent, and overcome the threats of an AGI. Every human and AI danger is related to the objectives and goals of the agent. Humans won't be capable of understanding or changing the path, but AI could.
youtube AI Governance 2025-09-17T01:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxBUz7sycEw_rhWcgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxqTfm9z0ADHfX7x3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"skepticism"},
  {"id":"ytc_UgyEVoQUd12spRlmkUR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwtzWhvDDRUxa3PdbF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx4j5Q2X_ZnIJyYAxl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxBViWYHdOVp7R-6O94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyTpkR2a-uTWTpmS5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwU2zV4fkUD8ldnoep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy6sZOOAysLfZvxlLh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzcFAy6tZQz5qFADUJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
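The raw response is a JSON array of per-comment codes, one object per comment with a fixed set of dimensions. A minimal sketch of how such output might be validated before ingestion, assuming the value sets below are inferred only from the responses shown here (the actual codebook may allow more labels):

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (assumption: the real codebook may define additional labels).
SCHEMA = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "skepticism", "mixed", "outrage", "approval", "fear"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only records whose
    dimension values all fall inside the allowed sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: the record that produced the coding result shown above.
raw = ('[{"id":"ytc_UgwU2zV4fkUD8ldnoep4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
coded = parse_llm_response(raw)
print(coded[0]["policy"])  # regulate
```

Validating before ingestion catches the common failure mode where the model emits a label outside the codebook; such records are dropped rather than silently stored.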