Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Most NP hard complexity problems can be approximated for practical applications. The question now would be if the problem of making safe AI systems could also be approximated or the real issue is that that is not enough.
Source: youtube · AI Governance · 2025-09-05T20:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzC3xDDiVTS1teWTpp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzZAovwcJ-o_Qr9-jd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_Mf3Ke7o7fjuYZad4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyE49vvBljl92hFfXF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw59ynf1zrOUBZPMDd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwjrM2TzgLPg6X_7al4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7mbgBTGwdTKVC4qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRH5BKtJIgc2-xPOJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgwohdaZyLjU-vm1a8Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzf6Ik9RvnJHsvc21Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
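A raw response in this shape can be parsed and checked before the codes are stored. The sketch below is a minimal, hypothetical example: it loads the JSON array, validates each dimension against label vocabularies inferred from the values shown above (the `ALLOWED` sets and the `code_for` helper are assumptions, not part of the actual pipeline), and looks up the coding for one comment id.

```python
import json

# Two entries copied from the raw response above, for illustration.
RAW = '''[
  {"id":"ytc_UgzC3xDDiVTS1teWTpp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7mbgBTGwdTKVC4qN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Assumed label vocabularies, inferred only from the values visible
# in this response; the real codebook may define more labels.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "government"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "indifference", "mixed", "outrage"},
}

def code_for(comment_id, raw_json):
    """Return the validated coding dict for one comment id, or None."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            for dim, allowed in ALLOWED.items():
                if entry.get(dim) not in allowed:
                    raise ValueError(f"unexpected {dim} label: {entry.get(dim)!r}")
            return {dim: entry[dim] for dim in ALLOWED}
    return None

print(code_for("ytc_Ugy7mbgBTGwdTKVC4qN4AaABAg", RAW))
```

Validating against a closed label set at parse time catches malformed or hallucinated labels in the model output before they reach the coded dataset.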