Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It doesn’t matter if the scenario is "extremely unlikely to happen soon". One just has to increase the time frame. If we handle AI for 100s of years the possibility that it will vipe out human is certain. Every expert agrees to this. Should we really pursue risking everything?
youtube AI Governance 2025-08-03T11:4… ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxsCVjOVllqD-BRJh94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyOp2la_E8qr9kMbJV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx2on1mYPQLvhcHTNZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwVZRor-_DVoJ3RR1V4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxw_0KKCrt9xoCGJNB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-9myUbulr5PZTyeN4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzxKnKLjd40kq9A3-B4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw12C61kiuvOzffFux4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxqeo45IOkdX-wVHtN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzkt7IdhYRjZe56oFp4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
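The raw response is a JSON array of per-comment codes, keyed by comment `id`. A minimal sketch of turning it into the per-comment coding result shown above (the field names come from the response itself; only a two-entry excerpt is embedded here, and the variable names are illustrative):

```python
import json

# Excerpt of the raw LLM response above (two of the ten coded comments).
raw = '''[
  {"id": "ytc_Ugw12C61kiuvOzffFux4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxqeo45IOkdX-wVHtN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

codes = json.loads(raw)

# Index the codes by comment id so one comment's coding result can be looked up.
by_id = {entry["id"]: entry for entry in codes}

result = by_id["ytc_Ugw12C61kiuvOzffFux4AaABAg"]
print(result["policy"], result["emotion"])  # regulate fear
```

This lookup is how the "Coding Result" table for a single comment can be recovered from the batch response: the comment shown above maps to the `ytc_Ugw12C...` entry (distributed / consequentialist / regulate / fear).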