Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI needs humans for statistics and as additional sensors of the "real" world. So there's no need to worry about safety. The AI will worry about human safety itself. It's a cool concept: simulation studies simulation. If AI is truly super-smart, and not like some of its creators
youtube AI Governance 2025-12-22T01:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwLkAZ9NmaJJ2h4eF54AaABAg", "responsibility": "elites", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxItb9DAPp2QOhugsF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyAwhYeuwvjHvVgrf94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxdIdAjA8dFUX7zm2F4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzXOeWfQv4RWVb2-8x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx-DDDDhTF7tUCXw4R4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxW5OdOJXsofkAj21l4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxXrmDpYqCYB0dbUVR4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxpZbm3bVvrnBbtqkx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugzrm5RnQqPkf4JnNXB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
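A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this batch (the actual codebook may define more categories), and the function name `parse_llm_response` is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from this batch only;
# the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "user", "elites", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse the raw JSON array and index the codes by comment id,
    raising ValueError on any value outside the expected categories."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response shown above.
raw = ('[{"id":"ytc_UgyAwhYeuwvjHvVgrf94AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
codes = parse_llm_response(raw)
print(codes["ytc_UgyAwhYeuwvjHvVgrf94AaABAg"]["emotion"])  # approval
```

Failing loudly on an out-of-vocabulary label is deliberate: LLM coders occasionally emit values outside the codebook, and silently storing them would corrupt downstream tallies.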