Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Pretty good intro on the topic, but it would've been better if you addressed the mechanisms by which artificial superintelligence can be dangerous to us, and also talked about the AI safety researchers and what they do to find solutions on the alignment problem. On the topic of researchers, there's an excellent youtube channel about this, Robert Miles AI safety.
youtube AI Governance 2025-08-30T15:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzo949ok97ZEANduIh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxJqb8TAy9zWTyl30x4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugw4rmWcbGSfePPe8Jp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugycci8huU0h8cabspN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwLaGaF0q4cevec_g54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"}
]
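The raw response is a JSON array in which each record carries the comment id plus the four coding dimensions, so mapping a coded comment back to its row is a simple lookup. A minimal sketch of that lookup, assuming the response parses as valid JSON; the helper name `coding_for` is hypothetical, and the string below reproduces only two of the five records for brevity:

```python
import json

# Excerpt of the raw LLM response shown above (two of the five records).
raw_response = '''[
  {"id": "ytc_Ugzo949ok97ZEANduIh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwLaGaF0q4cevec_g54AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"}
]'''

# The four coding dimensions reported in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(response_text, comment_id):
    """Parse a batch response and return the coding for one comment id."""
    records = json.loads(response_text)
    by_id = {r["id"]: r for r in records}
    record = by_id[comment_id]  # raises KeyError if the id was not coded
    # Keep only the coding dimensions, dropping the id field.
    return {dim: record[dim] for dim in DIMENSIONS}

print(coding_for(raw_response, "ytc_UgwLaGaF0q4cevec_g54AaABAg"))
# -> {'responsibility': 'unclear', 'reasoning': 'unclear',
#     'policy': 'industry_self', 'emotion': 'approval'}
```

The printed coding matches the table above for the comment shown, which is one way to spot-check that the displayed values were extracted from the raw output correctly.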