Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good thing mentioning the alignment problem with Artificial General Intelligence. I see a lot of people brushing that off because current AI systems don't pose existential threats, but we have little evidence that we can't build one that does within our lifetimes. Also, there can't be enough emphasis on how the default for these systems is misalignment; unless it is _specifically programmed to care about humans and share our values,_ an AGI won't mind if going about its goals means very bad things for humans. Also, the alignment problem isn't just a potential future problem, it happens in all current AI systems. For ChatGPT, it just means being a pathological liar (cuz making up some bullshit that sounds plausible is frequently easier than knowing the truth), but even if an AGI that wields total power over humanity is impossible, progress in the alignment problem has benefits.
youtube · AI Moral Status · 2023-08-26T19:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
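For reference, the record implied by this table can be modeled as below. This is a minimal Python sketch: the class name CodingResult is hypothetical, and the value sets in the comments are only those observed on this page, not necessarily the full coding vocabulary.

    from dataclasses import dataclass

    @dataclass
    class CodingResult:
        """One comment's coded dimensions, as displayed in the table above."""
        responsibility: str  # observed: "none", "developer", "ai_itself", "unclear"
        reasoning: str       # observed: "consequentialist", "unclear"
        policy: str          # observed: "regulate", "none", "unclear"
        emotion: str         # observed: "indifference", "mixed", "approval",
                             #           "fear", "resignation", "unclear"
        coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"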
Raw LLM Response
[{"id":"ytc_UgwT--5kh-XoylFh2054AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzZiyhGZ8dEIvx6MLx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxK3L4jh7XxilDJs8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxtYR1gj-Q37kdLr1N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxlpOsqxdIdO9Lgy_x4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwN2_IgGvgwOeWZ4R54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwVKSYPSj0LxQKm3O54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxYm9BTWuiG1vtb-BR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzelfsvOoRcE3UTYqN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxIA4vv04TP2oYa8Y14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"})