Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't know how to make AI safe...because we don't know how to make human's safe. Sounds simplistic...but it is anything but. It's called alignment. It is actually the biggest question in all of history: What is a human being... and what does a human being ultimately want? What does 'safe' mean? The simple fact is...the vast majority of people simply have no idea what the answer is. And AI is 'trained' on information derived from the vast majority of human beings. It's no secret that people fk-up. Often enormously. So don't be surprised that AI will as well. There is a solution to this predicament...right in front of our faces in fact...but it is anything but simple.
youtube AI Governance 2025-09-04T18:2…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgymQejqPY0vbczVQth4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxS86-05T9bvH3peid4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwmlZbCzn2ft0Zrx2p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxAqyEg4nVc8Xs-EAp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8y7g_dCnDcJFTvel4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyx6Q4hR4WAbLIT_zh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxw6sauUEBGK-vN9M54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwFyGo1Wo7ZNdIQKcx4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz7Uu1cdJD8ppR_sv54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz0-s_OeHg-zFymm7Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
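The raw response is a JSON array with one record per comment, keyed by comment id, with one value per coding dimension. The following is a minimal sketch of how such a response could be parsed and validated before use; the `ALLOWED` value sets are inferred only from the responses shown here (a full codebook may define more), and `parse_llm_response` is a hypothetical helper name, not part of any tool shown above.

```python
import json

# Allowed values per coding dimension (assumption: inferred from the
# responses on this page; the actual codebook may differ).
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "none", "distributed",
                       "government", "company"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"mixed", "fear", "indifference", "resignation",
                "outrage", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the allowed set for its dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Two records taken verbatim from the response above.
raw = '''[
  {"id":"ytc_UgymQejqPY0vbczVQth4AaABAg","responsibility":"unclear",
   "reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxw6sauUEBGK-vN9M54AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

coded = parse_llm_response(raw)
print(coded["ytc_UgymQejqPY0vbczVQth4AaABAg"]["emotion"])  # mixed
```

Validating before storing is what makes a coding run auditable: a malformed or out-of-vocabulary value fails loudly at parse time rather than silently appearing as a new category downstream.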