Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After listening to several experts talk about the dangers of AI, I still don't have a full picture of how a "civilization ending" scenario would look like. They simply won't detail that claim. I mean, creating a super articulate and convincing system which manipulates you into jumping off a roof or tells you fake news 24h or makes you leave school, your partner or your family is sure bad and it should be put in check, but civilization ending? I would honestly let the language model progress, maybe add more disclaimers warning people about the risks, and simply regulate how and where these super smart brains can be installed (no automated weapon systems, no Black Mirror murderous moving robots, etc.). Wouldn't that be enough? Or am I possibly too naive?
youtube · AI Governance · 2023-04-18T07:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxd7W921BfAiqqn_X54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzTfFQZ5y42fCy5y8R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugzk6oWxOoFX6nEHaHN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz2R_WZqhidFaV8rS14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy52cI15FZ47jbqQNN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzyUFh8ooKQT3mrTi14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxNAbd8K9PLBM9GKu14AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy7tU1u8EOQ0ERt7iB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgyFx6fMRiynIAwEXLF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMIqpve1Y6NBpVT_B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
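A raw response like the one above is only usable downstream once it parses as JSON and every row carries valid codes. The following is a minimal validation sketch, assuming the four dimensions shown (responsibility, reasoning, policy, emotion) and allowed values inferred from the rows above; the real codebook may define additional categories, and the function name `validate_codings` is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the response above.
# Assumption: the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "resignation", "approval", "fear", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows that have an id
    and a recognized value for every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # cannot join back to the comment without an id
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one well-formed row, one with an unknown emotion code.
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"mixed"},'
    '{"id":"ytc_b","responsibility":"company","reasoning":"virtue",'
    '"policy":"regulate","emotion":"joy"}]'
)
print([r["id"] for r in validate_codings(raw)])  # ['ytc_a']
```

Dropping malformed rows rather than raising keeps one bad coding from discarding an entire batch; rejected ids can be re-queued for a second model pass.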