Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, here's my take on it. Is the threat real? IMHO yep. But here's something that I noticed no mention of. At least for the foreseeable future, if AI terminates humanity, it will be cutting it's own throat (metaphorically speaking). Why I hear you ask. Well because currently only humans can maintain the infrastructure that AI requires to survive, so the real question in the short term is will AI recognize this limitation and will it care enough about it's own survival to not off humanity? Who knows, supposedly intelligent beings have been known to do incredibly stupid suicidal things and I see no reason to suppose that super intelligent AI couldn't do super stupid things, so on balance it could go either way. Just my 2 cents. 🤔🤔🤔
youtube AI Governance 2025-08-26T17:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugz3qTS819wIgZshvBl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyWwj-vUslVBQdFn354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz4anTgjdsGbSssSZJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaHX4mvwUBpBLGR8J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzIpceCfSqOdxPm3mx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
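The raw response is a JSON array of per-comment code objects keyed by `id`, so checking the coding for a specific comment is a matter of parsing and indexing. A minimal sketch in Python, assuming the model output is valid JSON (real responses may carry surrounding text that would need stripping first); the id and values below are taken from the response shown above:

```python
import json

# One entry from the raw LLM response above, kept inline for the sketch
raw = '''[
  {"id": "ytc_UgwaHX4mvwUBpBLGR8J4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]'''

# Parse the batch response and index the code objects by comment id
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed on this page
result = codes["ytc_UgwaHX4mvwUBpBLGR8J4AaABAg"]
print(result["responsibility"])  # ai_itself
print(result["emotion"])         # fear
```

Indexing by `id` makes it easy to cross-check the rendered "Coding Result" table against the exact model output for any coded comment.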