Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Eliezer has gotten a bit better at explaining things to the masses over time, but he still has some work to do. I think he needed to emphasize more forcefully how misaligned goals aren't compatible with human life. The sex and ice cream examples do show that training on a goal doesn't result in alignment, and that the way it is misaligned is not necessarily predictable. But I think he needed to go harder on the idea that when there is even a slight misalignment, if you give the AGI essentially infinite power (which an AGI is claimed to virtually have, at least over time, as it vastly increases its capabilities), then the maximally optimised outcome looks less and less like a place where humans exist. For example, the AGI might need to take over the entire surface of the earth to run maximal automated factories producing the most air fresheners for the air-freshener company that happened to fine-tune the model that went full AGI.
Source: youtube · AI Governance · 2025-10-25T11:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
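For reference, the four coded dimensions correspond to the fields of each row in the raw response below. Here is a minimal sketch of that record shape in Python; the class name is illustrative, and the example values listed in the comments are only those observed in this sample, not necessarily the full coding scheme.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One row of the raw LLM response. Field names come from the JSON
    below; the class itself is an illustration, not part of the pipeline."""
    id: str              # YouTube comment id, e.g. "ytc_UgxoKATJs_-p_pyisyd4AaABAg"
    responsibility: str  # observed: ai_itself, developer, company, user, distributed
    reasoning: str       # observed: consequentialist, deontological, virtue, mixed
    policy: str          # observed: unclear, ban, regulate, liability, industry_self
    emotion: str         # observed: fear, mixed, approval, outrage, resignation, indifference
```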
Raw LLM Response
[ {"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"} ]