Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The discussion is largely determined by the choice of words, right or wrong: Yudkowsky didn't object when the interviewer said that AI would be "programmed" (no, it is not; it's more like "bred"). The best analogy I can think of for the AI situation is this: It's as if chimpanzees in their lab were breeding humans ("Artificial Super Apes") and expecting to get the perfect servant.
youtube AI Governance 2026-03-16T12:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwK1w_gnBmM6l7zPEx4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgwhNrVNBgQRvmYTrRx4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugwo1KEPla2iHpXaHCx4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugwpi6fJgid2WwTZrmp4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_UgwM22TwhUi3D7qd_oB4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxnp09EoZmFliXHd1t4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugzyg-hn1iH8xz9tnxF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgwiMJ2hVYl-2AydnG14AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugyw7pTpKYqcGCBP7zZ4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",     "emotion": "outrage"},
  {"id": "ytc_Ugyh05UE72bpm-0ugcB4AaABAg", "responsibility": "developer", "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"}
]
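The batch response is a JSON array keyed by comment id, so per-comment codings like the Coding Result table above can be recovered with a simple lookup. A minimal sketch (field names and the example id are taken from the response above; the surrounding script is hypothetical):

```python
import json

# Excerpt of the raw LLM response shown above (one entry, verbatim fields).
raw = '''[
  {"id": "ytc_Ugyh05UE72bpm-0ugcB4AaABAg",
   "responsibility": "developer",
   "reasoning": "mixed",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

# Index the batch by comment id for O(1) lookup of any single coding.
codings = {item["id"]: item for item in json.loads(raw)}

coded = codings["ytc_Ugyh05UE72bpm-0ugcB4AaABAg"]
print(coded["responsibility"])  # developer
print(coded["emotion"])         # indifference
```

The same dictionary lookup generalizes to the full ten-entry array: each id maps to exactly one coding record, which is how a single comment's row (developer / mixed / unclear / indifference) is rendered from the batch output.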