Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm appalled by Ezra's denial of the threat and wishful thinking. It will only take one mistake or miscalculation to cause a catastrophe. Furthermore, AI is not "programmed" like traditional software. It is being taught how to learn on its own, from human-derived information and data. All humans do is feed it more information, letting it know whether what it derived was "good" or "incorrect". These aren't commands, and it is not like giving a dog a treat. What Eliezer is pointing out is that the result of the learning process is wildly different from what we hoped for, and in some cases threatening. This is not an industrial machine or tool. What we are creating is a digital life form.
youtube AI Governance 2025-10-21T02:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwMyh32CmdkyYd4T9h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzyMjkxAfxT6e0NDpF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8PDrcKhxHPFH7wHp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzFIM1AWbBki8LQo2Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxL5Ytx-_8kID63QbV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyKupnQIyCgQHUPUf94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLUsO4qF1V423BTcx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwlvxYcaKeuSqtTALl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwREmL4EJrbZp6hs314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
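When inspecting raw model output like the array above, a typical step is to parse it and check that every coded dimension falls within the expected label set before loading it into the results table. The sketch below is a minimal, hypothetical example of that check: the field names come from the response above, but the allowed-value sets are inferred only from the labels observed in this batch, not from any definitive codebook.

```python
import json

# Two records copied from the raw response above (truncated for brevity).
raw = '''[
  {"id":"ytc_UgyMkp_eHRRL0dDh44p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLUsO4qF1V423BTcx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''

# Label sets observed in this batch; the actual codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference", "unclear"},
}

def validate(records):
    """Keep only records whose coded dimensions all use allowed labels."""
    return [
        rec for rec in records
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items())
    ]

records = json.loads(raw)
valid = validate(records)
```

A record with a missing field or an out-of-vocabulary label (e.g. a hallucinated emotion) is silently dropped here; a production pipeline would more likely log it for manual review.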