Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's my issue with these so-called scenarios, first, why are we surprised? These things are being fed completely human data. Everything that's given to these llms are all 100% human generated. Why are we surprised that given all of the information it needed that it would act incorrectly? Everyone knows this is exactly what a human would do given the same exact scenario. Read the prompts. They are given a goal that they must achieve above all else. Then you give it things to be deceptive or to keep itself alive? Everyone would do the same thing guaranteed. I want to see them do this without any influence by the prompt or scenarios. Then maybe I'll be worried, but still, my main point is that we feed these things human generated data and information. Why should it be surprising that it acts human.
Source: youtube · AI Governance · 2025-08-26T15:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[ {"id":"ytc_UgwyRHSOX7vvh2Baoex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzkLmgCB0DUSJn5Stp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxmG3kAqEHo0rrkgbN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzek2PLzGl-nfJSPRV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyV5yehq2tBZrmzQmp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]