Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t think we’d be able to run and hide from these programs. In fact, when it does become smarter than us, why would it let us know beforehand? The most frightening thing is that there’s nothing we (the average person) can do about it. It’s already happening, and even if AI isn’t smarter than us, it’s already a program with a huge blast radius — especially if it gets into the wrong hands of a bad actor.
YouTube · AI Governance · 2023-07-07T15:5…
Coding Result
Dimension      | Value
Responsibility | distributed
Reasoning      | consequentialist
Policy         | none
Emotion        | fear
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwOf7FSslArYHYoTUR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwRbBGHrmCxmNzHuxF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyvTUHooI1FGqwS_bd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxQf1gGAYwcoWcP2jl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwBUriXOFH_Y6ld5OZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugzh0XTBNW6-D23G7NB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyjpeLtCCJgd0GeS9l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzK6WEVqcDC7mbJVVx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy3fFM1L9G-kRgusIV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy82SeXIRoaaJjifNd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
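The raw response is a JSON array covering a whole batch of comments, so a coded comment is recovered by parsing the array and indexing by id. A minimal sketch, assuming the pipeline stores the response as a plain JSON string; the two-record string below is an illustrative subset, and the id chosen matches the record whose values agree with the coding result shown above:

```python
import json

# Illustrative subset of the batched raw LLM response (assumed format:
# a JSON array of objects, one per comment id).
raw = (
    '[{"id":"ytc_UgyvTUHooI1FGqwS_bd4AaABAg","responsibility":"distributed",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"ytc_Ugzh0XTBNW6-D23G7NB4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
)

# Build an id -> record index so a single comment's coding can be looked up.
records = {r["id"]: r for r in json.loads(raw)}

# Pull the record corresponding to the comment inspected above.
coding = records["ytc_UgyvTUHooI1FGqwS_bd4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by id rather than by position keeps the lookup robust if the model returns records in a different order than the comments were submitted.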