Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of this framing treats AI as an independent intelligence, but what we’re actually deploying are systems embedded in authority, tooling, and institutions. The real risk isn’t ‘digital brains replacing humans’ — it’s poorly governed automation making irreversible state changes across systems without clear ownership or rollback.
youtube 2026-01-29T12:2…
Coding Result
Dimension       | Value
Responsibility  | distributed
Reasoning       | deontological
Policy          | regulate
Emotion         | fear
Coded at        | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzUspRVd5IR8UZu7Pt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwfEG_Lir_B9-tMlG94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg0Nly4tWsSGVT06R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8za67d0lK0w4343t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzj444gRH8BFIljD4p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw-J_VxWwUJ-e74SQh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwzP9uKuKP7_xzXJP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz5AYMrnlIYbaqQtn14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyp4eELJ6h0ROz-dbN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyQYdH5RQqiyA6Ymdd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
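A raw response like the one above can be parsed and checked before its labels are accepted into the coding table. The sketch below (Python; the helper name `validate_response` is ours, and the allowed label sets are inferred only from the values observed in this response, so the real coding scheme may include more) keeps records whose four dimensions all carry known labels and reports the rest.

```python
import json

# Allowed labels per dimension, inferred from the values observed in the
# raw response above. Assumption: the full coding scheme may define more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "indifference", "approval", "outrage", "mixed"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return only the records whose labels fall inside ALLOWED."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Collect every dimension whose label is missing or unknown.
        bad = [dim for dim, labels in ALLOWED.items() if rec.get(dim) not in labels]
        if bad:
            print(f"{rec.get('id')}: rejected, invalid labels for {bad}")
        else:
            valid.append(rec)
    return valid
```

Rejected records would then be queued for re-coding rather than silently dropped, so that an off-scheme label from the model never reaches the aggregated results.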