Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
These are just LLM’s right? So when AI’s want to self preserve and blackmail, aren’t they mostly just predicting the next best string of words or thought process to use? And since they’re being trained on all human knowledge and behavior, then it only makes sense that it would replicate what a human would do.
youtube AI Governance 2025-08-26T18:4… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgxjhqgUW1WmSy_L6ed4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg", "responsibility": "ai_itself",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "indifference"},
  {"id": "ytc_Ugx2ZzyG4tlS5Lf6qj54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_UgypVnkmiTYdJCwYBCl4AaABAg", "responsibility": "government",  "reasoning": "contractualist",   "policy": "regulate",  "emotion": "resignation"}
]
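The raw response is a JSON array keyed by comment id, so the coded dimensions for any one comment can be pulled out with a small lookup. A minimal sketch, assuming the response string has already been read from the page (the `raw` string below is shortened to the single entry that matches the comment shown above; the helper name `coding_for` is hypothetical, not part of the tool):

```python
import json

# Shortened raw LLM response: one entry from the array shown above.
raw = '''[
  {"id": "ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            # Drop the id itself; keep only the coding dimensions.
            return {k: v for k, v in entry.items() if k != "id"}
    return None

print(coding_for("ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg", raw))
# → {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'indifference'}
```

Looking the entry up by id rather than by position keeps the check robust if the model returns the batch in a different order than the comments were submitted.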