Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some evidence that LLMs are known liars: Through mechanistic interpretability, we've discovered perceptrons in deep AI models corresponding to the AI's "belief" on whether or not a statement is true. The researchers also discovered that AI will occasionally say things it believes to be false, because humans often say things that are false by accident. And the LLM's main goal in training is to sound human, not to tell the truth.
Source: YouTube · AI Moral Status · 2026-03-01T10:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugx8hfaHUCggsjT_4zB4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyhIg50_DNfRVmq3bx4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxndSc6dgVpbpiaPWx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwlu8UPj1jZhcTiEGJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwuwLfvSKJl5VaI1dR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzIYA5F2gbMA1FDN8x4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxBNP8fFroS-5PWlSR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyEF52qRBVMojkgjlB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwtp_b48UyS_SEy-op4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxczDZngdcZlHx81314AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
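The raw response above is a JSON array with one object per coded comment. A minimal sketch of inspecting it, assuming the response is available as a JSON string (the abbreviated array and comment id below are taken from the response shown above; this is an illustrative lookup, not part of the coding pipeline itself):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings.
raw_response = '''[
  {"id": "ytc_UgxndSc6dgVpbpiaPWx4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "mixed"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for the featured comment.
coding = codings["ytc_UgxndSc6dgVpbpiaPWx4AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # → ai_itself consequentialist
```

Indexing by `id` first makes it easy to cross-check any single comment's coding against the table above without scanning the whole array.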