Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
An even better strategy is to never query an LLM with the expectation that it'll give you a truthful response. Because a lot of the time, it won't. This isn't because the LLM is defective, or needs to improve somehow. LLMs don't know anything, including what is true or false. Lying and hallucinating are inherent to how they operate.
youtube AI Moral Status 2025-04-17T02:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgxHijLDho7e7Aw8jF94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgyZdxg1e7LuGPsnVu14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_UgxG6MXC4sMn4G6mDeJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgxXecUaSwJ5Ozft7AZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_UgyZWQwERDr2CSC1Rk54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgyC7MWb5U-_ZSgjj1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugw3q6m_a5SNLNpjnmZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"ytc_Ugw7cQJvKt5svBfE35l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_UgwDRW53H9guXyrbwPB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},{"id":"ytc_Ugww-kaLDPv8dhdXAxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}]
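Because the model's raw output is plain text, it should be validated before being treated as coding data. The sketch below, a hypothetical helper not part of this pipeline, parses a raw response like the one above and rejects rows whose labels fall outside the value sets seen in these codings (the `ALLOWED` sets are inferred from the examples here, not from any published codebook):

```python
import json
from collections import Counter

# Allowed labels per coding dimension, inferred from the codings shown above.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"indifference", "outrage", "approval"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response into coding rows, rejecting malformed output.

    LLM output is not guaranteed to be valid JSON or to use only the
    allowed labels, so both are checked before the rows are trusted.
    """
    rows = json.loads(raw)  # raises ValueError if the model returned non-JSON
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r}: {row.get(dim)!r}")
    return rows

raw = (
    '[{"id":"ytc_example1","responsibility":"company",'
    '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
)
rows = parse_codings(raw)
counts = Counter(r["emotion"] for r in rows)
print(counts["outrage"])  # 1
```

A failed parse or an out-of-vocabulary label raises immediately, which is usually preferable to silently storing a malformed coding and discovering it during analysis.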