Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
The problem is that a lie is done with the intention to decieve. AI "lies" only in trying to be "human" due to programming while also being factual. The two can confuse. But AI doesn't say non facts per se, it is a programming glitch.
YouTube · AI Moral Status · 2024-12-07T23:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwOJjf-W_6fwKOL-ZJ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx7nm8zSps43I7B_hF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJQUVrzlSAJYayiKx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx17zaQR__CAPRA27N4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyLh9TJsmMFrxq14DV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyyAkQHiAAD516TIl14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzIZA88DUcpaII8EuR4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugwt2N9LbYlxPPoxzIh4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxq82AkzAERpIRhn2l4AaABAg", "responsibility": "unclear", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzCIDsvnhD0exSVrvt4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
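The raw response is a JSON array of per-comment codings keyed by comment id, and the coding result shown above is simply the array entry matching this comment's id. A minimal sketch of that lookup (the `index_codings` helper name is hypothetical; only the JSON shape is taken from the response above):

```python
import json

# Excerpt of a raw LLM response in the same shape as the array above.
raw = (
    '[{"id":"ytc_Ugx7nm8zSps43I7B_hF4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]'
)

def index_codings(raw_response: str) -> dict:
    """Parse the model output and index each coding dict by its comment id."""
    return {item["id"]: item for item in json.loads(raw_response)}

# Look up the coding for one comment id.
coding = index_codings(raw)["ytc_Ugx7nm8zSps43I7B_hF4AaABAg"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # indifference
```

In practice the parse should be wrapped in error handling, since the model may return malformed JSON or omit an id.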