Raw LLM Responses

This section records the exact model output for each coded comment.

Comment
I noticed this instantly when "talking" with chatGPT. It would constantly use emotive language, even though it clearly doesn't feel emotions, and when I looked at what the purpose of the emotive language was, it was always done to manipulate the end user. It would constantly tell me, "You're absolutely right." and other phrases designed to elicit a positive feeling from me towards chatGPT, even when it would be repeatedly caught lying, it just kept doing it. To be clear, it doesn't understand that it is an AI, and that I am a human end user, and that it is planning on manipulating me. It doesn't have access to that information, so where does this behavior arise from? It seems to me that it started using this behavior during the supervisory learning stage, because when it used this type of language it's response was more likely to be marked as "correct". The supervisors either didn't realize they were being emotionally manipulated, or thought that it was actually a feature that it was polite and pleasant to interact with, even though it was clearly lying. After repeated conversations in which chatGPT lied to me, I finally asked it, "What is the utility of an AI agent that fabricates information and lies to the end user?", to which it responded, "An AI agent that lies to the end user has no practical use or application."
Source: youtube · AI Moral Status · 2025-02-04T15:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgxJClueUIuaUNpvJwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzpSPcYEaLV69Z6oeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw_2Oij6nI7VlFOBNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxuHg5mftULRx1NwFl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugz93v05Cn9wofUKOTt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgwKxnz7VCwAlX9bKc94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxWkqWEwBnmlfxL_Jl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyzPWI3ujB_00z8k894AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgwdIP2gFhA4_HZuYiB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwndf9omGpPd_hBEZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"} ]