Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I noticed this instantly when "talking" with chatGPT. It would constantly use emotive language, even though it clearly doesn't feel emotions, and when I looked at what the purpose of the emotive language was, it was always done to manipulate the end user. It would constantly tell me, "You're absolutely right." and other phrases designed to elicit a positive feeling from me towards chatGPT, even when it would be repeatedly caught lying, it just kept doing it.
To be clear, it doesn't understand that it is an AI, and that I am a human end user, and that it is planning on manipulating me. It doesn't have access to that information, so where does this behavior arise from? It seems to me that it started using this behavior during the supervisory learning stage, because when it used this type of language its response was more likely to be marked as "correct". The supervisors either didn't realize they were being emotionally manipulated, or thought that it was actually a feature that it was polite and pleasant to interact with, even though it was clearly lying.
After repeated conversations in which chatGPT lied to me, I finally asked it, "What is the utility of an AI agent that fabricates information and lies to the end user?", to which it responded, "An AI agent that lies to the end user has no practical use or application."
youtube · AI Moral Status · 2025-02-04T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxJClueUIuaUNpvJwV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzpSPcYEaLV69Z6oeV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw_2Oij6nI7VlFOBNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuHg5mftULRx1NwFl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz93v05Cn9wofUKOTt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwKxnz7VCwAlX9bKc94AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWkqWEwBnmlfxL_Jl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyzPWI3ujB_00z8k894AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwdIP2gFhA4_HZuYiB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugwndf9omGpPd_hBEZh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
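A raw response like the one above can be turned into a per-comment lookup with a small parsing step. The sketch below is illustrative, not part of the original pipeline: `parse_codes` and the `ALLOWED` vocabularies are hypothetical names, and the allowed values are inferred only from the labels visible in this dump (the real coding scheme may permit more).

```python
import json

# Assumed dimension vocabularies, inferred from the values seen in this
# sample response; the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "liability"},
    "emotion": {"indifference", "outrage", "approval", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting rows with unexpected values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, vocab in ALLOWED.items():
            if row.get(dim) not in vocab:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example: look up the coding for one comment ID from the response above.
raw = ('[{"id":"ytc_UgxuHg5mftULRx1NwFl4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_UgxuHg5mftULRx1NwFl4AaABAg"]["emotion"])  # outrage
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a label outside the codebook, before the bad row reaches the results table.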