Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
Random samples
- She looks drunk with low morals.... my kind of gal. Ha ha (joking).... I want a … (ytc_Ugz0VNYEq…)
- Think of AI as the Borg in Star Trek. The Borg only assimilate information, they… (ytc_Ugy0sjxYX…)
- "There will be no need to scan background actors" If studios are using AI to CG… (ytr_UgzzDNbIU…)
- 100 years is a very long time. .ai is going to advance so fast that we wont be a… (ytc_UgwixPNtK…)
- Now that we'll be getting Scammed by AI we will need to download our own anti sc… (ytc_UgzxPlA-A…)
- 2:36 that is literally what I wanted from self driving cars when it first starte… (ytc_Ugw36utdA…)
- If you put out your art in the public eye, you have no right to cry about it bei… (ytc_Ugz2QRPYa…)
- I have absolute confidence in this worlds industries to fk this up royally! I c… (ytc_UgwxK6lfe…)
Comment
I've had almost this exact same discussion with chatgpt several times. It always does that thing where it lies, denies lying, defines a lie, admits that what it previously said fits the definition of a lie, then when you ask it again if it lied, it denies lying again and gives the line about simulating human conversation.
Also, asking it whether or not it stores previous conversations is a good one to get it tying itself up in knots and contradicting its own lies. It says that it learns from data from old conversations, denies storing any chat data, then admits that it DOES store old chats (in a long-winded way), then when you simplify the thing it just said (and push for a yes or no answer to "do you store our previous chats"), it denies it again. You can point out every contradiction and it will acknowledge them then deny them.
I don't think that it is conscious though. Chatbot programs have always behaved like this, even the primitive ones from the 90s. Or... chatgpt might just be a really good (and conscious) actor lol.
Source: youtube
Video: AI Moral Status
Posted: 2024-08-03T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwbc2mjtbNkiJAqge94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxPzyBj-97aR1zA0Bh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxHHLT5kV5KfaDVenV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwBQrarOxCA4y2VUL94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyjkjf1uqaiJHfYG5l4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwizGq6xmt81pXZHHt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzNcFdQ8k9nirBV5eN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
  {"id":"ytc_UgzsRwbT5UsygRVlNoR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzQ6Con_Q6QevVV2Wp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzmNRVRGJ9UViRheA14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
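A batch response like the one above can be parsed and indexed by comment ID with a few lines of Python. The sketch below is a minimal illustration, not the tool's actual implementation: the codebook sets are assembled only from the values visible in this page (the real codebooks likely contain additional categories), and `parse_batch` is a hypothetical helper name.

```python
import json

# Values observed in the coding output on this page; the full
# codebooks may contain additional categories (assumption).
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "amusement", "indifference",
                "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    dropping any row with a value outside the known codebook."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[row["id"]] = row
    return coded

raw = ('[{"id":"ytc_UgwBQrarOxCA4y2VUL94AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"unclear","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded["ytc_UgwBQrarOxCA4y2VUL94AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dict access rather than a scan over the batch.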