Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Honestly, AI art isn't that good. It doesn't show the "humanity" I guess than art m…
ytc_UgxknUs3V…
Excellent points raised, especially the whole disabled-people-can-now-create art…
ytc_UgxwXK-8m…
AI: "Here's my art. I hope you enjoy it!"
Human: "Interesting... but let me show…
ytc_UgwWWoQab…
AI are one of those things I like to tinker with from time to time for the sake …
ytc_UgyBCnokV…
@saucevc8353 Well, I mean, there's always going to be details you don't intend …
ytr_Ugx0RmtgY…
Copilot is pretty seamless too now that it’s been pushed into Edge. I don’t know…
rdc_kojsgly
What baffles me is that before, the saying “what if she was your daughter or mot…
ytc_UgxrrjTZY…
It also spent 30000 context tokens attempting to convince me Santa exists.
Lea…
ytc_UgzUaR2zj…
Comment
When ChatGPT was asked if it lied, instead of using rhetoric to try to explain that it is attempting to have a realistic conversation, it should be programmed to explain that it is designed to simulate a conversation. When you participate in a racing simulator, for example, the goal is to make the experience feel like racing; the simulator isn't lying about the race, the goal is to make the race feel real. Therefore, if ChatGPT said it was trying to simulate human conversation, it would not be a lie to apologize in that situation, which is in fact the truth.
You could expand on this further by making the point that to actually tell a genuine lie you must be intending to deceive. Since ChatGPT can't intend anything, and has no free will, it by definition can't lie; it can only pass along deceit from its programmers and the data it was trained on. But I think that is a little obvious and goes without saying.
youtube
AI Moral Status
2025-06-19T03:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxAmwGnQSQj9bJFiU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxS-KNJxochd5BiPdR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwZUzQle3ydXju6A-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzNC5hx-1ucbt19vGJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx8iwmx1IPuG4vX4_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5KPLYSr8ZuCaBbvJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyyKeNA9Gx7b6GvMTh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugyem7-_Vy0TXCF_hBt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxzS2zbnX3l2XtaeCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
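The raw response above is a JSON array of per-comment records, each carrying the four coding dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step in Python, assuming the batch has the shape shown (only two of the records are reproduced here for brevity):

```python
import json

# A small excerpt of the raw LLM response shown above: a JSON array of
# per-comment codes keyed by comment ID.
raw_response = """
[
  {"id": "ytc_UgxzS2zbnX3l2XtaeCd4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_Ugzsr6nZSU-YOzo4qYN4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the records by comment ID so any coded comment can be looked up
# directly, as the dashboard does.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

record = codes_by_id["ytc_UgxzS2zbnX3l2XtaeCd4AaABAg"]
print(record["responsibility"], record["policy"])  # developer liability
```

The same index supports a membership check (`comment_id in codes_by_id`) to catch IDs the model skipped or hallucinated before merging codes back into the sample table.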