Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "LLMs *always* "make stuff up". That's what they do. It's just that sometimes, th…" (ytc_UgzqqvUub…)
- "People fr acting like AI is all-knowing and can see the future, like the fucking…" (ytr_UgyQVzTVc…)
- "That first shot, the head nods and mouth animations were definitely similar, but…" (ytc_UgxJcMBSq…)
- "“People were born with the gift of art!” No, we went through so many discarded n…" (ytc_UgxI0ukRJ…)
- "AI rule the world would be better after I look at all the leaders around the wor…" (ytc_UgwAxRv9p…)
- "Ai art should never be taken beyond a scaffold or a placeholder. It should be a …" (ytc_UgxLnBVCM…)
- "Guessing age is stupid af. Okay lets say you're an actual adult and enjoy videos…" (rdc_n7e3tsf)
- "He is right that he doesn't understand much about all this at all. A robot tax i…" (ytc_Ugw8xdBrN…)
Comment
Failing to see the point of all this; it's really easy to fool around with ChatGPT, we all know that.
The thing can't hold an opinion and you keep asking for it! THAT IS IDIOTIC, Alex, yeah, you too can be an idiot. I do that often too, now you're doing it - no grudge held.
You're getting stretches of text from a lot of different books, rearranged to give you the most statistically probable answer - the follow-up found in those books to the text you gave as input. This is a curiosity for people who've never chatted with AI, but a laborious exploration.
ChatGPT was genuine, obviously; you were twisting everything to get to your desired answer, and knowing the weaknesses of that thing made it easy.
So it's just a show, empty of real content, because your interlocutor was a cripple in emotional awareness.
Morality is an emotional parameter. ChatGPT has no emotion. Its answer was splendid: "I can't have a moral opinion, I don't have the tool to make a choice: I don't make a choice" - that's a precautionary principle in itself, though. It's not about the options, it's about the reasons, and the reasons are emotional, outside its purview. Why didn't you first give it a frame of mind to follow? You'd need to delineate a personality, but in doing so you'd be giving it your 'morality', or an arbitrary one if you're just toying, but still based on your idiosyncrasies.
And you didn't even mention Free Will. No Free Will, no real choice. You used to do much better.
Platform: youtube | Posted: 2025-10-13T19:2… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | unclear |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLPJ0vfZ9SzhLKLr14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzLBeq8d6lIIm5Drgh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw8m4Fl-BbNergL9J54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKjDp0n6ot9wgllox4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwodsn1Jw97eGI_RQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtq5RhAZ0N3_Xvhht4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx9Sxklry1S9csY5cN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDKZ8UbeHYgEO8TVV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
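A raw batch like the one above has to be parsed and checked before it enters the coding database. Below is a minimal validation sketch; the category sets are assumed from the values visible in the examples here (the real codebook may define more, and `validate_coding` is a hypothetical helper, not part of the tool shown).

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred only from the
# values visible in the sample output above; the actual codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "approval"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against SCHEMA.

    Raises ValueError on malformed JSON, a missing id, or an
    out-of-schema value, so a bad batch is rejected whole.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for i, rec in enumerate(records):
        if "id" not in rec:
            raise ValueError(f"record {i} has no 'id'")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    f"record {i} ({rec['id']}): {dim}={value!r} "
                    f"not in {sorted(allowed)}"
                )
    return records

# Example: validate a two-record batch taken from the output above.
raw = '''[
{"id":"ytc_Ugx6u9kSP0q1ErdBU9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx0gzwIPuSqnbUrgkx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"}
]'''
coded = validate_coding(raw)
print(len(coded))  # 2
```

Rejecting the whole batch on the first bad value keeps partially coded batches out of the database; a softer design could collect all errors and re-prompt the model only for the failing ids.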