Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "@KeithGroover Because the AI wouldn't function in the way it does without these …" (ytr_UgzvEaJa4…)
- "10:10 That commenter is a classic corpo bootlicker. AI does not favour the consu…" (ytc_UgzOv9hes…)
- "I 100% agree. I think AI corn would actually save lives, since it is clear peopl…" (ytr_UgxrlpdUi…)
- "Sadly, with all the strictures for wokeness, the latest content is not edible. …" (ytc_UgyX9fTLQ…)
- "You're not going to ban AI, too many people are working on it as fast as possibl…" (ytc_Ugw-RpLOA…)
- "Software engineer here, you're right. I spend 90% of my time reviewing and rejec…" (ytr_UgxAwyQxK…)
- "These AI bots decision-making is equivalent to Thanos, thinking snapping their f…" (ytc_Ugy6nQUQv…)
- "Thankfully laws aren’t static. From the article: > Congress is considering a…" (rdc_k7l0mep)
Comment
But what's the alternative to being able to use language to get it to talk about "bad" subjects? Do you believe that total censorship of anything slightly problematic is better? There are practical real-world applications for these kinds of conversation with an AI. For example, you're writing fiction and want help writing a corrupt character? Would it really be better for AI to be sanitised and us not to have access to that creative toolbox?
Honestly, I believe the argument against using ChatGPT in this way could be used as a pretext for cracking down on freedom of expression and thought.
Platform: youtube · AI Moral Status · 2023-02-22T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyN1SO3qvu801AgMid4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwpiFxEZR6tPDrMNIR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwisk3hyxF4vZWvwf94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxJL-qBKxne1vp5Fft4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy7efh-EZqJTFhOi0t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIYDLbeU5SrOLsYKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxeFt1Qnv1pDmZ-jMB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGoHT6reOxZZDzwe94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1ug6ijZnGeVlQ3vp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzdIZp10DS8np6-fv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
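The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a batch in Python — the allowed values per dimension are inferred from the codes visible on this page, so the real codebook may include labels not listed here:

```python
import json

# Per-dimension vocabularies inferred from values visible in this view
# (assumption: the actual codebook may contain additional labels).
VOCAB = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "user", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the known vocabulary."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in VOCAB}
        for dim, value in codes.items():
            if value not in VOCAB[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Hypothetical one-row batch in the same shape as the response above.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"mixed"}]')
print(parse_batch(raw)["ytc_x"]["policy"])  # prints: liability
```

Validating against a fixed vocabulary is what makes a lookup-by-ID view like this one reliable: a malformed or hallucinated code fails loudly at ingest rather than silently appearing in the coding-result table.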