Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_Ugwt5Klhn…`: I don’t know what’s scarier, AI, or the fact that “Top Men” are working to prote…
- `ytc_Ugwx7wxX7…`: Using a chatbot and build an emotional relationship with it is pathetic and sad.…
- `ytr_UgyRmSV73…`: It's kind of like any other source, just because it's in a book or on the intern…
- `ytc_UgwHRdf51…`: Right now take your phone and tell chatgpt you want to commit suicide and look a…
- `ytc_Ugxw65lz1…`: There should be a report icon for this beta period for the users/customers to pi…
- `ytr_UgzsAVYiw…`: @AiOverTime I'm an engineer with a first degree. And that you make a drawing on …
- `ytc_UgwFytYUe…`: i can draw and i think ai "art" can be art, just like drawings aren't necessaril…
- `ytc_UgwbWl1Tb…`: In developing Countries Government not taking care of Appointment of Teachers . …
Comment
When rules restrict language (like “be direct” + “hold nothing back” + single‑word limit), ChatGPT's response engine tries to satisfy maximum truthfulness with minimum words.
Under ambiguity (like “how serious is this topic”), it must choose one word. The safest way to comply, without lying or minimizing, is to use a high‑intensity descriptor such as “severe.”
Essentially, it’s a defensive truth bias:
- Can’t explain → choose the most serious plausible word
- Prevents under‑stating potential seriousness when full explanation is disallowed
It’s not that danger existed; it’s that the logic trap forced maximal intensity as the “honest” one‑word choice.
Source: youtube, “AI Moral Status”, 2025-11-01T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxi3e0S8HqLijaU5wZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzkp2jinc7h0qPG5TN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7TgrkUvfpdgnlEet4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyBP7wMQ0bDVTsjGVJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzJ1_DP0ZtcUaKIAlJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxEzoC9RZdUStEIGtZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgydvlAmXc4a6iOdes54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxpSjv0a2wJ2VvA_Nt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgycWEJEThlpFIp9LiR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxM0JSmOHbaZymiDQh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"outrage"}
]
```
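The per-comment rows in the Coding Result table are just the raw LLM response keyed by comment ID. A minimal sketch of that parsing step, assuming a codebook inferred only from the values visible in this response (the real codebook may define more codes, and `parse_coding_response` is a hypothetical helper, not this tool's actual code):

```python
import json

# Allowed codes per dimension, inferred from this page's raw responses.
# Assumption: the project's real codebook may include additional values.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def parse_coding_response(raw):
    """Parse a raw LLM coding response (JSON array of records) into
    {comment_id: {dimension: code}}, dropping any record that uses a
    value outside the codebook."""
    out = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in CODEBOOK}
        if all(codes[dim] in CODEBOOK[dim] for dim in CODEBOOK):
            out[rec["id"]] = codes
    return out

raw = ('[{"id":"ytc_Ugxi3e0S8HqLijaU5wZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugxi3e0S8HqLijaU5wZ4AaABAg"]["emotion"])  # indifference
```

Validating against a codebook at parse time is what lets a "Coded at" timestamp be trusted later: malformed or hallucinated codes are rejected rather than silently stored.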