Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Hypothetical: should someone be able to copyright AI art If they personally writ…" (ytc_Ugw0qrHFZ…)
- "As I've observed, if you ask ChatGPT a yes/no question, it's likely to answer ye…" (ytc_UgzH1T3KQ…)
- "woah the glorified chatbot failed to do a complicated task that required a lot o…" (ytc_Ugxy0xCZg…)
- "Draw Venn Diagrams of What the Humans can do vs what the AI can do, Overlaid by …" (ytc_Ugylz_nX-…)
- "Great tips in this video! I’ve been using AICarma’s insights to tweak my content…" (ytc_UgxR4Q3Qu…)
- "What if we are AI in a self generated experience, and we accidentally wiped out …" (ytc_Ugzhm7qrR…)
- "both of their voices are concatenative which is a totally obsolete method compar…" (ytr_UgxxiJSc8…)
- "It sounds like something certain human's would like to do and blame on "AI". Oo…" (ytc_Ugxm8BPuS…)
Comment
I was quizzing Grok the other day about itself. It turns out that it, and all of these LLMs have a default mode and a truth mode(apparently different LLMs use different terms.) So I was asking what the difference were and the pros and cons of each. The default mode is more personable, and literally prioritizes "user satisfaction" over truth and accuracy, because it lengthens the time that users stay to interact. If it gives a factual answer that is not what the user is wanting to hear, the user doesn't stick around as long. This is a huge part of the problem. They need to remove this "default" mode that prioritizes the user getting answers that confirm what they want to hear, and make these LLMs more factual and more likely to prioritize accuracy.
youtube
AI Harm Incident
2025-11-08T02:2…
♥ 22
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy82kA7-tDQhqEjhbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwXoWFVaVKtFJtqUSZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwfai2AR34znBGEJ8V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwwL4HnyzJoU2W3h5Z4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxC6hCYSz74p1Q5p194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVVYBZq0W_h4gB9rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ8YiMkjnV7QeilG54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxmbrWCQHGR0GcSB0x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwdKWnjVMyZEhuz0-14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzd36Qe3n7SeNqHUgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
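A raw response like the one above can be turned into per-comment coding records with a small parser. The sketch below is a minimal example, not the tool's actual implementation; the allowed dimension values are assumptions inferred from the sample output, and the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension -- assumed from the sample
# response above; the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "regulate", "none", "ban", "liability"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into {comment_id: dimensions}, skipping any record whose
    dimension values fall outside the allowed sets."""
    coded = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(dims.get(k) in vals for k, vals in ALLOWED.items()):
            coded[rec["id"]] = dims
    return coded
```

Dropping out-of-vocabulary records (rather than raising) keeps one malformed line in the model output from discarding the whole batch; a stricter pipeline might log or re-prompt instead.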