Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think the key distinction between "good" use of AI and "bad" use of AI is, "good" use is coming up with your own ideas for how to transform content in ways no one has done before, and making AI do that, and "bad" use is coming up with no ideas and just asking AI to think for you.
...which is why I have, to this day, not used AI in a single one of my work or creative projects. If I had a novel idea about how my work could be transformed by AI, I'd use it. But so far, the only potential use cases I've had for AI would be just having it do things I already do just fine with my organic human brain. No thank you.
Source: youtube · Posted: 2025-11-11T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzmXE9qI8elHEqk1a54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4-wAXUEH31XlRR8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxpidSiUNUOy5SXszp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw-kEdLiy91fqnjsY54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyBMOM8LKCfVS59Qih4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJJ9fku5UH2m4AY5N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRGvD3WxRd9dHEpaN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyavRxURzL40Mkhq6t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw1lKeE0d6J532PGap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3A0cI1uL-lwF9QtV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
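The lookup-by-ID view above can be reproduced offline. The sketch below is a minimal example, assuming each raw LLM response is a JSON array of per-comment codings shaped exactly like the one shown (an `id` plus the four dimensions `responsibility`, `reasoning`, `policy`, `emotion`); the variable and function names are illustrative, not part of any tool shown here.

```python
import json

# Raw LLM response: one JSON array of per-comment codings, in the
# same shape as the block above (truncated to two entries here).
raw_response = """
[
 {"id":"ytc_UgzmXE9qI8elHEqk1a54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugx4-wAXUEH31XlRR8d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_by_id(raw_response)
coding = codings["ytc_UgzmXE9qI8elHEqk1a54AaABAg"]
print(coding["responsibility"], coding["reasoning"])  # user virtue
```

Indexing by `id` makes inspecting any single coded comment an O(1) dictionary lookup rather than a scan of the array.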