Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):

- "You know, people are afraid that robotic entities may one day directly state tha…" (ytc_UgiH29RQh…)
- "And ask about KIDS new uses for everyday objects, and they would find more of th…" (ytc_Ugzepca8n…)
- "Once you add an element of levity, I can't take you seriously. If I was an AI an…" (ytc_UgzlpOzXx…)
- "This is actually good. AI will work its a*s off and we will just chill. Imagine …" (ytc_Ugya0mhI2…)
- "You don’t just type a sentence and press enter over and over. That gives low qua…" (ytc_UgxiG1DKZ…)
- "So I'm kinda interested in CRASH. Computers that learn from previous hacking att…" (rdc_dy5enza)
- "This has nothing to do with makeup or men, those two guys just clearly haven’t s…" (ytr_UgwOHAGaF…)
- "No one apparently pays attention to the warnings of Science Fiction. If they did…" (ytc_UgwT7RJ1Q…)
Comment
Part of my job is to implement AI in Radiology. There are a whole bunch of obstacles currently preventing us from using AI: finding viable use cases, the cost, and validating that it is safe. But one of the main hidden obstacles is that most people mistrust/hate it, but won't say so. You just have to look at the predominantly negative comments below to get a feel for this.
youtube · 2025-10-27T10:1… · ♥ 20
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxtJTghzYFGdi1rpxJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9YKe3JFRBy7uFz_54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxxd-BlXTJkqWoazZN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyWm_lw-WYIxRLHGYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzvue-db0PMY9_4e-J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfD9cKx-1wSKXKFwB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzvKqRTiCKB7pSZKFZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxvVHTviIAGNGuwqDx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzaQFJZJeWw2LcK7HN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwrqI8_3w-H9Oc-wzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
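Downstream code should not trust a raw LLM response to be well-formed. A minimal validation sketch is shown below; the label vocabularies are inferred from the sample output above and may be incomplete, so treat `ALLOWED` as an assumption to be replaced with the real codebook.

```python
import json

# Allowed labels per coding dimension -- inferred from the sample
# response above; the full codebook is an assumption.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "unclear", "industry_self", "liability", "regulate"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with missing fields
    or out-of-vocabulary labels."""
    records = json.loads(raw)  # raises json.JSONDecodeError on bad JSON
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
        for dim, labels in ALLOWED.items():
            if rec[dim] not in labels:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records
```

A record such as `{"responsibility": "nobody", ...}` would then fail loudly at ingest time instead of silently skewing the dimension counts.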