Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or pick one of the random samples below.
Random samples
- "They all have mostly the same data. Google and Microsoft scrape all the Facebook…" (`rdc_kojzvuo`)
- "The algorithm feeds us whatever gets the strongest reaction and most people just…" (`rdc_ohfj8j4`)
- "I have toddlers and it shatters my heart to see where we are going. Im actually…" (`ytr_UgwshcQ9g…`)
- "@ketotic You're missing my point. My comment was saying that there are ways to u…" (`ytr_UgzBiU2ob…`)
- "So I’ve a question about all this, if everything is controlled by Artificial Int…" (`ytc_UgzM0CbOp…`)
- "Maybe the ai's are right. even when using \"non baised\" statistics it normally co…" (`ytc_UgzM5qiFR…`)
- "Seems like you’re projecting that the interviewer is being deceptive or trying t…" (`ytr_UgzxFR5Fq…`)
- "AI is a tool that allows people that have little capability or understanding to …" (`ytc_UgydKcoO1…`)
Comment
Robots lack compassion.. but robots also don't think things like "I want to kill fucking terrorists". They don't have racist or bigoted attitudes, etc. Robots do what they are programmed to do. If they make mistakes, it is because they were programmed poorly. I don't see any reason why AI of the future won't be much better than humans at determining if someone is a threat. And they won't make judgements based on bigotry either.
Source: youtube · 2012-11-23T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
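The result above shows every dimension as "unclear", which suggests a fallback value is applied when a coding cannot be recovered from the model output. A minimal sketch of one coding record, using the dimension names from the table; the default-to-"unclear" behaviour and the field names are assumptions inferred from this page, not confirmed pipeline code:

```python
from dataclasses import dataclass, asdict

# Hypothetical record for one coded comment. Dimension names come from
# the table above; defaulting each dimension to "unclear" when no valid
# coding is available is an assumption based on the result shown.
@dataclass
class CodingResult:
    comment_id: str
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"

# A record built with no coding data carries the fallback everywhere.
fallback = CodingResult(comment_id="ytc_example")  # placeholder id
```

Here `asdict(fallback)` yields a plain dict suitable for rendering a table like the one above.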
Raw LLM Response
[{"id":"ytc_UgxVTDG_AcOqtX5Mat54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxP9paH9FALh-nIfnN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxZzZdUq5YTfEBRWuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyvoV0RgNJfvfGauOl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyCaSrLWjXndY9nGh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcUcbQq_FNZ__zAWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx6cXP0pv4_NK9-6IN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwCbLlgUMEG7OZrV9R4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw8fQ-5ELa48r5vVPV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxVOqPyOnA2Rcu-oAB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"})
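Note that the raw response above is not valid JSON as printed: the array opens with `[` but closes with `)` rather than `]`. A strict parser would reject it outright, which is one plausible explanation for the all-"unclear" coding result. A hedged sketch of a tolerant parser; the bracket-repair heuristic and the function name are illustrative assumptions, not the pipeline's actual logic:

```python
import json
import re

def parse_llm_coding_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response expected to be a JSON array of coding
    records, returning a mapping from comment id to record.

    Tries strict JSON first, then one simple repair (assumption: the
    model sometimes emits a stray ')' in place of the closing ']').
    Returns {} if the text is unrecoverable, so the caller can fall
    back to "unclear" on every dimension.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        # Heuristic repair: swap a trailing ')' for the missing ']'.
        repaired = re.sub(r"\)\s*$", "]", raw.strip())
        try:
            records = json.loads(repaired)
        except json.JSONDecodeError:
            return {}
    return {r["id"]: r for r in records if isinstance(r, dict) and "id" in r}
```

With this approach the malformed response above would still yield ten usable records, while genuinely unparseable output degrades gracefully to an empty mapping.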