# Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by comment ID, or open one of the random samples below.
- `ytr_Ugz1mabbF…`: "The biggest misconception that people aren't understanding is that AI isn't goin…"
- `ytc_UgyPL-Ztm…`: "Social media of a foreign nation who is not answerable to people uses facial rec…"
- `ytc_Ugz4DhdRj…`: "ChatGPT said, \"So the claim you heard — that ChatGPT would “choose” to protect i…"
- `ytc_UgwVYiLjt…`: "In the next 20 years?! Have you seen how much AI has grown in the last 12 months…"
- `ytc_Ugx30H_Fa…`: "Amazon employee in Europe, those who were laid off due to AI, traied they're rep…"
- `ytc_UgzJQYT0W…`: "This whole thing could be fake. That's how far AI has come. Scary stuff isn't i…"
- `rdc_lzb4m49`: "I don’t believe it. AI relationships can help alleviate loneliness and stave o…"
- `ytc_UgxMvuA-U…`: "He means this literally. It’s a modern Ouija board. Do not use AI. Do not speak …"
## Comment

> The question is, is it really that bad? Yes, probably, but it's this way because we humans have a bias and there is no bias free training data. Because we are the training data. What are the solutions? Either accept that we don't filter it and make room for problematic biases to surface because at least some people will use it that way? Or filter it and accept that it will work more restricted with the obvious problems. Let's hope we have a way to train it that it can filter for intend of its users that people who try to use it in "bad" ways can't and the rest can use it freely, but then some will still complain about it. Because humans have different biases and understanding of what they should get. All these problems are HUMAN problems, not AI problems.

Source: youtube · Posted: 2024-02-28T19:3…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
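A coded record like the table above can be represented as a small typed structure. The sketch below is illustrative only — the class and field names are assumptions, not the tool's actual schema; the values are taken from the table:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    responsibility: str  # e.g. "distributed", "company", "user", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "none", "regulate", "liability", "industry_self"
    emotion: str         # e.g. "resignation", "outrage", "fear"
    coded_at: str        # ISO-8601 timestamp of when the code was assigned

# The row from the table above, expressed as one record.
result = CodingResult(
    responsibility="distributed",
    reasoning="mixed",
    policy="none",
    emotion="resignation",
    coded_at="2026-04-27T06:24:59.937377",
)
```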
## Raw LLM Response

```json
[
  {"id":"ytc_Ugz8V251qsxPFSKOz_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw2xvhWl8ZWJJ7ZaZR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfSX2b6LiYumqcEMZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxeUzyNHTwkwXKfln94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxb27uxTSUW1OGn-Ch4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgylkIzGlElJhJ83olx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz76ttVuzcqMa8ekkh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQVKQdGsZicch-dTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwaNINYCXUrQXiL4lB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx4qRRGKEqxCQvmTUh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
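Because the raw response is a plain JSON array, a single coded comment can be looked up with nothing beyond the standard library. A minimal sketch (variable names are illustrative, not the tool's actual code), using the first row of the batch above:

```python
import json

# One row copied verbatim from the batch response above; in practice the
# model's full JSON array would be passed in instead.
raw_response = '''[
  {"id": "ytc_Ugz8V251qsxPFSKOz_p4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_Ugz8V251qsxPFSKOz_p4AaABAg"]
print(row["emotion"])  # -> approval
```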