Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgyDtmyNn…`: "The problem is an obsolete economic model that is unable to reward hard work unl…"
- `ytc_Ugxv2R_-6…`: "AI needs to be regulated by a board that is not the government. The government h…"
- `ytc_UgzPOxdPS…`: "We need regulations. Every piece that is used to train AI, must be tracked back …"
- `ytc_UgwcsfS4I…`: "Let's be clear. F'ing up the planet is bad for humans, not AI. That's the conclu…"
- `ytc_UgyxFLcKr…`: "There's a defending ai art subreddit and an ai wars one. Turns out ai wars was m…"
- `ytc_UgyHvdXBf…`: "I do not know why we fear Artificial Intelligence. It cannot be any more destruc…"
- `ytc_Ugw9CnVlp…`: "A good visual aid to understand AI hallucinations: I asked ChatGPT to draw a pic…"
- `ytr_UgwGHPjTO…`: "If people sucked at art the AI would as well, it's illegally trained off of real…"
Selected comment (platform: youtube, incident: AI Harm Incident, posted 2022-09-03T21:0…):

> In the foreseeable future, there is a list of things an AI is better at identifying and another list that a human is better at identifying.
> The list for the AI has already surpassed the human. Therefore the AI is safer, despite the fact that it makes mistakes humans wouldn't make.
> The list of mistakes an AI wouldn't make, but a human would, is LONGER.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgytW-PrK7qF0hNyAo94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwpK4Cog6xEhdxAy3l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyGSj82GtZwOZU0H7Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyxmdbSTNyxZXm_8Cd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8v85HcXDH_Ia9FlN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx9FFc6RvwT-h4ab4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwHQN-ZHISZIHOLyDZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyGDhpSirD3cjGPJ9F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxeoh9w7wopz9C0zDh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxurQRp-5yJmKzaUJN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
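As a minimal sketch of how a raw response like the one above could be checked before it is stored in the coding table: the dimension values below are those visible on this page, which may not be the full code sets, and the function name `validate_batch` is hypothetical, not part of the tool.

```python
import json

# Allowed values per coding dimension, as observed on this page
# (ASSUMPTION: these sets may be incomplete relative to the real codebook).
SCHEMA = {
    "responsibility": {"ai_itself", "user", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"approval", "outrage", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    reject any record with a missing id or an out-of-schema value."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical one-record batch in the same shape as the response above.
raw = '[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
coded = validate_batch(raw)
print(coded[0]["emotion"])  # approval
```

A check like this catches the common failure mode of structured LLM output, where the model invents a label outside the codebook, before it silently pollutes downstream counts.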