Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
- "If you ask ai it will say you can learn one of the new jobs that currently don't…" (ytc_UgyoUNYsC…)
- "They are making hundreds of thousands of robots. We’re doomed. They will work …" (ytc_UgytDMrEZ…)
- "Train humans not machines! Thank you for bringing up the theft of apprenticeshi…" (ytc_UgyurrGPp…)
- "lolll. Used to play tennis with Kevin Roose when he was in the East Bay. Glad to…" (ytc_UgxKM35BN…)
- "Until AI can ask a question without having been told to ask a question, it's not…" (ytc_Ugxh2LRjd…)
- "I’ve been saying it as well. Fuck AI, Fuck AI, Fuck AI. I rather die than live i…" (ytc_UgxYYMiM7…)
- "Because its a new technology and people are stupid lol. Once the novelty has wor…" (ytr_Ugx92c8kQ…)
- "Unfortunately Tesla has also publicly asserted that AutoPilot does more than a p…" (ytr_UgzjQdcDI…)
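Whether you pick a sample or paste an ID, the lookup is just a key into the coding store. A minimal sketch, assuming the codings sit in a JSON-lines file named codings.jsonl with one record per comment (the file name and record layout are assumptions for illustration, not the tool's actual storage):

```python
import json

def load_codings(path: str = "codings.jsonl") -> dict:
    """Load coded comments into a dict keyed by comment ID."""
    codings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            codings[record["id"]] = record
    return codings

# Look up one comment by its full ID (the IDs in the sample list are truncated).
codings = load_codings()
record = codings.get("ytc_UgzAAWV01A1khK9dh2d4AaABAg")
if record:
    print(record["responsibility"], record["emotion"])
```

Selecting a sample below resolves the truncated ID to its full form and runs the same lookup.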
Comment

> The problem with AI is they are an extremely good specialized tools that for what they do good genuinely would far surpassed what human can do. It's the same as computer and math programing, human can compete with even a simple calculator most of the time. But complex case and care need human intervention not because AI "can't" but because AI is a machine with no actual consciousness.
>
> We human are the one that create meaning to things, even if some machine can do it, human are the one that get the impact/benefits. If it good for humanity, then it's good. There's so much machine/invention that's crazy high tech but without any benefits to human it's meaningless.
Platform: youtube · Incident: AI Harm Incident · Timestamp: 2024-06-05T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
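In code, one row of this table is a single record. A minimal sketch of that record as a Python dataclass, filled with the values shown above (the class name Coding, the field names, and the example ID pulled from the raw response below are assumptions for illustration; the label sets in the comments are only the values visible on this page):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Coding:
    """One model coding for a single comment; fields mirror the table above."""
    comment_id: str
    responsibility: str   # e.g. "none", "user", "company", "ai_itself"
    reasoning: str        # e.g. "consequentialist", "deontological", "mixed"
    policy: str           # e.g. "none", "regulate", "liability"
    emotion: str          # e.g. "indifference", "fear", "approval"
    coded_at: datetime    # when the coding was stored

coding = Coding(
    comment_id="ytc_UgxAK2DV5CkRpbQJKlJ4AaABAg",
    responsibility="none",
    reasoning="mixed",
    policy="none",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)
```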
Raw LLM Response
[
{"id":"ytc_UgzAAWV01A1khK9dh2d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzE51Hc6mExOSCCSvJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxAK2DV5CkRpbQJKlJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy5qbOWNsnQfBfoHg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwaY0KbxI7l_NiJlAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxcv2pUJ6_yXmZ0mE94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzW-mMdw2Nof_BjxzZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgxpBfBZlkQj5mcybFh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznqqhIIgbdD_vJhS54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwt5f1NG3o3GgFD1tl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
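The model returns one JSON array per batch, one object per comment. A sketch of how one might parse and validate such a response before storing it; the strict-rejection behavior and the ALLOWED sets (built only from labels visible on this page) are assumptions, not necessarily what this tool's pipeline does:

```python
import json

# Allowed values per dimension, taken from labels visible in this response;
# the real codebook may define more.
ALLOWED = {
    "responsibility": {"none", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "indifference", "resignation", "outrage"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw batch response, rejecting malformed or off-label records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of codings")
    for record in records:
        if "id" not in record:
            raise ValueError("record is missing its comment id")
        for field, allowed in ALLOWED.items():
            value = record.get(field)
            if value not in allowed:
                raise ValueError(f"{record['id']}: invalid {field}: {value!r}")
    return records
```

Failing loudly on an unexpected label, rather than silently storing it, makes it easy to catch the model drifting off the codebook.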