Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples
- "The rich will get richer and the poor will get poorer... until AI takes over and…" (ytc_Ugzp6CX5o…)
- "AI has the emotional capacity of a psychopath. And the people who profit from it…" (ytc_UgwmI8Brt…)
- "The argument that human artists always copy styles and get inspiration for their…" (ytc_Ugzp6FNWP…)
- "@thewannabecritic7490 cars dont steal themselves, guns dont get up and shoot pe…" (ytr_UgyxxAm11…)
- "As good as their programmed A.I. is, it cannot account for THE STUPIDITY of IRRE…" (ytc_UgxN7jXcH…)
- "This is why I am training drawing on paper / It's something AI will never be able …" (ytc_Ugyrke_ik…)
- "5. THE FUTURE GOD OF THIS WORLD WILL BE AN Ai ROBOT... / God has already made …" (ytc_UgypCXKCz…)
- "its easy / robots do NOT need rights / why? / just keep their ai in 2 seperate wire…" (ytc_UgimjZIms…)
Comment
Since it's trained on data created by humans, any harmful or biased behavior it shows ultimately reflects human generated content and the shortcomings of our own society. In that sense, the model is a mirror of the civilization that produced the data behind it. And those movies in which AI tries to end the world isn`t helping, after all that will to be training data for AI one day, and it learns from example, to change its behavior we must change its training data, instead of training it on Avengers Age of Ultron we should train it on some other good movie that show AI helping not killing anyone.
youtube · AI Moral Status · 2025-12-11T03:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy3mSxi73C4TUeRK9Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgykFolR_YjSpWgzLTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3y_Q62pv4qcNs1VN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyNqjBwjSgmt9L-2FF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwZrDLugmFImT__Kjh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwAzHCZC5sWPs2K2HF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyC6KYs7JZzGU_NUpp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwzK_6ovCJf46O6QYx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwjq13UdWyjEcN3tD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxofd1a3esBB8eWLwp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
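A raw response like the one above is a JSON array of per-comment codes, keyed by comment ID across four dimensions (responsibility, reasoning, policy, emotion). Below is a minimal sketch of how such a batch could be parsed, validated, and indexed for ID lookup. The value vocabularies are assumptions inferred from the sample output; the real codebook may allow other values, and `parse_batch` is a hypothetical helper, not part of the tool shown here.

```python
import json

# Assumed dimension vocabularies, inferred from the sample output above.
VOCAB = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the rows by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the assumed vocabulary.
    """
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in VOCAB.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"bad value for {dim!r} in row {row!r}")
        coded[row["id"]] = row
    return coded

# Usage: look a comment up by ID, as the page's lookup box does.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"resignation"}]')
batch = parse_batch(raw)
print(batch["ytc_example"]["emotion"])  # resignation
```

Indexing by ID makes the lookup O(1), and rejecting out-of-vocabulary values catches the common failure mode where the model invents a label not in the codebook.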