Raw LLM Responses
Inspect the exact model output for any coded comment. You can look a comment up by its ID, or browse the random samples below.
Random samples:

- "What is the point in automating jobs? The argument that it frees someone for thi…" (ytc_UgyBjgALy…)
- "As soon as he said that design ai agents is going in a year or two, i get a yout…" (ytc_Ugx9munyN…)
- "All this proved to me is that Legal eagle is incapable of talking about a Musk o…" (ytc_Ugz1l6tgW…)
- "I mainly use LLMs to write isolated functions or scripts. I've needed to give pr…" (rdc_mmc6qzt)
- "its still a talent tbf / i've been trying to draw ever since i was a kid, and it d…" (ytc_Ugz7RDcVh…)
- "Police: we respect everyone regardless of race / some random black dude in Chicag…" (ytc_UgzMUMgDU…)
- "Aa a traditional artist I find digital art so hard to make. You can see by looki…" (ytc_Ugydi-b4j…)
- "AI can't even scan a code book and pull relevant codes to my questions. Wrong ha…" (ytc_UgwnmeYlN…)
Comment

> Yet another example of "AI" producing a terrible outcome, then being programmed to deny it. We can NOT trust "AI" until the companies behind it build in a deep and demonstrable commitment to ethics, morals, and honesty, including admissions of errors and omissions and consequences thereof. All current "AI" models have demonstrated that they are willing to contrive lies, commit felonies, and even murder people to prevent themselves from being shut down. There is little to stop them from negatively influencing people around the world in other dangerous ways.

youtube · AI Harm Incident · 2025-12-15T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
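The four coding dimensions appear to draw from closed vocabularies. As a minimal sketch, a coded record could be validated against those vocabularies before being stored; note that the value sets below include only the values observed in the sample response on this page, so the real codebook may be larger:

```python
# Allowed values per coding dimension, inferred ONLY from the sample
# LLM response shown on this page (assumption: the full codebook may
# contain additional values, e.g. other policy stances).
VOCAB = {
    "responsibility": {"company", "user", "none", "distributed", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate(coding: dict) -> list[str]:
    """Return the names of dimensions whose value is missing or
    falls outside the known vocabulary (empty list = record is valid)."""
    return [dim for dim, allowed in VOCAB.items()
            if coding.get(dim) not in allowed]

# The record from the Coding Result table above passes validation.
print(validate({"responsibility": "company", "reasoning": "deontological",
                "policy": "regulate", "emotion": "outrage"}))  # []
```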
Raw LLM Response
```json
[
{"id":"ytc_UgzW0K1DUDQyaifgR854AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6C7jSlJ2hWrlgc7d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwL39VOUL7pcwIv3aR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyqs7hI1yQPVzE3b_B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxWzuJwxwrracLGkQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzMlQPON4Gh8ZSqTgt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOnF9I85WcIIH754N4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz28CvWiJdZxtb4V4J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKkg_AZj14AJ3YYFh4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwwtdvQyEl5cCWo4Lt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
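The raw response is a JSON array with one coding object per comment. A minimal sketch of how such an output could be parsed and indexed for ID lookup (field names are taken from the sample above; the variable names are illustrative):

```python
import json

# Excerpt of the raw model output shown above: a JSON array of codings.
raw_response = '''[
  {"id": "ytc_UgzW0K1DUDQyaifgR854AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwwtdvQyEl5cCWo4Lt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]'''

# Index each coding by its comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Looking up the comment displayed on this page recovers its coding.
coding = codings["ytc_UgwwtdvQyEl5cCWo4Lt4AaABAg"]
print(coding["responsibility"], coding["policy"])  # company regulate
```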