Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Spend 25.000.000 $ airdrop on target, target wears a cheap halloween mask for 4,…
ytc_Ugyy7TiD3…
00:07 ... Correction, not "leading researchers" it's "ai CEOs". The fact that u …
ytc_Ugx1MA3ob…
The only way republicans are going to care about corporations replacing workers …
ytc_Ugz-5m9bI…
I actually do have autism and I do not like AI at all AI all it does is take peo…
ytc_UgwyDVlZn…
This just shows how dumb and greedy boomers are they think they can just replace…
ytr_Ugzlvhxz9…
most likely not gonna stop anything but I REALLY hope that its at least starting…
ytc_Ugwn-PXW_…
AI has the power of "speed" over human intelligence but human intelligence is ne…
ytc_UgwOD5fIW…
Yuval, if you’re curious…
You’re welcome to ask ChatGPT a few questions—not as …
ytc_UgwjndHtn…
Comment
Using your "feelings" is completely useless. You are not educated enough to make the decision you just made. Fact; we do not yet know what makes consciousness real or how it works so we should never try and create it artificially simply because of the ethical implications. We should not be developing A.I.s like this until we do figure it out because we could just be creating beings that suffer in ways we could never even imagine.
youtube
AI Moral Status
2025-12-02T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugy6lQBRNBuIh_3HfC14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzhgEbc63s7BwMszMZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwqrIvA1AoNsqOR3wJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw2gtarnsrta5nRf494AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzGNfnCxfNp0Fjq6Lh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzoMrPmwFpXn3a5IHJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy4OOxRJ3FkQIO8ag54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx6adDVsGcOJhkKUlJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzCMG6tp7iRZDXqnXB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugwj18OmKNgJPQS1Jql4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
```
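The look-up described above can be sketched in a few lines: parse the raw JSON array returned by the model and index it by comment ID, then read off the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch, assuming the response is valid JSON in the shape shown; the `raw` string below is an excerpt of the actual response:

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_Ugw2gtarnsrta5nRf494AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwj18OmKNgJPQS1Jql4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the coded rows by comment ID for O(1) look-up.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up a single coded comment by its ID.
row = codes["ytc_Ugwj18OmKNgJPQS1Jql4AaABAg"]
print(row["policy"])   # regulate
print(row["emotion"])  # outrage
```

In a real pipeline the `raw` string would come from the stored model response for the batch containing the comment, but the indexing step is the same.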