Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record directly by comment ID.
Random samples:

- "Instilling perfect ethics in AI (such as the Quran) is the only hope; the proble…" (ytc_Ugx5Fjgmb…)
- "I can't wait for the stupid AI bubble to burst and these tech companies feel the…" (ytc_UgwsQMSMK…)
- "I wonder where it learned that from. And this is a drop compared to what is ha…" (ytc_UgwIkIrKF…)
- "Scary and dangerous. Robot has no feelings. If china wanted an army like Super S…" (ytc_UgxuxJA1W…)
- "This man does not know what consciuosness is. So he partecipates ai business. An…" (ytc_UgygA8za1…)
- "The Sun will erase us all along with Ai. So yea enjoy every moment you have.…" (ytc_UgxnluZFR…)
- "I understand the concerns, but I don’t see the difference between ‘looking at ar…" (ytc_Ugy8F3aA4…)
- "On flattery: You can turn that off. There's personality settings in OpenAI's mod…" (ytc_Ugx2z7Y-c…)
Comment

> Why would someone use a robot in place of a servant if their outcomes are unpredictable. And they have a will of their own. A computer that exhibits such characteristics would be one which would not be replicated.
> I wouldn't be to worried creating consciousness. Humans know nothing about what it is or how it works.

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted at | 2019-12-29T23:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0gjdV9fhwM-m-7zp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzaDKLbMavJLtCFuLV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxXjpnmOtWG7JYTYnp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzM7D55jKAMLucgup14AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzazQbmVdwYL00S9SV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzYSi9YA1viwzY-opV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4hiZR9Il2L3m5d2t4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxm9eAGlUKJCQJKt5N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyFEwB2G_UoLk2bF_R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxMG6x2mRHVwqZBgvJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
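A raw batch response like the one above can be parsed and indexed by comment ID to recover any individual coding. The sketch below is a minimal, hypothetical illustration (the function name `index_by_id` and the `DIMENSIONS` tuple are assumptions, not part of the pipeline); it uses two entries copied from the response above and skips rows that are missing any of the four coding dimensions.

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = '''
[
  {"id":"ytc_UgyFEwB2G_UoLk2bF_R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxMG6x2mRHVwqZBgvJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a batch response and index codings by comment ID,
    dropping any malformed row that lacks an expected dimension."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {d: row[d] for d in DIMENSIONS}
        for row in rows
        if all(d in row for d in DIMENSIONS)
    }

codings = index_by_id(raw)
print(codings["ytc_UgyFEwB2G_UoLk2bF_R4AaABAg"]["emotion"])  # resignation
```

The same lookup generalizes to the full ten-row batch: parsing once and indexing by ID makes the per-comment "Coding Result" view a dictionary access rather than a linear scan.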