Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
Random samples
- `ytc_UgyJlpVCb…`: “Mononoke San would hate Shad’s guts. If anyone gives a single thought about the …”
- `ytr_UgwpkmwoI…`: “@thelonercoder5816 your argument is invalid due to this being an accepted norm…”
- `rdc_i6s30fp`: “I really hate the usage of AI in these contexts. Sure its a neutral net because …”
- `ytc_Ugz0-vN1p…`: “As a wheelchair user (with mobility issues and permanent nerve damage), AND as a…”
- `ytc_UgzNgdfKW…`: “A.I. is supposed to be used like this. As a tool. Not as a replacement. Its meant…”
- `ytc_UgwxzGxsP…`: “I truly believe that once AI reaches a human level of consciousness, the potenti…”
- `ytc_Ugxs4yioF…`: “What a lame excuse. If we don't do it China will beat us. China is a communist c…”
- `ytc_Ugxv8pYo7…`: “The biggest problem with humans is the ability to overthink. Then AI comes along…”
Comment
Her answers to questions have gotten more smooth since I've last seen a video on her. Less context mistakes.
You can see the politically correct answers programmed into her, to assuage any fear of an AI takeover.
It's like a politician saying what people want to hear to be liked.
That in itself is scary, because when a politician's mouth is moving, he or she is lying, as we know.
Not that she is lying, just programmed to be more likable and to demonstrate how useful they are and can be to mankind.
Interesting video.
Source: youtube · Video: “AI Moral Status” · Posted: 2023-08-29T13:2… · ♥ 52
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzgPsD66kgWoWwtkI94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxpujMaaGYPaRuBO454AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxn036YyG4t-Ito7yF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxLJUfMk5vBltalPSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxS9S-XB-OhBE44bi14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx2zVJGrYy4QgbJkHN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwPyuqoPjC5DRHu0xB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxto3UvmQVIzHkcReV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwrckSruHFWLhpAl5R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPMxt0rxlGBbyBsZV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
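A raw batch response like the one above is only usable downstream if every row parses and every dimension takes an allowed label. Below is a minimal validation sketch in Python; the `CODEBOOK` sets are assumptions inferred from the labels visible in this sample (the real codebook may define additional categories), and the inline `raw` string with IDs `ytc_x`/`ytc_y` is hypothetical test data, not part of the dataset.

```python
import json
from collections import Counter

# Allowed labels per dimension. NOTE: these sets are assumed from the labels
# observed in the sample response above; the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Each row must carry a string comment ID plus one allowed label per dimension.
        if not isinstance(row.get("id"), str):
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

# Hypothetical two-row response: the second row uses an out-of-codebook label
# ("alien") and is dropped by the check above.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological",'
    '"policy":"liability","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"unclear","emotion":"unclear"}]'
)
rows = validate_batch(raw)
print(len(rows))                              # 1
print(Counter(r["emotion"] for r in rows))    # Counter({'fear': 1})
```

Filtering rather than raising keeps one malformed row from discarding a whole batch; the dropped IDs can then be re-queued for recoding.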