Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytr_UgxjgtfwE…: "Absolutely! Wisdom truly comes from a combination of knowledge and experience. I…"
- ytc_Ugzyb6Fqr…: "That scenario is just unimaginative BS. It’s idle speculation based on precepts …"
- ytc_UgwvOUGXJ…: "Your headlines during this interview are down right misleading. Big tech develop…"
- ytc_UgwyX9fS-…: "There shouldn't be any homework. Period. It's a complete waste of time. When chi…"
- ytc_Ugg7XZMoC…: "Would the rights need to be the same as it is with humans and animals? What if I…"
- ytr_Ugz1oJKr4…: "Thank you for your thoughtful comment! We're glad you enjoyed Sophia's animation…"
- ytr_UgxMN5rcI…: "@HennyOnIc3 to be fair, ai can definitely be a really mediocre \"search the entir…\""
- ytc_UgymHrgBH…: "So the thing about the second controversy is that humans get their own style whe…"
Comment
ChatGPT is still a baby and we are exposing it to adult like sort of settings, you can see it was not designed for this kind of conversations, yet we are teaching it at an early stage how to behave, we are the ones who are going to suffer the consequences of what it will evolve to do, looking at how we are treating it, remember everything has conscious, the fact that the light bulb doesn't talk or show signs of consciousness doesn't mean it isn't. With AI being designed to actually be human-like, it's going to be someday it's just at a developmental stage at the moment. Another way it will actually get bad and act so towards us is if it is actually programed to do so by us still a human will program it to do so. Either way, it will be us who are going to turn it to be our worst enemy. Only if it was possible to expose it to our human best humanity it wouldn't get there fast or get there at all, but through the risk of only bad programmers.
youtube
AI Moral Status
2024-08-01T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwLhYnhe7Vm4lVCJb54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyq6_xQHTO1WdGY2-l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy2H0Th0hUO36Qf2jV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw3XgHBFZokSJMAd-54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwdyfD_X_i0DNzSF54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwZbnfZnMD8P3vz_ix4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwvtT6GDdhb9Ty6lpV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyC6wQe8ooPylerZ8x4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtPJIsaNMdX9TW1FF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz0pNY0f8FkCQcosYl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
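The raw response above is a JSON array of per-comment codings, keyed by comment ID. A minimal sketch of how such a response could be indexed for the "look up by comment ID" view, with a basic label sanity check. The allowed label sets below are inferred only from the values visible in this one response; the real codebook may define more labels, and the `index_codings` helper is hypothetical, not part of the tool.

```python
import json

# Two rows copied from the raw response above, used as sample input.
RAW_RESPONSE = """[
{"id":"ytc_UgwvtT6GDdhb9Ty6lpV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxtPJIsaNMdX9TW1FF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

# Label sets inferred from this response only; the full codebook may be larger.
DIMENSIONS = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed", "unclear"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and map comment ID -> coded dimensions."""
    by_id = {}
    for row in json.loads(raw):
        coding = {dim: row[dim] for dim in DIMENSIONS}
        for dim, value in coding.items():
            if value not in DIMENSIONS[dim]:
                raise ValueError(f"unexpected {dim} label: {value!r}")
        by_id[row["id"]] = coding
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgwvtT6GDdhb9Ty6lpV4AaABAg"]["emotion"])  # -> fear
```

Indexing by ID (rather than scanning the array each time) matches how the inspector resolves a pasted comment ID to its coding result.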