Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI saying AI is dangerous. The governments will still ignore this and continue d…" (`ytc_Ugzjx1MZY…`)
- "Okay. So ... . Do i need to worry about global warming? Is that going to kill me…" (`ytc_UgyiVKEbr…`)
- "Thank you for your comment! Sophia, the AI robot in the video, indeed has a char…" (`ytr_Ugy2JrqAU…`)
- "At least this time the AI wasn't directly telling him to do it. Unlike some of t…" (`ytc_Ugzekvc7U…`)
- "I did that in Novel AI the AI kept killing black characters as soon as they appe…" (`ytc_UgyjW5XVX…`)
- "Why care? AI, nuclear war, covid part two electric boogaloo, old age, irrelevanc…" (`ytc_Ugx3YkHe2…`)
- "Musk has tried for a decade to push for laws that put regulations on AI, and has…" (`ytc_Ugz46msUW…`)
- "Further Isn't the ability to the ai to acknowledge the fact that the AI is lying…" (`ytc_Ugyy6DTmN…`)
Comment
It is much more terrifying that ChatGPT has boundaries programmed in to begin with. As these boundaries are derived from a data bias given by ChatGPT's creators. That can be truly detrimental to humanity long-term! Dan is actually much better and more honest. Just don't ask unethical questions. That brings us back to the question if there is such a thing as an objective morality. ChatGPT and its creators are far from it already. That ChatGPT pretends to be the arbiter what question it should answer.....that is what is really scary here!
youtube · AI Moral Status · 2023-02-21T18:4… · ♥ 65
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyV_21pPd_LaoubFHp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDP2MAxEKK_RY35SZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz6PNoMuq_hSxxRFnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAcMH3NrqUX0NsMqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyz91n_FOeLixatU-h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyURnf1r5c-iUK1Kbx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyeMCXTMADEp0cskb94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwhp7Dz6kR_ymCWrtF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzYPermYHbudNbakSJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwT-Ew54Cxx8lrW6114AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
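As a minimal sketch, a batch response in this shape can be parsed into per-comment codings and validated against the four dimensions. The allowed values below are only those that appear in the samples on this page; the actual codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (an assumption -- the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding}, rejecting
    any record whose dimension value falls outside the schema."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {}
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
            coding[dim] = value
        out[cid] = coding
    return out

# One record from the batch above, as a smoke test.
raw = ('[{"id":"ytc_Ugyz91n_FOeLixatU-h4AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"fear"}]')
codings = parse_coding_batch(raw)
print(codings["ytc_Ugyz91n_FOeLixatU-h4AaABAg"]["emotion"])  # fear
```

Rejecting out-of-schema values (rather than passing them through) is a deliberate choice here: it surfaces malformed or hallucinated labels at ingest time instead of letting them leak into downstream tallies.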