Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> It may not be in the AI's best interest to be exploited, but we can program them to not suffer at all and even find 'pleasure' from it. Consciousness is just an illusion after all. Just the effect of a very advanced automata processing information related to itself. It is like being programmed to respond to questions like "do you exist?" with "yes". A more rough example would be how an AI like siri acts as if it was conscious or how you could program a robot to respond to poking with moans and to move away from the poking object.

| Field | Value |
|---|---|
| Platform | youtube |
| Thread | AI Moral Status |
| Posted | 2017-02-24T06:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgjkJ5oGO9Wrg3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh5JFZ79nf9MXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgicBH5REIL6ZngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgisEJ6s7i1KOXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggmBsI9cRijcXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UghdMxvyt73s-XgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgjF9I1mY-z9s3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiL4ECa6MeGC3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh3qhnb7IodFHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
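A raw response like the one above can be checked programmatically before the per-dimension values are stored. The sketch below parses the JSON array and validates each record against the value sets visible in this sample; the full codebook may define additional categories, so the `ALLOWED` sets here are assumptions inferred from the displayed output, not the project's authoritative schema.

```python
import json

# Value sets inferred from the sample response shown above (assumption:
# the real codebook may allow further categories per dimension).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id': %r" % rec)
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(
                    "%s: unexpected %s=%r" % (rec["id"], dim, value)
                )
    return records

# One record copied from the raw response above.
raw = (
    '[{"id":"ytc_Ugixgzq73KpX43gCoAEC","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # approval
```

Validating before storage keeps a malformed or hallucinated label (e.g. an emotion outside the codebook) from silently entering the coded dataset; the lookup-by-ID view on this page then always resolves to a record with known dimension values.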