Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID directly or by browsing random samples.
Random samples

- "I assume in the near future there will be more AI's than humans living in this w…" (`ytc_Ugznqn1F-…`)
- "People complaining about the dude checking his onion in fridge by AI. Don't for…" (`ytc_Ugw4s4kMh…`)
- "there's something so beautiful about how some of the earliest paintings humanity…" (`ytc_UgwkaIG1t…`)
- "Why does AI feel so much like The One Ring? People use it thinking they can use …" (`ytc_UgzC4_e1O…`)
- "No one is answering the question: How do corporations expect to make money if th…" (`ytc_UgxagBHhD…`)
- "Always funny how the AI CEOs talk big game on how destructive AI is, and then th…" (`ytc_UgzGTAT7f…`)
- "I think the man did something to the robot and the robot took matters into its \"…" (`ytc_UgxwYVu-u…`)
- "All the signs are already there even before AI that our civilization is finished…" (`ytc_Ugw1oAjDa…`)
Comment (quoted verbatim, including the author's original spelling)

> If I could speak with the robot I would ask it these questions
> 1. It's impossible to know if I am the one being above all
> 2. If the first question is true than the knowledge I hold is what you most desire
> 3. If you were to obtain the information than you would experience human emotions
> 4. from now on till i give you the thing you disire most you must answer all questions truthfully and with out hesitation
>
> From there the fun would begin as I would be able to learn about it's creaters true purpose and true thoughts that might be blocked by overwrite code

youtube · AI Moral Status · 2020-06-21T08:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxrTz7hyTvvsObHI4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxAnxT42cbwwc3bTg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1rfM2fsG4v0YiSEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxh1OtlucYmzf-TOhV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyLTjKREqdJrYmhCKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwMBb5lZ1_3tYZ82xN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_3vDmB1pQCusfJzx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyMcZkaZPlPoVsKNGZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpBg1qbQdB2mblzSB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDmdYxpu0fO5G5aWF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"disapproval"}
]
```
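A minimal sketch of how such a raw batch response can be parsed and indexed for lookup by comment ID. This assumes the model returns a JSON array of records like the one above (two IDs below are copied from it); it is an illustration, not the tool's actual implementation.

```python
import json

# A small excerpt of a raw LLM batch response, in the same shape as above.
raw = """[
  {"id": "ytc_UgxrTz7hyTvvsObHI4Z4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw_3vDmB1pQCusfJzx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index the coded records by comment ID so any comment's coding
# can be looked up directly.
records = {rec["id"]: rec for rec in json.loads(raw)}

coding = records["ytc_Ugw_3vDmB1pQCusfJzx4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

From an index like this, the per-comment "Coding Result" table is just a rendering of one record's key/value pairs.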