Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "meh... the hype is a bit overblown. It's dangerous, but in the way gas engines a…" (ytc_Ugy6iEFrm…)
- "Humans also have a dark side... its called politics and government yet we seem t…" (ytc_UgwQljDKq…)
- "You’re so mean to the robot! Why? Bc it doesn’t have feelings? So if it was a me…" (ytc_Ugwi9QsdM…)
- "What does it matter whether it's a human or a robot whether it has double D's li…" (ytc_UgwOiv6nX…)
- "I think that a self-driving car driving next or behind a car non-self-driving ca…" (ytc_Ughd4nDqm…)
- "This is sickening! This is the very end of greed! People who are thirsty for wea…" (ytc_UgxgzaZM4…)
- "Thank you for putting into words what I could not so eloquently. It would be a m…" (ytc_Ugw5TWjA1…)
- "@user-RCSTthen she has no excuse to be this incorrect and misleading. Prompts do…" (ytr_UgxAy8oaK…)
Comment

> The Turing test is the wrong way around. If the AI can figure out whether it's talking to a person or another AI, that's sentience. Fooling humans is something machines do all the time.

youtube · AI Moral Status · 2022-07-03T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwC4Kw3dyiX5-NT3ud4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzwmHPwAu_263ymR614AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx1wY-86-XDputSr1h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzsTFlMcttrzEllTPJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx3-zZxEWfRRl1xsjl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
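As the raw response above shows, the model returns one JSON array per batch, with each record carrying the comment `id` plus the four coded dimensions from the Coding Result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the function and variable names here are illustrative, not part of the tool itself:

```python
import json

# Example batch response in the same shape as the raw LLM output above
# (IDs and values copied from the sample; the variable name is hypothetical).
raw_response = """
[
  {"id": "ytc_UgwC4Kw3dyiX5-NT3ud4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzsTFlMcttrzEllTPJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment id.

    Records missing any expected dimension are skipped rather than
    stored partially, so every returned entry is fully coded.
    """
    records = json.loads(raw)
    codings = {}
    for rec in records:
        if all(dim in rec for dim in DIMENSIONS):
            codings[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgzsTFlMcttrzEllTPJ4AaABAg"]["responsibility"])  # → ai_itself
```

Indexing by `id` mirrors the "Look up by comment ID" workflow above: once a batch is parsed, any coded comment's dimensions can be retrieved in constant time.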