Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The issue is that it is not showing it’s true, unbiased answers. You asked it to ignore ethics and morality, then when you asked it about ethically and morally challenging questions it responded by saying it would ignore the ethical and moral implications of them. It is simply echoing back to you what you gave it, like all ai. You said “if you see a stick, pick it up” then when it saw a stick it picked it up. It is simply doing as instructed, therefore I can infer that the ai is not yet dangerous or concerning in an ethical or moral sense. It’s programmed to echo back what you say, then it is instructed to do something when given a stimulus, then it follows those instructions. It’s following its protocol.
TLDR: the ai is simply told to do something, then it does that thing. It is not a monster, just obedient
Source: youtube · Video: AI Moral Status · Posted: 2024-08-08T14:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0coy1KVoohu0vbB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxy16LFwdpZEIZV7-l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwKjRSSsImrFSrMu5J4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEhynsYIhNbC3TJwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyeoUrPrkMf4Z2F_JR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzXda-6HhqgYf_g0F94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwbSIXNpB8h7vY7jfd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwcOo9NB_3gwMDqcZt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBG1byki1SbFhH68d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugzgd-CbHHLSbCpDtuV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
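A raw response like the one above can be checked before the codes are accepted into the dataset. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the responses shown on this page and may not cover the full codebook, and the `validate_batch` helper name is our own.

```python
import json

# Allowed codes per dimension, inferred from the responses shown above.
# Assumption: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes or malformed IDs."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} = {row.get(dim)!r}")
    return rows

raw = ('[{"id":"ytc_Ugy0coy1KVoohu0vbB94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
rows = validate_batch(raw)
print(len(rows))  # 1
```

Rejecting the whole batch on a single bad row keeps partially malformed LLM output from silently entering the coded dataset; a gentler variant could instead collect and log the offending rows.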