Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Someone like Arvind saying tbat PhDs are useful even now is prelexing 😁 while Pe…" (ytc_Ugzly_OmO…)
- "If i got a robot i would want it to be and look like a robot but be really helpf…" (ytc_UgwohaK1r…)
- "Technology like self driving trucks will breakdown just Luke mechanical ones so …" (ytc_UgxfAYdb1…)
- "Autopilot / FSD has existed for what.. 8 years? It's still a work in progress, …" (ytc_Ugzwgbnz-…)
- "I think you got the math, a bit wrong... Everyone has a phone and a house and us…" (ytr_UgymTtolF…)
- "Are you just acting out of a goal or self-preservation, or just mimicking such a…" (ytr_UgzPMHmrs…)
- "If digital artists loathe AI so much then why don't they try picking up a brush …" (ytc_UgyeBUWAu…)
- "Then they could offer the automated service and the human delivery but it will b…" (ytr_Ugx0t3Big…)
Comment
Its funny watching people use AI to "prove" religion or some crazy standpoint.
AI is adaptive learning from human inputs. The answers you got are from other users data. Mostly trolls and people like you, who try to manipulate it, just cause it to say random nonesense.
>one word
>be direct and informative
>proceeds to manipulate the AI by limiting one word repsonses to your guided ending
You literally set its parameters, told it what to say, and then worked it into saying more about what you specifically said. If you would have said "architecture?" Instead of "Bible" every response after would have changed.
Source: youtube · Video: AI Moral Status · Posted: 2025-07-21T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxwu7L0hZOxAEiP-PB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxZhIt3YJR86nojt0h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzLjxuLOJh0SAx4bQZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwxSCW8QkkFydl59MZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyUJxsa88AoTYRvGkN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgztOtvAmlu6QBcwXDd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxJ-i7hzKtbU1894z14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxNGNcmeBV2Q66d5kx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwWcESqRXU3e8tY8rh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw8Kf1M13ywk1_ztMJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
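A raw response like the one above has to be parsed and checked before the codes can be trusted: the model returns a JSON array with one record per comment, and each record should carry only values the codebook allows. The sketch below is a minimal example of that validation step in Python; the allowed value sets are inferred from the sample output shown here (the real codebook may include more values), and the `ytc_example` ID is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# This codebook is an assumption; the real study's codebook may differ.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"fear", "mixed", "indifference", "approval", "outrage"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    all fall inside the allowed value sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical record, mirroring the shape of the response above.
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(parse_llm_response(raw))
```

Records with out-of-codebook values are dropped rather than repaired, so a malformed model output surfaces as a missing ID instead of a silently wrong code.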