Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
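The same lookup can be scripted offline. A minimal sketch, assuming the coded records are exported as a JSON array shaped like the raw response at the bottom of this page; the file name `coded_comments.json` is hypothetical:

```python
import json

def lookup_by_id(path: str, comment_id: str) -> dict | None:
    """Return the coded record whose "id" matches comment_id, or None.

    Assumes `path` points to a JSON array of records shaped like the
    raw LLM response shown below: {"id", "responsibility", "reasoning",
    "policy", "emotion"}. The file name is hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

# Example: fetch the coding for the comment inspected on this page.
record = lookup_by_id("coded_comments.json", "ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg")
print(record)
```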
Random samples for inspection:
- Trying to make an AI lie about being conscious and seeing that it could already … (ytc_Ugx-VGGKS…)
- I think AI is grossly overated. The thing that really scares me is the people wh… (ytc_UgzrQWdLH…)
- Alex O'Connor content challenge: Take a shot anytime the AI model says "I totall… (ytc_Ugw8m4Fl-…)
- imo, as long as you make the storyboard, keyframe, sfx, and plot, I don't see an… (ytc_UgzOS8JH6…)
- Me: When I ask you a question, please answer as 3 different characters. First as… (ytc_UgwLmnuVN…)
- Oracle is laughing in the corner 😂. Because it knows that AI won't replace you b… (ytc_UgyvU2AQM…)
- > These chatbots aren't trained for war, they aren't trained on military resp… (rdc_o7pqrvv)
- The comparison to youtubers for me is what made this click the most, because I c… (ytc_Ugw1vc7LT…)
Comment
An important thing to note: while this threat is real, the AI in question is not likely to actually be a conscious, self-aware being. That's a long way away. But what we do have, and will continue to have, are complex programs capable of learning on their own and pursuing goals given to them. The AI that destroys us will not even be conscious, just very good at mimicking how a conscious being would speak and present itself.
Honestly I find that more terrifying. We will still be destroying ourselves, just using software that doesn't even understand what it's doing.
youtube · AI Governance · 2023-07-14T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
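Every coding result reduces to the same four dimensions plus the comment ID. A minimal sketch of that record as a Python dataclass; the class itself is illustrative, and only the field names and example values come from the raw response below:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment, mirroring the keys in the raw LLM response.

    Illustrative only; the field names are taken from the JSON records below.
    """
    id: str
    responsibility: str  # e.g. "none", "company", "ai_itself", "user", "distributed"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "indifference", "approval", "resignation", "mixed"

# The table above corresponds to this record in the batch below:
row = CodedComment(
    id="ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg",
    responsibility="none",
    reasoning="consequentialist",
    policy="unclear",
    emotion="indifference",
)
```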
Raw LLM Response
[
{"id":"ytc_UgzgFuuwnbPlnRsPXl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyOgP8RdRvGqY2urD54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxPG3rTY5SMr5pd2X54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzL6MrltpezmW8yQjx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwfA20nPymYqD-tpMx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyPiPLJCyfOFLbRSpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyeOabVIGTRqZp28ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxOqgiZKHlqT7DtjQ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
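A batch response like this is easy to sanity-check before it is merged into the coded dataset. A minimal validation sketch; the allowed value sets here are inferred from this sample batch, not from a published codebook:

```python
import json

# Value sets inferred from observed batches; an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "user", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM batch response and return a list of problems found."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rec['id']}: bad {dim}={value!r}")
    return problems
```

Run over the batch above, `validate_batch` returns an empty list; a truncated or malformed model response surfaces as a parse error instead of silently corrupting the coded dataset.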