Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "I'm sure the professor is a smart woman, but her knowledge on AI is terrible.…" (ytc_UgzXVQIol…)
- "The former head of OpenAI safety team says that our species extinction will be b…" (ytc_UgwQZ99XF…)
- "@illbeyourmonster1Does the AI care about the gender differences? Does the AI car…" (ytr_Ugy3Ifpdf…)
- "You dont have to have talent to be an artist and not go the ai art route…" (ytc_Ugy886bJd…)
- "I am not nearly so worried about whether AI is conscious as I am about whether o…" (ytc_Ugx1CJwX0…)
- "Here's a suggestion: robots/AI should be given rights when they [edit: _decide_ …" (ytc_Ugg3qEenr…)
- "The AI one is too erratic and smooth-looking to be an actual professional lookin…" (ytc_UgyXgexlI…)
- "in my work theres no use for AI but company executives are so fixated on buzzwor…" (ytc_Ugy_KCNYs…)
Comment
I wonder how many people in here hating on him saying he’s a liar, a killer, going to ruin humanity, etc., are also using ChatGPT, lol.
It takes a hell of a lot more than one person to destroy humanity. If hundreds of millions of people didn’t download ChatGPT, how would this work? If hundreds (tens?) of millions of people didn’t support Trump, how would he screw up this country?
It’s just too convenient to blame everything on one person. Everyone did the same thing after the Nazis fell from power: “It was all Hitler! He was a monster!”
Easier to believe in monsters than to accept that humanity itself is the problem, apparently.
youtube · 2025-09-23T22:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx2w6y3eW_t6LnrT-V4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyVaQ2LFxCxEaPkUsV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw2zz6umF5ehh6sTCd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyS5UzsUzGX_2LP6c54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxLHxD028wkUk9q30B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxtJj_5yBiRhxftKOl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzoIGivhr1U9ZYQuf94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugz7hAKaF2K-vMktJEJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw_117TjxgH_A6Vy-94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwFdAFcLVTo11lDHVB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```