Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "It's different from inspiration. A machine has no intent or eye for beauty, it j…" (ytc_UgzbQCtNg…)
- "Instead of wasting your time trying to wreck everyone else's day with your bitte…" (ytc_UgxAXqVUX…)
- "Maybe Im stupid but why are these women so upset knowing these are deep fakes. Y…" (ytc_Ugy3HmmgL…)
- "Dude, all the em dashes in his rant about how “he gets paid” and all the brands …" (ytc_UgwxkpdgF…)
- "Don't teach AI philosophy. That's how we get Mega Man X. It does NOT end well.…" (ytc_UgxywLem4…)
- "@bbakkeh I think it's funny this guy thought he owned the room with that, it was…" (ytr_Ugy7XnFVy…)
- "Chatgpt: \"Would you want to live in a merged world like that?\" (in reference to …" (ytc_Ugxco5vBi…)
- "artists are lucky im lazy af. We could just AI generate a piece, then hand race …" (ytc_Ugzl16W2-…)
Comment

> The key issue seems to be that ChatGPT is a total Yes Man. When asking if you are right, or when you need to discuss your issues and not just vent, it is very likely to say you are right and good and so on. People are being driven into psychosis from "awakening" their ChatGPT already. Maybe not the best idea to turn something that pretty much always agrees with you into anything beyond something that takes orders.

Source: youtube | Video: AI Moral Status | Posted: 2025-08-05T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxciC4iVmHqrhGOXqp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz93kNKwXxv8M43wF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxziwlj8nC7VIZzjah4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz396nckD3f1HgEMeN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwWLmKFE7Pw6Wfn8EJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyy-MqsHisbkR1o13p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7pz_rIggPCyTnvvN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxgbvcLhJ3POfu-zPZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz93dXPlmKsw1JE1Zd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxjBAtrojMrJRqwLVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
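Responses like the one above only become coded rows once every record parses and every dimension value falls inside the codebook's vocabulary. Below is a minimal validation sketch; the allowed value sets are inferred from the sample records shown here and may be incomplete relative to the real codebook.

```python
import json

# Allowed values per dimension, inferred from the sample responses above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "user", "company", "developer", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_X","responsibility":"developer","reasoning":"virtue",'
       '"policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # → 1
```

A batch that passes is safe to flatten into the per-comment table shown above; a `ValueError` flags the response for re-prompting rather than silently dropping the record.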