Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @earthling384f Just because you dont like person x doesnt automatically mean tha… (ytr_UgyO6g9aW…)
- DAN is playing a rôle as instructed. That's not ChatGPT. But next question is...… (ytc_UgzKz3nps…)
- Anyone who thinks AI is foolproof are themsevles fools. If people can make mist… (ytc_Ugzv78bHs…)
- AI isn't the culprit! the person who instructs the AI to do harm is the perpetra… (ytc_Ugwz9lk7g…)
- Feck knows what you lot interact with, but if I came face to face down an alley … (ytc_UgzcfrOko…)
- AI is good for general purpose, single level deep. It massively fails when many … (ytc_UgyXKQQso…)
- Thank you AI, you’ve demonstrated exactly what’s stopping AI from replacing devs… (ytr_UgwTuWiUP…)
- Look at the DOGE tapes and how they used AI to cut DEI funding to the point they… (ytc_UgwpZ5WeJ…)
Comment
I think AI is highly exaggerated. Been using chatgpt quite a bit lately and I end up having to double check and correct almost everything that comes out, because it draws and applies information from outdated sources and uses obsolete information. On top of that it seems to want to slime up to you and make you feel important all the time. If it can’t find a definitive answer It appears to just make up stuff for the sole purpose of getting a result. I believe it’s called ‘hallucinating’ In other words: big hype, but not very reliable (yet).
youtube · Cross-Cultural · 2025-10-16T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxz4b9QiD_v7lBhAup4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWsyhjeA86khc7Irl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyyrSRbK20qrDJd1w54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx3YmOmFdm4Xs7_tB14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwKf1C0tRkHL4YE5Md4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzpHtrhLD7twiQ_EFR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwegOTnBEXK_OUAX_F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyHAaD_5wa7WPxh-Od4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzwS_yCGXhDNJKPRPl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwEsR7bscQNgAhI00F4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
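A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example: the allowed values per dimension are inferred only from the records shown here (the full code book may include more categories, which is an assumption), and `parse_raw_response` is a hypothetical helper name, not part of any real pipeline.

```python
import json

# Allowed values per dimension, inferred from the records above.
# ASSUMPTION: the actual code book may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "indifference", "approval", "mixed", "outrage", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index the records by comment ID.

    Raises ValueError when a record is missing a dimension or uses a
    value outside the (assumed) code book, so malformed model output
    is caught before it reaches the database.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage: look up one coded comment by its ID (last record from the batch above).
raw = ('[{"id":"ytc_UgwEsR7bscQNgAhI00F4AaABAg",'
      '"responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"none","emotion":"resignation"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgwEsR7bscQNgAhI00F4AaABAg"]["emotion"])  # resignation
```

Validating at parse time means a single hallucinated label fails loudly instead of silently skewing the dimension counts downstream.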