Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- I WANNA DIE IN THIS WORLD WHEN THERES A ROBOT THEY DONT THINK ABOUT THE HACKERS … (ytc_UgzIdCvbO…)
- I program with chatgpt as a helper it now makes none stop mistakes even with syn… (ytc_Ugwhgs-wz…)
- This is 2025 AI is nearly taking over Really? I have already subscribed, should … (ytc_UgxEZXyPG…)
- First humans created the concept of God , or became aware of forces and events t… (ytc_UgzKwKGDz…)
- Yeah, I watched all these AI stuff on a movie called War Games. Is that a true s… (ytc_UgwkxxDRL…)
- No, science is not a religion. What is a religion is evolutionism saying that th… (ytc_UgzoxMwUn…)
- This needs to be said to the AI bros who believe CEOs of Generative AI Companies… (ytc_UgzT41GI-…)
- im so fr right now, just got an add for ai in the middle of this video LMAOO… (ytc_UgzrDuwRd…)
Comment
There is a setting in gpt called 'Improve the model for us' turn it off, problem solved and even that is not enough you can turn on temporary chat on the top right corner which doesn't save that chat neither use your shared things anywhere even improving itself.
Note - Still don't over rely on AI, you can go there for in general situations or to get opinion about a certain situation.
youtube · AI Moral Status · 2025-08-31T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
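Each coded record is a flat mapping from dimension to a categorical value. A minimal validation sketch in Python, where the allowed value sets are inferred only from the codes visible on this page (an assumption, not an official codebook):

```python
# Allowed values per dimension, inferred from the examples on this page
# (an assumption, not an authoritative schema).
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"industry_self", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record; empty means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record from the table above passes:
print(validate({"responsibility": "user", "reasoning": "consequentialist",
                "policy": "industry_self", "emotion": "mixed"}))  # → []
```

A check like this is useful before trusting a batch of model output, since an LLM can emit values outside the codebook.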
Raw LLM Response
```json
[
  {"id":"ytc_UgwhCK5HLx226Vr3W5B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhUXf8Mzpj-EEZRwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzre9r7Lct0JoDdiG14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgxyuUqrsuydYI9N5t14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzi3GsoGtEaqH2trK54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyQlnyn0vBux171hv94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxyssjRw1gfVI8VhEV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxSvhNvQELtMRxHq7p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnUIUtdwFkJlxzNDF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy9h_j-oH5BDV2KIY54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
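Because the raw response is a JSON array of records, the "look up by comment ID" step amounts to parsing the array and indexing it by `id`. A minimal sketch, with `raw` standing in for the model output above (shortened to two of the records shown):

```python
import json

# `raw` stands in for the raw LLM response above; two records are reproduced here.
raw = '''[
  {"id":"ytc_UgwhCK5HLx226Vr3W5B4AaABAg","responsibility":"ai_itself",
   "reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzre9r7Lct0JoDdiG14AaABAg","responsibility":"user",
   "reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]'''

# Index the coded records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

print(by_id["ytc_Ugzre9r7Lct0JoDdiG14AaABAg"]["emotion"])  # → mixed
```

The third record in the full response (`ytc_Ugzre9r7…`) is the one whose coding appears in the table above, which is how a coded comment is matched back to its raw model output.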