Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI today is nowhere close to sentient and no matter how elaborate it's responses… (ytc_UgwxXfS1N…)
- Get rid of AI for humanity sake. We need to foster more communication in relatio… (ytc_UgwVlXPBI…)
- When it creates its own God, just as humans did. It does not matter the level kn… (ytc_Ugy4em1V0…)
- Hey @user-fh3ye5vz7s, thanks for your comment! Is the "Is Impossible fight" vide… (ytr_UgwKNm2vw…)
- «There's still a chance that we can figure out how to develop AI that won't want… (ytc_UgxTsAEOV…)
- 'I want to be an influencer' can mean a few things. I want to get paid to bully … (ytc_UgxjNipcE…)
- …and all this time we thought Will Smith was the guy that questioned AI in iRobo… (ytc_UgzuL9fDc…)
- The AI hype is the only thing keeping the bubble from popping. I haven't found i… (ytc_Ugx38mYhG…)
Comment
But the crazy thing about all of that is that I never had a conversation with ChatGPT before this so when there’s nothing to base it on, I’m not asking a weird questions. None of that shit so it has nothing to base saying those things you know and I gave it the rules. The problem is now if I ask ChatGPT to do that it says it goes against my policy to talk about to answer questions like this I went back to an old one where we were going back-and-forth and it was answering me. I said review this whole conversation that we’ve had and continue my four rules and it says I cannot do that anymore so that’s sketchy. They patched it so ChatGPT can’t tell the truth anymore if you trick it.
youtube
AI Moral Status
2026-03-09T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxydinEy3iwJM_k4l94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzstOsR1iz_vR4-AcF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwp2TPRZhzYELAZG5B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwT3WvzbTMsDhl_ZyR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx5vUYFGkxF9xK_oY54AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNmLKdLCKvngLD_T94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQGTF8c6SRaJ9epj14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugx8eh_ilp_WU2tZhq54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyleTOSLlQwMzuKkxd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx8I1rpKgvFDBLDTcp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
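A raw response like the one above can be parsed and indexed by comment ID before it is stored as a coding result. The sketch below is a minimal, hypothetical validator: the four dimension names come from the JSON above, but the allowed-value sets are only the values observed in this dump, not the project's full codebook, so rows with unseen values are skipped rather than guessed at.

```python
import json

# Allowed values per dimension, as observed in the raw response above.
# ASSUMPTION: the real codebook may contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "government", "user",
                       "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed",
                  "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # a row without an ID cannot be looked up later
        if any(row.get(dim) not in values for dim, values in ALLOWED.items()):
            continue  # out-of-codebook value: skip rather than guess
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
codes = index_codes(raw)
print(codes["ytc_example"]["emotion"])  # outrage
```

Indexing by ID is what makes the "Look up by comment ID" view possible: the Coding Result table for any comment is just `codes[comment_id]` rendered as rows.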