Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Well.. Emergence behavior in AI is something that is not programmed. The AI unit…
ytc_UgzjeunJT…
I figured this out a little while ago but under different circumstances. So llms…
ytc_UgxQj4z2h…
Im a novice go player who remembers how AlphaGo ended up beating one of the grea…
ytc_Ugzr5ZJgs…
It's weird that when Alex backs the AI into a corner, it ends its answer with, "…
ytc_Ugw5ohfF_…
3+ million views, 56k comments ... Ai controlled semi-trucks clearly is an issue…
ytc_UgwIziVv0…
Then there couod potential be 2 AI species if the one that is pure tech also get…
rdc_jp5hp7l
@negativezero8174 you being salty over it just proves that ai arts are not low…
ytr_Ugxj31Ly2…
These bastards made AI to make people more sad jobless and more stressed
These …
ytc_UgyCDRW3x…
Comment
if you don't want your chats being used to train chatgpt you can go into settings -> data controls -> improve the model for everyone else -> toggle off
however, if you dont mind your data being used to retrain, i love the idea that if you benefit from chatgpt wrt therapy and trauma, that other people will benefit from that too :)
youtube · AI Moral Status · 2025-07-31T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxeA-nrxgn_etloAx14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTCQj8nhyqlUulFAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxGu_wBlOk4_RwNR0Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz88AVCM753-Lhp9ep4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMYbS3yB2tOdNU0CB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFhjmAN71aZj0I_ZV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzxAD4mcON3G0t7Sl94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyi6o0RckOxzB7OnNR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"sadness"},
  {"id":"ytc_Ugwl7WrmzuusDhEDy0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz2--Tiqy1iDhJQt8F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
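The raw response is a JSON array with one record per coded comment, carrying the same dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch might be parsed and sanity-checked before merging into the dataset — the key names come from the output above, while the `parse_coding_response` helper and its validation rules are assumptions for illustration:

```python
import json

# Dimension keys observed in the coding output above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse one raw LLM response into validated code records.

    Raises ValueError when the payload is not a JSON array of complete
    records, so a malformed batch can be flagged for re-coding instead
    of silently polluting the dataset.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
        # Every dimension should be a non-empty string (e.g. "none", "unclear").
        if not all(isinstance(rec[k], str) and rec[k] for k in REQUIRED_KEYS):
            raise ValueError(f"record {i} has a non-string or empty value")
    return records

# Example with a single (hypothetical) record:
raw = ('[{"id":"ytc_example","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = parse_coding_response(raw)
print(codes[0]["emotion"])  # approval
```

Validating up front keeps the lookup-by-comment-ID view reliable: any record that reaches the table is guaranteed to have all four dimensions populated.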