Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Microsoft now has AI agents in copilot. He put it in there! How is he not sure h…" (ytc_UgwuUuUqv…)
- "so were gonna act like we weren't using search engines and more before ai lol…" (ytc_UgytESfIE…)
- "Navid talks about the stability of artificial intelligence and the potential to …" (ytc_UgwMhwgSl…)
- "Yeah, i don't think ai is gonna save the world, but like the super hostile toxic…" (ytr_UgylBlhyU…)
- "We do not support or promote inappropriate or disrespectful comments. If you hav…" (ytr_Ugy5vNohn…)
- "the reasons why facial recognition software has such issues with recognizing bla…" (ytc_UgwhsXQof…)
- "@B0XAL0TXD clanker is a slur for robots, not humans. I am a traditional artist…" (ytr_UgyCEnW4p…)
- "Ai loses, actually. Feels better, looks better. Ai one looks like a shitty stock…" (ytc_UgxootNQ6…)
Comment
Every GTP conversation is "separate" unless there is specifically manually stored memories, you cant explicitly ask about past conversations or users. Also, the model only changes if you dont pay for plus.
Excellent point about new AI warnings, as with what happens each time something new and not fully tested is released
youtube · AI Harm Incident · 2025-12-24T03:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugw5pWILNOLE8hXZk_B4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyOq5L1cS1c_StVFIx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxae8LcYpCQsK56Lg14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwiBI7X9x1R-CCzfNR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFqGbiTFg3nfEpoNt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwGadUjVZTHvDI3Hzp4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwZJ5IkqZZk61wlp1Z4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxFd3EqhV03sr2wLzR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw2e_qMYE8EhBlNDZx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyBpeTaTTi8XuNGV4x4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
```
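The lookup described at the top of this view (finding a coded comment by its ID in the raw model output) can be sketched in Python. This is a minimal illustration, not the tool's actual implementation: the two records are copied from the response above, and all variable names are assumptions.

```python
import json

# Two records copied from the raw LLM response above, for illustration.
raw_response = """
[
  {"id": "ytc_Ugw5pWILNOLE8hXZk_B4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwGadUjVZTHvDI3Hzp4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"}
]
"""

# Index the coded records by comment ID so any comment's codes can be
# fetched in O(1), as the "Look up by comment ID" feature suggests.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_UgwGadUjVZTHvDI3Hzp4AaABAg"]
print(record["responsibility"])  # company
print(record["policy"])          # industry_self
```

Indexing once into a dict is preferable to scanning the JSON array per lookup when inspecting many comments against the same response.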