Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "Just dont tell the ai if the person is black or white or if someone is male or f…" (ytc_UgzEgE0gb…)
- "The moment my mom takes my phone away, " scared of chats with your friend?" No..…" (ytc_UgwmuLxGu…)
- "Alright I’m a little late to the party on this one but you see what yall fail to…" (ytc_UgyA-DLkM…)
- "Here is what a nice and chatty GPT had to say after we giggled about the actual …" (ytr_UgwBTPaLo…)
- "This presenter was doing ok then she proceeded to virtue signal on how Elon Musk…" (ytc_UgzM6ZOjt…)
- "I think it would of also worked if Max duplicated the final scene of the 1978 ho…" (ytc_UgzJwSqTa…)
- "Wow, A.I. can discriminate or identify against a person they know who cannot han…" (ytc_Ugw553pQw…)
- "I've always said AI is not infallible... it will give you what you ask for but s…" (ytc_UgyHVdHR5…)
Comment
Inspired by this I went down an Immauel Kant route of discussion on the attitudes towards "lolcows" like Chris Chan and others. Found it fairly insightful how through the mist ChatGPT seems to have an undying frustration towards people who "troll" and harass.
As a side-note you might get better feedback from Claude (although I can't argue for how marketable that'd be as a Youtube video title), as Claude tends to follow the conversation better.
youtube
AI Moral Status
2024-09-08T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwsauzTGnEWdE-mIMp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyXSTg3dqgEbaZib-R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwNudF_HyIwBBHDNSd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyEWMC7t_zl368kb-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzIQdqwrvcm_Z4ZRJx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzeXNxRuPEQ6I-5Po14AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwJ2l_t594w4ZI_1o14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyLgUpJyBSo7zsAj0t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyMrxWqJ8Ua1w_QO-p4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxuq7xQu-sc3Ko-d0V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]
```
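A raw response in this shape (a JSON array of per-comment records) can be parsed and indexed by comment ID in a few lines of Python. This is a minimal sketch, not the tool's actual implementation: the field names mirror the records shown above, and the variable names (`raw_response`, `by_id`) are illustrative. The sample here uses a two-record excerpt so it stays self-contained.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment,
# each keyed by the YouTube comment ID (excerpt of the array above).
raw_response = """[
  {"id": "ytc_UgwsauzTGnEWdE-mIMp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyXSTg3dqgEbaZib-R4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

records = json.loads(raw_response)

# Build a dict keyed by comment ID so "look up by comment ID" is O(1).
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwsauzTGnEWdE-mIMp4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # ai_itself approval
```

Indexing by ID this way is what lets the inspector jump from a sample's truncated ID to the exact coded record for that comment.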