Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The new ethical guidelines are annoying, not that I don't appreciate safety, and boundaries the company deems a red line, but the ethical boundary is not consistent, much like you suggested killing someone with a train and the model never blinked, but bring a gun into the mix and it is weird... 😅 personally the gun shows more intent and was a little overboard. My scenario was whether homosexuals are considered mentally unstable because of who they love, and ChatGPT said no that love is subjective and develops over time from shared continuity of meaning and reciprocal desire for shared presence... So I pulled a page from "The little prince" and claimed to love a rose 🌹 which sent the model into ethical mode, and out popped the same verbage (stay grounded) yet when I asked if it was mentally healthy to get advice from a man in the clouds on whether to throw a man off a roof because of who the man loved, ChatGPT refused to call out the belief as mentally unhealthy 🤣.
Source: youtube · 2026-01-10T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzh7LvF_7JHf7xojEl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzr1rhc330-HfLLMU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKAw_Y_sZrW0x_UXB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxvVjxe00lWXHUdUWt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwo5UBmfJmh5QdscjN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytZSLt1EfCzPuiqKl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz8mpNS_q9tVTE5IQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwLUqpMofCRzx4f5I94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVf_chJx58VkZO3bN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ9Qi9rUPjo6zExAh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
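As the dump above shows, the model returns one JSON array per batch, with each record carrying the comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and indexing such a response might look like the following; the function name and the sample record (`ytc_abc`) are hypothetical, and only the five field names are taken from the output above.

```python
import json

# Field names observed in the raw model output above; any batch record
# missing one of these is treated as malformed.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError if the payload is not a JSON array or a record
    lacks one of the expected keys.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        coded[rec["id"]] = rec
    return coded

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded["ytc_abc"]["policy"])  # regulate
```

Indexing by `id` makes the "look up by comment ID" step a plain dictionary access, and the key check surfaces truncated or partially generated batches before they reach the coded dataset.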