Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The way I see it is that if everyone has their own AI assistants -- more people …
ytc_Ugylpb8au…
Well the idea is that things will cost less. So hypothetically you needed $17 to…
ytc_Ugzr3Cwyt…
I’ve given up on Grok. I find it to be worse than any other major LLM for halluc…
rdc_o819i9k
Making a post online that open ai used copywritten material is not whistleblowin…
rdc_m3a8ejv
A LOT of innocent people gonna be put away for a long time due to deep fakes and…
ytc_UgyKyAajY…
Great question! The design of Sophia as a robot often sparks curiosity. Her appe…
ytr_UgxNX6K4a…
As an ARMATURE writer I need to explain any narrative text can easily be ousted …
ytc_UgxA852nQ…
I wonder if it'll work at all, since most porn is kind of terrible and it'll nee…
rdc_ks578v8
Comment
The Skynet strawman arguments and moral grandstanding here is genuinely crazy and you guys should be exhausted by the amount of mental gymnastics you went through in this video😂
Actually, the total removal of a discussion around such things as mental health and other sensitive topics with an LLM via implementation of hard guardrail filters would actually do more harm than good, we’re seeing it now with how censored the GPT-5 models were programmed to be. There are people who rely on LLMs as an outlet for dealing with real-world issues for a myriad of reasons; whether it’s due to a lack of mobility from a disability, financial gatekeeping, or social anxiety. Instead would should happen is, OpenAI should develop a model to understand emotional intelligence and nuance without removing guardrails entirely, allow you to freely discuss and express yourself, and point you into the right direction if you are having problems. It’s not about porn or wanting to collude with an LLM to commit real world harm, you’re making it about that to push a false narrative.
youtube
2025-12-11T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxaYXVwoaFit8pRogF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwKgVTWT4csL3qvB9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyyKg3_d7oKQwZ-zop4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytdkvwhiiHqxiz2yt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7DIQXyw7M-Jjy_qh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwwpxfeJNGnemKkF5l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy5Y_9GPvnK_dUQtB14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzPBlKxcY1ssaYSSCt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugyl5lqXxdVkyETVqxR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyrrw5_TYrkI4saorB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
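A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the allowed value sets inferred from the sample records and the Coding Result table shown here (the real codebook may define additional values); the record contents are illustrative, not taken from the data above.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# Assumption: the actual codebook may include values not seen in this sample.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "liability", "industry_self", "regulate"},
    "emotion": {"outrage", "indifference", "fear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every record must carry a comment ID and one allowed code
        # for each of the four dimensions.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical record, structured like the batch above
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1
```

Records with a missing dimension or an out-of-codebook value are silently dropped here; in practice you would likely log them for re-coding instead.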