Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples:

- "I'm a dog trainer so I'm safe 😂 but if nobody has a job how will I get paid 😂 ff…" (ytc_UgzSRpQEG…)
- ">It’s amazing to see Zuck and Elon struggle to recruit the most talented AI r…" (rdc_mz5ditw)
- "For now ai is taking the jobs then... they'll go back to people because humans…" (ytc_UgyTzwcjR…)
- "They didn't know AI wouldn't replace IT professionals because management couldn'…" (ytc_Ugw9QpMWy…)
- "If McDonalds were to automate something and show it off, what would it most like…" (rdc_nmabvuv)
- "Notice there doesn't appear to be any \"diversity\" students spoiling the learnin…" (ytc_Ugw6f9kc_…)
- "You are overestimating how many people share the sentiment of this sub. I see po…" (rdc_o783021)
- "Wow we are on the wrong path. We are now living in a world were we are willing …" (ytc_UgyYprC1z…)
Comment
Among other things, it seems chatbots are still WAY too sycophantic: they apparently still feed into, and amplify, whatever mental road people select, regardless of the consequences. It looks like the ai companies and university groups are still struggling with these personal safety issues.
[I guess there is some utility in having those annoying people who always disagree… Maybe these people help keep society grounded (in their own way).]
youtube · AI Harm Incident · 2025-11-09T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugzl8XQEkxe4eTwCE5x4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugx0hb0MDGmwb6LSRNx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg-j5LS4-lQGFXROl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyZiibi9QPe_8BK9Sl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyA_582QeBIbXUHVo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxzZBMVJJMWXQku5M54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyV-igDJ6TBZkyBF3J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgysunZ_Oujg8J5_ltV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyMVoetB3euE9BULfd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyakgsstTrHrvGIpwd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
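As a sketch of how a raw response in this shape might be consumed, the snippet below parses the JSON array and indexes the codes by comment ID, so a single comment's coding (the four dimensions shown in the table above) can be looked up directly. The function name `codes_by_comment_id` is illustrative, not part of the actual tooling; the two entries are copied from the response above.

```python
import json

# A raw model response in the format shown above: a JSON array of
# per-comment codes. Truncated to two entries here for brevity.
raw_response = """
[
  {"id":"ytc_UgyV-igDJ6TBZkyBF3J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyakgsstTrHrvGIpwd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
"""

def codes_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index each coding entry by its comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

codes = codes_by_comment_id(raw_response)
# Look up the coding for one comment, mirroring the dimension table above.
print(codes["ytc_UgyV-igDJ6TBZkyBF3J4AaABAg"]["responsibility"])  # company
```

The same index can back the "look up by comment ID" view: one parse per stored response, then constant-time lookups per comment.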