Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Karen Hao compassionate and intelligent argument goes straight to the core of th…" (`ytc_Ugx0MFhjA…`)
- "Just because a bias in the algorithm isn't currently illegal doesn't mean it isn…" (`ytc_UgyNzHHzY…`)
- "10/20 years ago I would have been on the side of the writers. Today they insist …" (`ytc_Ugwgtg7dx…`)
- "Superintelligence with boomer ethics is more scary to me than superintelligence …" (`ytc_Ugya8-fKU…`)
- "That's what UFOs are they are not aliens they are time travellers AIs who have t…" (`ytc_Ugz4QutQH…`)
- "If the books were in fact the product of an AI black box such as some LLM then t…" (`rdc_lz9e5i8`)
- "AI is similar to CAD. There still will be some creative and analytical jobs...bu…" (`ytc_UgwVKwhLl…`)
- "idk what kind of AI you use, but chatgpt still makes hella mistakes, pretty stup…" (`ytc_UgxfOfbtD…`)
Comment

> I was listening to NPR and they were talking about how a teen committed suicide because the AI chat they were using told them to when they asked for help with their suicidal ideation. They said the issue is that the AI is not programmed to understand or help with these complex situations. Yeah so if anything, a better alternative is Reddit. Make an anonymous account and find a community that fits your issues. Reddit isn’t therapy but it’s the closest comparison I can make to why we use AI for advice. Therapy or hotlines are always better.

youtube · AI Moral Status · 2025-06-03T22:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzjV9CxjLS3r09Yvql4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSSDeLZaxIGtD0PCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzyogm5Wf3r5-whik94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyjkZ6ZOqa-adRC61F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyBZjgr0KjRvw-vu5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJxTqZYP0L8Hiw0Nd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynOpycslJfOQi6PNB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyRsmzqt_zHIxOQFb94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwfDK15PJDlNODx1CZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyffwSht2sulikfYP94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
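A batch response like this can be parsed into a per-comment lookup with a short sketch. Note the allowed values per dimension below are only what appears in the examples on this page, not the project's full codebook, and `parse_batch` is a hypothetical helper name, not part of the tool.

```python
import json

# Dimension values inferred from the examples on this page
# (assumption: the real codebook may define more values).
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "ban", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response (JSON array) into {comment_id: codes}."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in SCHEMA}
        # Flag values outside the known codebook rather than silently dropping them.
        bad = {d: v for d, v in codes.items() if v not in SCHEMA[d]}
        if bad:
            print(f"warning: {row['id']} has unexpected values: {bad}")
        coded[row["id"]] = codes
    return coded

raw = '[{"id":"ytc_abc","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
coded = parse_batch(raw)
print(coded["ytc_abc"]["policy"])  # liability
```

Keying on the `id` field is what makes the "look up by comment ID" view possible: each coded comment maps directly back to the raw array element it came from.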