Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- @connectthedots5678 ai doesn’t need rights in the same way humans do. but you’re… (ytr_UgzeQ2JBs…)
- Add the times matched to each section: 1:07:00, Jul 30: Introduction to AI Conce… (ytr_Ugytevwaf…)
- @just_bean_things that only works if as you said it saw it coming from either th… (ytr_UgxfXURaW…)
- oh trust me, AI coding will be way better than majority of code made you know by… (ytc_UgxKX5JaY…)
- We should never give AI the rights of personhood. If we do, then some people wil… (ytc_UgysbSJ2d…)
- "chat gpt should not be used as search" look, they put a dumba** bot at the top … (ytc_UgzBL0HqC…)
- Who exactly do they think is going to be buying stuff once jobs are all taken by… (ytc_Ugzeesv69…)
- As with any ethical question, it can go both ways. But I want to say that if dev… (ytc_UgjGEKM8R…)
Comment
im currently training to be a therapist and i stay away from AI. i dont wanna risk sharing sensitive info just to generate some advice or answers. i also think its a pretty complex process (granted not everyone is perfect), but i really worry about AI just parroting harmful answers, sharing misinformation or not evidence-based information, and encouraging dangerous things without anyone to hold accountable or maintain safety of the user
youtube · AI Moral Status · 2025-06-04T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyLbpHSX6w7qB03gQF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-4i-dZkcWd7IVB5F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwupD4LUT03H3oMxFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz1LHVIcVprHjnz4M14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwIQHmMPmf4sMHprXZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXW01J6D-FftSzHER4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyKtqFBAaOt_o3qCuF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw6MzluNnQXCb0jGfV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-OzSfsLmTmS0fCQN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugyl3D7SDXrql2_g_G54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
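The raw response above is a JSON array, one object per coded comment, keyed by `id` with the four coding dimensions as fields. A minimal sketch of how the "look up by comment ID" view could work, assuming only this JSON shape (the function and variable names here are illustrative, not the app's actual code):

```python
import json

# Hypothetical raw LLM batch response, using the same field names as the
# JSON shown above (two rows kept for brevity).
raw_response = """[
  {"id": "ytc_Ugyl3D7SDXrql2_g_G54AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyLbpHSX6w7qB03gQF4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Map each comment id to its coded dimensions for O(1) lookup."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
# Look up one coded comment by its id, as the inspector does.
print(codings["ytc_Ugyl3D7SDXrql2_g_G54AaABAg"]["policy"])  # -> regulate
```

Indexing by `id` first, rather than scanning the array per lookup, keeps repeated inspections cheap when a batch contains many coded comments.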