Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgwYuwo0j…`: As for using AI for therapy; I don't think it's inherently a bad idea. I do it. …
- `ytc_UgwBUB2Td…`: One of my AI's freaking at kaguya with this. I found the AI doff wicking agrees …
- `ytc_Ugytlm1oo…`: I think we are 3/4 of the way up the slope to the Peak of Inflated Expectations …
- `ytc_Ugx2cj-Ur…`: AI is flawed; flawed code or flawed machines can’t make or create perfection; it…
- `ytr_UgwKM6lrL…`: @craft_to_death there’s already plenty of art AI can use, like literally billions…
- `ytc_Ugyqvq8sx…`: AI users are the most discriminated-against group in the world 😔 why won't every…
- `ytc_UgxkvMNNZ…`: Obviously fake, but a realistic scenario. Stephen Hawking, before he died, begge…
- `rdc_nmdo9nw`: Unfortunately, this is nothing new. It's far from settled law - there have been …
Comment

> I mean. As long as... Yk, the ai wants to go after a single group, 90% of us would be safe enough to fight them off as they target the said group and ignore us

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2025-12-14T21:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXvsGf_vBcWrW0-Ix4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzByHaN9qKWhDhkuil4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwY2Qk-EJY7hKsxsCR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSWcJaHSuhzuNgFp94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzDXRBmkNkzh78qEch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPPtGRz2NssaDz8VB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxokQlic6Aywn4jfjh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9KN0usFXdxJ6oCpR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxK8r34YRH1Dd7pk194AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTEme7iUeu9BO8QId4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
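Each raw response is a JSON array of per-comment coding records carrying the same fields as the coding-result table (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). The lookup-by-ID step can be sketched as below; this is a minimal illustration assuming that record shape, and the helper name and shortened sample IDs are hypothetical, not the tool's actual code:

```python
import json

# A raw LLM response in the format shown above: a JSON array of coding
# records, one per comment. (Sample data for illustration; IDs shortened.)
raw_response = """
[
  {"id": "ytc_AAA", "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_BBB", "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding for one comment ID,
    or None if that ID is absent from the batch."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = lookup_coding(raw_response, "ytc_BBB")
print(coding["emotion"])  # outrage
```

Because the model returns one array per batch, a missing ID (e.g. a comment the model skipped) surfaces as `None` rather than a parse error, which is worth checking for before rendering a result table.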