Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment preview | ID |
|---|---|
| Would like to see a program that considers the possibility of enhancing humans w… | ytc_UgzNJ-dpy… |
| https://preview.redd.it/z4kfevq0r5xg1.jpeg?width=1402&format=pjpg&auto=w… | rdc_oi14z6p |
| Something that would be really cool is if these cars could be fully autonomous o… | ytc_UgwS14D-d… |
| I'm all for AI taking jobs and allowing us to lead happier lives with far less w… | ytc_Ugzv9A-Kr… |
| Ty for not actually making an AI image instead of falling for using their BS to … | ytc_Ugzdk9zXS… |
| "I consume the PRODUCT, not the METHOD" XQC's streams are a PRODUCT that his vie… | ytc_Ugw-P-chG… |
| Hey! Disabled person here. Not as much physically anymore. But I did have a mass… | ytc_Ugzq_eZuk… |
| I dont think this will happen, people wont stop using chatgpt or claude all of a… | rdc_ohvneip |
Comment

> IMO, this is not impossible. There are models that probably could be fined tuned for therapy, and they're small enough to run on a computer. Privacy is solved? IDK if small LLMs are good enough for therapy after supervised fine-tuning but is this an alternative?

| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Timestamp | 2026-03-03T01:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxXNacS3uTZK90x23t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwVAt983N6sqm_wShB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwWForz11C0k6P9mm14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
  {"id":"ytc_Ugz00HhWsHNVndumV-x4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnGd1nLfQWVawnsnx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwqC_ruodG9aHQIlwd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz9R2meI3UPTMkqzxR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw96naGAUmZyuwAgo14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxuNnf2BfjErA0fz4d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZNdti22P-dhcVY3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
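A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, assuming the allowed labels per dimension are the ones that appear on this page (the real coding scheme may define more); the function name and structure are illustrative, not the pipeline's actual code.

```python
import json

# Allowed labels per dimension, inferred from values seen on this page
# (assumption: the full coding scheme may include additional labels).
ALLOWED = {
    "responsibility": {"company", "user", "developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "resignation", "disapproval", "approval",
                "indifference", "mixed", "fear", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID,
    raising if a dimension or label falls outside the expected scheme."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")
        for dim, label in row.items():
            if dim not in ALLOWED:
                raise ValueError(f"{cid}: unknown dimension {dim!r}")
            if label not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected label {label!r} for {dim}")
        coded[cid] = row  # remaining keys are the four coded dimensions
    return coded

raw = ('[{"id":"ytc_UgxXNacS3uTZK90x23t4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgxXNacS3uTZK90x23t4AaABAg"]["policy"])  # regulate
```

Failing loudly on an out-of-scheme label is useful here because LLM coders occasionally emit labels that were never in the prompt; rejecting the whole batch makes those drifts visible instead of silently polluting the counts.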