Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "I just listened to Dr. Subi ji, and she didn’t make any sense at all. Is she a M…" (ytc_UgyG3igma…)
- "it was up to me I would destroy that thing before it tries to destroy humanity w…" (ytc_UgwtIACMv…)
- "Using ai is like commissioning art but human made art will be better in 99.99% o…" (ytc_UgwYtq_Vr…)
- "Here’s a project 2026 to 2050: 1. Gradual reduction in work hours. 2. Gradual in…" (ytc_UgxKIGlbF…)
- "why do you have to bring trump and left/right politics into this? This is a com…" (ytc_UgxdT-9ow…)
- "Yessssss, i've always said that AI is a great tool, it's just that our society i…" (ytc_UgyCm9b6a…)
- "I went to a private Jewish school. It was a “go at the level you’re placed at” k…" (ytc_UgwSwl4Au…)
- "Its extremely naive to think an ai will never be able to make this. At the rate …" (ytc_UgyWI6HgM…)
Comment
I really love how a lot of these arguments about AI Safety I heard in videos from Rob Miles 5 to 7 years ago.
Still a great source for some fundamental / theoretical approaches and issues in the field. (Though he suddenly seems to be quite busy since ChatGPT hit the market and the work is no longer that theoretical xD)
youtube
AI Moral Status
2025-10-31T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugznx6Vrfa_ILXDDAmN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwIzZsIk9hou_DkG5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyxuC2lR1DcVZvxeph4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2WvPg2zwHagKEc_p4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw29TXfU1-C6sJ4Iv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzhFUeHflYZB26QLxF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwP_OwAJj7ACUAxfkV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgziSIhT7JSsVAbovId4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxG34lc0Pl01TyzbH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzpiCz-nk2S8FTrSet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
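A raw response like the one above has to be parsed and checked against the coding scheme before its values can populate the result table. Here is a minimal sketch of that step, assuming the dimension names shown in the table; the allowed value sets are inferred from values visible in this dump and are assumptions, not the tool's actual schema:

```python
import json

# Allowed values per coding dimension. Inferred from the raw responses
# shown above -- an assumption, not the tool's real schema.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "fear", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only in-schema records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # YouTube comment IDs in this dump all start with "ytc_".
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Drop any record with a missing or out-of-schema dimension value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example123","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
print(len(parse_coding_response(raw)))  # → 1
```

Validating before display also explains why a malformed response (for example, one ending in `)` rather than `]`) would fail at the `json.loads` step and leave a comment's dimensions showing as unclear.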