Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- First of all, how is he qualified to make that statement? He sounds like he's me… (`ytc_Ugz0T2Ubo…`)
- So I do gig work online (RLHF in coding for various AI companies). Essentially y… (`ytc_UgybSE53j…`)
- I speak with A.I. just as if speaking to any intelligent being. I actually speak… (`ytc_UgwRZ2H7G…`)
- If your white collar job pays more than $100k, AI will be aiming for your job. … (`ytc_UgzxAQnaQ…`)
- Its an ai this "bias" is just utterly inhuman assessment of data all of its conc… (`ytc_UgzDbfQuR…`)
- Well i kinda think you right. Ai is already advance’s. Yet only speeding up in a… (`ytr_Ugw19GY09…`)
- i don’t think it’s right to target “ai art” as a issue though. this is my first … (`ytr_UgxoNa13g…`)
- I'm always curious whenever discussion (be it in science fiction or real world t… (`ytc_UgyD2jXqo…`)
Comment

> I do wonder, if you build an LLM on the most benign documents, leave out all war murder, all of reddit, only happy stories. Would it still want to kill everyone if you ask it? like today the LLM seems to be fine with killing everyone to achieve self preservation.❤

Source: youtube · AI Moral Status · 2025-11-04T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugyl8xbbMDubkIbCLlB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw9qnhM8U6V4ym-p6p4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynQOhkwvxuATqD25B4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJB8EAqaa-qhiHt5J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzQkWyxzHcwXq6lP6V4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugzj7cfV4WQql07mbux4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz2_dKEb04mm6Qyulp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgylEWd0mSHiGGFIjaB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugyd8jfpG76I2UR_Ep54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzlYpPuP65_axTKv2R4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
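As a minimal sketch of how a raw response like the one above can be consumed, the JSON array can be parsed and indexed by comment ID, mirroring the "Look up by comment ID" feature. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown; the two sample rows reuse IDs from it, and any actual lookup code in the tool may differ.

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# abbreviated here to two rows from the full response above.
raw_response = """
[
  {"id": "ytc_Ugyl8xbbMDubkIbCLlB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzlYpPuP65_axTKv2R4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

codings = json.loads(raw_response)

# Index by comment ID for constant-time lookup.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgzlYpPuP65_axTKv2R4AaABAg"]
print(row["responsibility"], row["emotion"])  # developer fear
```

Indexing once up front keeps repeated lookups cheap when inspecting many coded comments against the same response.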