Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "i can't understand what to do like i felt chills down my spine hearing this but …" (ytc_UgzVoN1PS…)
- "You’re wrong SMR. It’s very clear Musk has focused more on autonomous driving AI…" (ytc_Ugx63Ew_a…)
- "10:40 artificial intelligence is subject to failure just as much as humans maybe…" (ytc_Ugx62KscE…)
- "Those are not a contradiction, knowledge and pattern recognition are not the sam…" (rdc_mzy0rs6)
- "Yeah… about 3 years too late 😂 Tesla already had this and they never had to rely…" (ytc_Ugw5VcOn_…)
- "He makes it sound like AI is about to reach the level of intelligent life when w…" (ytc_UgzyoIafH…)
- "Story- AI realizes we aren't good for each other or the environment and shuts us…" (ytc_Ugx6RqEiC…)
- "So that's what they are doing with the AI, if you manage to actually talk with i…" (rdc_n0mh93v)
Comment
A lot of bad AI behaviour has to do with training data from bad human behaviour. That's why you have to supervise all training data, which nobody wants to do, because it is too expensive, time consuming, and impractical, making their development less competitive
Source: youtube · AI Moral Status · 2025-12-16T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
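Each coded record carries four categorical dimensions. A minimal validation sketch for such a record follows; note that the allowed label sets below are inferred only from the values visible on this page, and the real codebook may define more:

```python
# Label sets inferred from the sample responses shown on this page;
# the actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "ban"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty list = valid)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
coded = {"responsibility": "developer", "reasoning": "consequentialist",
         "policy": "liability", "emotion": "mixed"}
print(validate(coded))  # → [] (all four dimensions carry known labels)
```

A check like this is useful because the raw model output is free-form JSON: a single misspelled or invented label would otherwise slip silently into the coded dataset.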
Raw LLM Response
```json
[{"id":"ytc_Ugwe8SeMOU0SFcby49p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzB45xugpNDG0Rexn94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdjrbkRm4n9QlhuLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzNFHMeR8VA0AZNIS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwdbizx8U4RC2lEA5J4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzwF1UKeQ2X9Mq_d_t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwyT013V4Be3OifIL94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1B_-QphtgUrlrU394AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwmYUYCDVjI5KSK_Vl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-Xe9R_F7y80-WWm54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}]
```
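The raw response is a JSON array with one coding record per comment, so looking a code up by comment ID reduces to parsing the array and building an index. A minimal sketch, assuming the response text is available as a string (the helper name is hypothetical, and the array is truncated to two of the records above for brevity):

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
# Truncated to two records from the response above for illustration.
raw_response = '''[
 {"id": "ytc_UgzwF1UKeQ2X9Mq_d_t4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
 {"id": "ytc_UgzNFHMeR8VA0AZNIS14AaABAg", "responsibility": "company",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index the coding records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzwF1UKeQ2X9Mq_d_t4AaABAg"]["policy"])  # → liability
```

In practice the model output may also need guarding (e.g. a `try`/`except json.JSONDecodeError`), since nothing forces an LLM to emit well-formed JSON on every call.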