Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- If ai invades the world, you need water for drinking, also a weapon of robot's w… (ytc_UgxFD8DnK…)
- AI to regulate, enforce and crackdown on harmful AI needs to be used. Then scale… (ytc_UgyRuXwu6…)
- This is insanity. Why did the ai kill humans? And why did it care? You apply a r… (ytc_UgzkKG0Ht…)
- TECHNICALLY the AI is not trying to do anything it has no wants or desires it do… (ytc_UgwPyjxAo…)
- Yes, "AI" is just a digital application for automated intellectual theft. But il… (ytc_UgwxzGzSs…)
- Because they're having a hard time comforming, like having been fired by a greed… (ytc_UgwFaRIeR…)
- @haresonaga better analogy: if a full course meal (that's realistic) was made by… (ytr_UgzvNXwoD…)
- I can't take the greed anymore. I hope they all rot in hell or become a seagull … (rdc_d0f88v3)
Comment
I feel like he should specify what he’s specifically referring to when he says AI. Like is he speaking about theoretical technology? Because I’m about halfway through the video and it sounds like he’s *just* talking about LLMs.
And I think he sounds silly to say that “AIs have an understanding of humor” and then following that he explains how an LLM works, which should tell him that AIs don’t have an understanding of humor, they have a bunch of data that includes people telling jokes, and it uses that data to predict how a joke might be put together.
And- There’s *huge* ethical and safety concerns we should be worrying about and discussing when it comes to AI, but dwelling on LLMs understanding humor seems- silly… and maybe makes all concerns about AI technology seem silly- and then maybe policy making and regulation efforts might also seems silly, because people think it’s about chatGPT telling jokes and that’s easy to hand wave away as alarmist, or people fear mongering.
youtube · AI Moral Status · 2025-10-31T14:0… · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzA7vlrd087013Y1g14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmWwNHvU02M92Ermt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx8MvlrcMUm9hXN7gd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIEoAWfjGSS2lgs994AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0j09RLZBTv38Euwd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2ZFiSByaLGFw6TbN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyrrY5B1-Vdt8F-sfV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsyM78VoON9jVXCmV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxtjMfkynSd-a6jhNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6vsazHpUQ6oK0OLp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
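The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such output could be validated before ingestion; the allowed label sets below are inferred only from the samples on this page, so the real codebook may include more categories:

```python
import json

# Allowed values per dimension, inferred from the sample records above.
# Assumption: the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose id is
    present and whose labels fall inside the allowed sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical usage with a one-record response:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear",'
       '"emotion":"indifference"}]')
print(len(validate_codings(raw)))  # 1
```

Records that fail validation are dropped rather than repaired here; a production pipeline might instead flag them for re-coding.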