Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I don't agree with his position but I thought his initial presentation was very …" (ytr_UgxqeZDCs…)
- "LLMs are trained on Text from the Internet. It Basically ‘learned’ from humans a…" (ytr_UgyFE3-0N…)
- "If people lose their jobs and they have no money, how are they going to buy what…" (ytc_UgyBwqkpN…)
- "No billion programmers. And actually TODAY there is no programmer without AI. Wh…" (ytc_UgxLg0T-E…)
- "Bro what is sora ai it keeps poping up on my yt page please explane 🙏🙏…" (ytc_UgzO4po91…)
- "36:16 it’s the show the character is from game of thrones.. 😢 this is SO sad… yo…" (ytc_UgwtWf_fC…)
- "I think you shouldn’t play around with AI. But if you’re going to ask you to ans…" (ytc_UgzrSXrNe…)
- "By the way i think the audio is auto translated in many languages. Thats an AI …" (ytc_UgyyIoezl…)
Comment
I mean, that setting is the first thing I disable when using a new AI tool. If they don't let me disable it, I won't use it.
But I always thought it was required by law or something because I have not found one yet that did NOT give me the ability to opt out of having your data used in their training.
For those curious, in ChatGPT. Go to Settings, then Data Controls then switch off "Improve the model for everyone"
Source: youtube | Video: AI Moral Status | Posted: 2025-06-05T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjBGA8ZBnAImQQby54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxvi9WNvBIfF4l56eF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUsbPCgiMEkkTaxCZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzAU5Zk5K7WP35eq3x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzz5GH9UQqh8wkhUf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvWYbR2BTac9ScNYJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz380moQ0FStdGk3ph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx004pTrlCug9WM06N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgwtJ7YExSZkqGmWXrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
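A raw response like the one above has to be parsed and checked before the codings land in the table. As a minimal sketch, the snippet below parses the JSON array and rejects rows whose values fall outside the codebook. The allowed value sets are inferred from the labels visible in this dump; the real codebook (and the `validate_codings` helper) are assumptions, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the table and raw
# response shown above -- the actual codebook may differ (assumption).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "disapproval", "outrage", "fear",
                "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-codebook rows."""
    rows = json.loads(raw)
    for row in rows:
        # Comment IDs in this dataset start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row['id']}: {dim}={row.get(dim)!r} not in codebook")
    return rows
```

Rows that pass can then be inserted with their `Coded at` timestamp; a row with an unknown label fails loudly instead of silently polluting the coded dataset.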