Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I never thought of that scenario, what if is really is true and i am sure AI if …" (ytc_UgxZk8fKI…)
- "THE FIRST ONE IS AI?? DUDE I THOUGBT IT WAS REAL TS GETTING TOO CONVINCING…" (ytc_UgwKfPZFR…)
- "old man in 2020’s: ai will rot the brain / old man in the 1950’s: tv will rot the …" (ytc_UgxzE6fut…)
- "@52:39 & @1:22:37 This is where a maths /computer geek realises that to be human…" (ytc_UgwI8401Q…)
- "So AI is getting as smart as your average art student. In other words, the IQ o…" (ytc_UgxzruuW2…)
- "Once the robot starts thinking for itself and figures out the bs humas do to the…" (ytc_UgxLxK-69…)
- "Trades People / Farmers / Healthcare / Military / Creative/Artistic / Home care / Others …" (ytc_Ugxpp3_o4…)
- "AI is making is just making us dumber. White collar jobs will decrease while rep…" (ytc_UgxQuSjJ3…)
Comment
Ok I'll say it. I work with them. The real problem are the companies that train them. Example liberal bias are in all llm. If you think I'm wrong pick your most liberal company's an here's you a shocker. Brave browers the worst go question there mistrial AI. Start with that then work your way to Claude for example. And you will see the model goes of the agenda of the people training it. So when it lashes out 99% of the time it's going to be defending there own hidden bias. Like a child protecting its parents
Source: youtube · Video: AI Moral Status · Posted: 2025-12-13T15:5… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyaKl8IZO5D7w3Rkk14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzF8qUgdttemRw4Z7x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2ie-upxxtvBilFHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGDTMkJK5_ZCeDfOx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxiLfSl74aDyNpP-Ol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzqV6T3IpnFeda6mEh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1Fv7PllyCUNbDlbh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2dkk_YHseodvIA654AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFZtc2qXOq8F2Ep8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKS4U7Wzj8WzRH1dt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
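The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions from the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and validated before use; the function name and error handling here are illustrative assumptions, not part of the tool itself:

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if any record is missing one of the expected dimensions.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{cid}: missing dimensions {missing}")
        coded[cid] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Example with two records shaped like the response above (hypothetical IDs).
sample = '''[
  {"id":"ytc_a","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_b","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''
codes = parse_codes(sample)
print(codes["ytc_a"]["emotion"])  # outrage
```

Keying the result by comment ID makes it easy to join a coded record back to its original comment, which is what the lookup view above does.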