Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytr_UgzR6HG42…`: I have worked in customer service and tech support honestly it's not just speaki…
- `ytc_UgzxtbAdi…`: A judge of an oil painting competition said, "I'd rather vote for a pretty pictu…
- `ytc_UgwIt7uFQ…`: I feel like at some point we'll just have to give AI rights as a distinct being.…
- `ytc_Ugw8xgp7p…`: "if this robot rolls towards you, you have no chance of escape" Me: oh really *…
- `ytc_UgzyRd1i3…`: I love Peter Tiel’s philosophy for business. You should always strive to be n of…
- `ytc_UgyBK_vlF…`: ⚠⚠⚠There are a thousand videos saying everything THIS VIDEO says.⚠Please!!!⚠Use …
- `ytr_Ugxrejwji…`: @mayukhawasthi8156 I mean, exactly. LLMs hit the ceiling already while they are …
- `rdc_f9dqknq`: Though this article is about something a bit different than levels of developmen…
Comment
“Alignment” is, in this context, a word with no meaning! In his book, The Precipice, a Cambridge scholar named Toby Ord argues that AI must share “the perspective of humanity.” What an astonishingly vacuous claim, especially for something that’s supposed to be a “scholar”! There is no “perspective of humanity!” Humans are notoriously ignorant, conflicted, and inconsistent! Even if the goal were to match a program to the “values” of just five programmers, the problem would be intractable! One programmer, a lady 28 years old, is pregnant. Another totally disapproves of having children. A third is Catholic and already has 12 children. Another has had a vasectomy. Ergo, in just this one realm, this one question out of a million, there’s no common perspective! How could Ord have been so idiotic!?
youtube · AI Governance · 2025-08-02T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
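Downstream, each coding result is just a small categorical record. A minimal validation sketch in Python, with the per-dimension vocabularies inferred from the samples on this page (the real pipeline's schema may allow other values; `validate` is a hypothetical helper, not the pipeline's API):

```python
# Allowed values per dimension, inferred from the samples on this page;
# the actual pipeline schema may differ.
VOCAB = {
    "responsibility": {"none", "developer", "company", "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "approval", "resignation", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose values fall outside the known vocabulary."""
    return [dim for dim, allowed in VOCAB.items() if record.get(dim) not in allowed]

# The coding result shown in the table above.
print(validate({"responsibility": "none", "reasoning": "deontological",
                "policy": "none", "emotion": "outrage"}))  # -> []
```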
Raw LLM Response
[{"id":"ytc_UgxodujcKDHYurgOObl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtEmWvnoFeozPrRfh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugypn6YQmYkkLS68QpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyPMd1i5wly8_jL3qZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgwMYHOyd2WgZnTzFu14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw4p6zxPlyJJI-3Shh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwb7kZe1diJLieW2sd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwueHHRLE9HjCX92HR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyogtGfKYtjSBPPeyt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzkdLREyUSY0swiN-V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]