Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- “Ai bros are like random people who want to copy your project in a subject you’re…” (`ytc_Ugx3jCper…`)
- “At least now I can edit in her crying face into the ai porn now to make me bust …” (`ytc_UgwXwS6Yg…`)
- “Imagine if we had just chosen the UBI path...AI would have been paying all of ou…” (`ytc_Ugzd1Cfl7…`)
- “Similar problem with the AppleTV app for PC. I can no longer play my downloaded …” (`ytc_UgyS49QCW…`)
- “He talks about Super AI and long lives but wants to invest into Bitcoin. . . Lo…” (`ytc_Ugx4GIDZY…`)
- “So glad to find someone else that agrees with my "poison the well" philosophy wh…” (`ytc_UgyEOaRkm…`)
- “I have no issues having a sentient robot race. As long as we can co-exist. But m…” (`ytc_Ugg4EttFw…`)
- “These scenarios are extreme solutions why not just apply breaks gradually and sl…” (`ytc_Ugw2KRyps…`)
Comment
If I write an essay can I get a couple of billion dollars in funding?
Amodei and Sam Altman are two sides of the same coin. One is selling Utopia and the freedom from work, the other is warning us that the “powerful” technology he has created will have catastrophic effects on the economy and humankind. Make no mistakes, they both want the same thing. Investment. And lots of it. These tech CEOs are then parroted by mindless media who get some views, clicks or even kick backs from these companies.
Amodei must be building a different type of transformer model because people in the AI space know these will never lead to their venture funded fantasy of AGI.
Read some real, unbiased papers rather than just guzzling down these grifter’s snakeoil.
Are LLMs valuable and can they lead to productivity and human advancement? Sure, but are they worth potentially tanking the global economy for? No way.
Source: youtube · Posted: 2026-01-28T10:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugydd7yBuzLVdFa3Hbt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzW8XmY-m5IV8OD1R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9jpQpHRc2VR1C6oZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgywL84DovjSSEgHrHN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxG6MdD0ZgMD07krop4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwTGH_bhECYNyc15xR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxqhVMbngUPakqk6ZN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzEq_KLCGd1sonpkGR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwLO6Y0F0CCH2HVEG14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxyeJJ1eTbwL1e7TFB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
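The raw response above is a JSON array with one record per comment, each carrying the same four coded dimensions shown in the result table. A minimal sketch of how such a batch response could be parsed and validated before display — note the `ALLOWED` value sets are inferred only from the records visible on this page, not from the project's actual codebook:

```python
import json

# Allowed values per dimension, inferred from the responses shown on this
# page; the real codebook may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "mixed", "indifference"},
}

def validate_response(raw: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID.

    Raises ValueError if any record is missing a dimension or uses a
    value outside the inferred codebook.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        # Keep only the coded dimensions, dropping anything extra.
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID mirrors the "look up by comment ID" view: `validate_response(raw)["ytc_UgwTGH_bhECYNyc15xR4AaABAg"]` would return the same responsibility/reasoning/policy/emotion values rendered in the Coding Result table.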