Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I have this problem, every time I try to be formal in any kind of debate or situ…
ytc_UgxmsYmUD…
I remember once reading what a teacher said. He said to use please and thank you…
ytc_Ugyb8ART8…
How are scientists like this man not criminal? And most importantly, what are hi…
ytr_UgxzXoh3Y…
Waymo STILL is not profitable. Aurora will NEVER be profitable. Its just a gimmi…
ytc_UgweTs9EW…
AI wqs developed by predators and you can't change my mind. This is so devastati…
ytc_UgxDMrlfp…
Oh my god this almost literally drove me insane. I'd somehow managed to trigger …
rdc_mxhv163
@LunaSpFan the technological advancement of society as a whole is removing jobs …
ytr_UgyBOcy96…
Just a thought, I don't have the internet speed to even run a game, how are we s…
ytc_Ugxc23VAW…
Comment
This is so fundamentally wrong. Computer scientists anthropomorphizing inert electronics and abstract algorithms in a way that actual psychologists and neurologists would find sophomoric is such a scourge on our society. It is impossible for a Turing machines to progress beyond the level of data! Contemporary LLMs are no closer to understanding anything, to having the most basic grasp of information let alone genuine knowledge, because they are just very complicated Turing machines, i.e. they reduce to algorithms, which are inert abstractions. LLMs are vastly computationally expensive linear algebra convolutions with billions of layers trained on the entire internet, trillions upon trillions of pieces of information, all in order to mimic the expected response-- after *human workers* have not only gone through and labeled the training data but also corrected the model millions of times over in order to make the responses come out right. That is so much human labor! All of the vast amount of training data, collected without permission, done by human creators, and then millions of hours of human labor on top of that labeling that data and then adjusting the model by hand. And the thing that is supposed to be intelligent here is just an algorithm that happens to include quantitatively complex linear algebra, executed by the same old electronic hardware? This is the mechanical Turk scam all over again, only this time the actual human making the thing work is not a little guy spatially hidden inside the device but teams of sweatshop workers in Kenya and such, as well as all of the unwitting human creators on the internet, displaced in time from the product for the end-user. Of course the model does not possess natural language understanding! It does not possess any understanding. It's just returning the expected response, which given the vast amount of data it is pulling from and all of the human training it has been given, is not surprising or impressive. 
Yes, all that these machines do is stick pre-existing pieces of text and images together. No that is not what human creators who are *influenced* by previous works do, at all-- human creators understand what they are doing, they know what a human face is, what an expression refers to in the world. Algorithms do not and by nature never can exist in the world, can never make contact with the world, and so cannot understand anything.
youtube
AI Responsibility
2025-10-11T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz0RZmpsz-cM0XvhSl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgxbImAhO9if0I9sLKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6njlPgqutOEdEmJ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwow2FEmFfRSSm58Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0wQMFhJ_sV9BGPgJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcKeKhKrJMyqY3PkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyYtX10zNF0QF15xvN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJ-PyHDxietFMcvdJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyuNKPKvWpCbklAQH14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxjUlX92iyt3c4690N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
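The raw model response above is a JSON array of per-comment codes along the four dimensions in the result table. A minimal sketch of how such a response could be parsed and validated before it is matched to a comment ID, assuming the dimension vocabularies are limited to the values visible on this page (the vocabulary sets and function name are illustrative, not the actual pipeline's):

```python
import json

# Illustrative vocabularies, inferred only from values visible on this page;
# the real coding scheme may allow additional categories.
VOCAB = {
    "responsibility": {"none", "developer", "company"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, rejecting any
    row whose value falls outside the expected vocabulary for a dimension."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in VOCAB}
        for dim, value in codes.items():
            if value not in VOCAB[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_Ugy6njlPgqutOEdEmJ54AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugy6njlPgqutOEdEmJ54AaABAg"]["emotion"])  # outrage
```

Validating against a fixed vocabulary at parse time is what lets a "Coded at" record like the one above be trusted: a model that hallucinates a label outside the scheme fails loudly instead of silently entering the dataset.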