Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
this was very hard to listen to... There seems to be a lot of lack of information regarding the very nature of what is being talked about: mathematical algorithms. We are effectively trying to simulate a physical phenomena, human intelligence. Specifically achieving reasoning through a model that "imitates" speech. Let along the reductionist approach in all this to reduce imitating speech to just "statistically assigning a value to the next word" concept, the whole questioning that follows on if AI feels, is there experience, what are nodes bla bla bla these questions are not even at all aligned with what is being built! Reasoning, intelligence, feelings, reactions etc. these are a problem of physics essentially and modeling them is not a task of coming up with a mathematical algorithm! Especially when that algorithm is not at all designed to reciprocate how real human intelligence works! These models are not built withe questions focusing on the processes that governs reasoning, speech or feelings. How the hell these models can be then at all used to infer anything regarding these?? There is a serious scientific methodolgy problem here. You built something complex without even considering human nature. They are statistical word guessing machines! How the hell can one even begin to associate human traits to these mathematical models??
Anyone who knows simulations of physics, especially fluids, would understand this analogy that I make now, to clarify what people who talk all the time on AI is not at all grasping:
Today's AI models are simply trying to guess the flow field in a given fluid simulation just based on the previous velocity distribution, without even having any governing equations. They literally take the velocity data for each point and according to whatever weight is assigned for a given velocity distribution from their training data, they just spit out the velocity for the next time step on that point. They cannot have governing equations like energy, momentum etc. Because in the case of modeling speech, who the hell fucking knows what are the governing equations for speech are??!! These are just statistical algorithms with no chance to reveal anything fundamental regarding the phenomena they try to imitate. So asking such questions on experience, understanding etc. are completely irrelevant?! And having a 1+ hour podcast on it is the best way to prove how no one included in this have any idea on it.
Source: youtube · Video: AI Moral Status · Posted: 2026-01-07T10:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"}
]
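The raw response above is a JSON array with one object per comment: an `id` plus the four coded dimensions from the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and validating such a batch, assuming the allowed category values are roughly those visible in the sample output (the real codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample responses above.
# This is an assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Raises ValueError if a row is missing a dimension or uses a value
    outside the (assumed) codebook, so bad model output fails loudly.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID makes it straightforward to join a batch back to the source comments, which is what the "Coded at" lookup above implies.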