Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The so-called AI experts always say "omg we're so doomed!! It's over!" Then the other half says "what are you talking about?" Has any one of these doom-posting "AI experts" ever precisely explained HOW it will complete some of these cognitive tasks that they claim it will? All of the top models are improving a few percentage points at a time because they're fundamentally incapable of using logic to solve things the same way we do, unless someone goes and redesigns the entire architecture this multi-trillion-dollar industry is built upon. The simplest way to explain it is that we as humans can be 100% sure that 1+1 = 2; two instances of the same thing together equalling 2 is absolute, even before the words existed for it. The AI will only ever be 99.9% sure of something even as simple as this, because it only sees reality through the lens of probabilistic language, and it gets much, much worse the more moving parts you have, just like humans. The difference is that our brains can adapt to nearly any complex problem given a reasonable dataset and always understand the full context in every logical calculation we make. To say that we're going to have a lack of intellectual problems to solve because of AI is suggesting AGI, which most experts say is incredibly far-fetched even now. And all I can say about someone as accomplished as this spreading this is that he has some other motive than informing the masses.
youtube · Cross-Cultural · 2025-09-28T05:4… · ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugw-83PPa5NbkcQp5SZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxm2uBh9Qz8wQTBxPN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxvszlO-Zfxyz1JQgV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzbrELiswZLzleNTP54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxVnp5o1WGhiWuhjbR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwhbhJLiB1p8UrXnOt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyI22sPNiT4GwdfRzd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzFG0C8Ky0nEhVKRNJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwOaSHVpxKO_t7g__Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyNA-H1oj6SHn3II0Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
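The raw response is a JSON array with one record per comment id. A minimal sketch of parsing such a batch and looking up the codes for the comment shown above (the id and label values are copied from the record in the raw response; the one-element array here is an illustrative subset, not the tool's actual parsing code):

```python
import json

# Illustrative subset of the raw LLM batch response: a JSON array of coded
# records, one per comment id, with the four coding dimensions as fields.
raw_response = (
    '[{"id":"ytc_UgwOaSHVpxKO_t7g__Z4AaABAg",'
    '"responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"}]'
)

records = json.loads(raw_response)

# Index records by comment id so one comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}

codes = by_id["ytc_UgwOaSHVpxKO_t7g__Z4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → developer consequentialist unclear mixed
```

The same lookup reproduces the Coding Result table for any comment in the batch, and a missing id (a record the model dropped from the array) surfaces immediately as a `KeyError`.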