Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
That's the grifter talk that lines Altman and his Silicon Valley Broligarch Venture Capitalist buddy's pockets. There is no reasoning in LLM based model - it is at best fancy pattern recognition. And the funniest thing above all else: there really is no learning despite all the deep learning and similar phrases thrown around (mostly by people who know even less than me what these terms mean) during end user interactions with a an LLM based tool. Good outcome - bad outcome - it's rolling the dice each time a similar interaction occurs again. This is one of quite a number of underlying issues which will prevent all this (fan fiction nonsense level) general AI stuff ever becoming a thing unless it is solved. And it cannot be solved with the current LLM Architecture, nor are there any ideas (let alone roadmaps) how to advance that architecture.
youtube · AI Governance · 2025-07-02T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymLUR6K6yxnzNTDot4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuP6Z6oa7bEyzFOsh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz41LuBBOJGy48HzFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk-LVx5mXRaBw4Rm54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxnyTbpP2Z6NTcmCpJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzOnekq7fJSOkfZZcx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzeIOVpkw7pw0Wzs1Z4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxhrAetCSmO3lj9Ynp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpSDYOomeD9eVqF-h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzBoHEBUAyc5ig2IpN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
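The raw response above can be parsed and sanity-checked before the per-comment values are stored. The sketch below is a minimal, hypothetical validator: the allowed values for each dimension are inferred from the samples shown on this page, so the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# shown above. This is an assumption, not the project's full codebook.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}


def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and keep only records whose values
    all fall inside the inferred codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(rec)
    return valid


# Usage with hypothetical IDs: the second record uses an out-of-codebook
# responsibility value and is dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"company","reasoning":"deontological",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"nobody","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
kept = parse_batch(raw)
print([rec["id"] for rec in kept])  # ['ytc_example1']
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; rejected records could instead be logged and re-queued for recoding.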