Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
>I asked ChatGPT and it basically said that moving to 5 has sort of reset it …
rdc_n7l68vm
I don’t really think that ai is the problem it’s the people using them is what i…
ytc_UgxcmAsUJ…
sad, i haven’t seen workbeaver ai here. I’m such a huge fan of this tool and how…
ytc_UgzBpSV-4…
This chat gpt is like any People pleaser, has no opinion and no entity just affi…
ytc_Ugzanq-fg…
Yes, AI is now training AI with exponential improvements. Will Tesla abide by th…
ytc_Ugz_Exjir…
Alex yes, 100 Trillion low bar, my AI dream team told me my co could be worth 32…
ytc_UgzMI4x4N…
God made Man in His own image. Man, through his own intelligence, eliminates God…
ytc_UgwB3Dea1…
@thewannabecritic7490 I understand your point about skill and complexity, and I …
ytr_UgzTbg3H8…
Comment
A conversation that is totally off base. Try again, but next time don’t anthropomorphize the “behaviors” (‘actions’ seems a more appropriate word) of LLM machines/algorithms or the human-written programs mysteriously called “agents” for some reason. LLMs don’t “try” to do anything, nor do they have any intentions whatsoever, they simply produce next text tolkiens based on a matrix of coefficients produced during it’s “training”. Agents don’t do anything other than execute the code that make them up. LLM’s are powerless without agents, just machines good at predicting next lines of text and agents will stop if you install a line of code saying if the stop flag is high, quit, or whatever.
The hype is through the roof on this technology and when the bubble inevitably breaks, a lot of people are going to look stupid.
youtube
AI Governance
2026-03-02T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxvwo_02ZBF84zNxFl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_UgwXBgKbah3EcsSIicZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyMwXehTshcpUMgcGx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwxGXV8sPil8PGttYJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgydS01XTsdjHOPcj0V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwqwdyNjRghOD34QHp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy-J-u0uTLP2WDZ7k54AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy8yTqW3iLjS2ryQdF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"curiosity"},
{"id":"ytc_UgyaWavpM1N3Q5FfogJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwsooizqu0pGx6aDfR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
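The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions from the result table (responsibility, reasoning, policy, emotion). A minimal validation sketch for such a response is below; note the `CODEBOOK` sets contain only the category values visible on this page, so the real codebook is an assumption and may include more categories.

```python
import json

# Allowed values per coding dimension — inferred from the responses shown
# on this page only (assumption: the full codebook may be larger).
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "curiosity", "approval",
                "indifference", "resignation"},
}

def validate_coding(raw_json: str) -> list[str]:
    """Return a list of problems found in a raw LLM coding response.

    An empty list means every record parsed, had an 'id', and used
    only known category values in every dimension.
    """
    try:
        records = json.loads(raw_json)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]

    problems = []
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing 'id'")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(
                    f"record {i} ({rec.get('id', '?')}): bad {dim}={value!r}"
                )
    return problems
```

This kind of check is useful before ingesting a batch: LLMs occasionally emit malformed JSON or invent category labels, and rejecting those responses early keeps the coded dataset consistent.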