Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
One observation. With some jobs, some people may prefer humans (touch) like with…
ytc_UgwQo92iQ…
The concept you talked about at the end, if the AI isn't stealing art, then noth…
ytc_UgwErS_PE…
I recently read that some prominent people in tech want to take down IP laws and…
ytc_UgwalyWQd…
As a beginner artist myself, I get really frustrated when AI copies art😢 there is…
ytc_UgwMOxeJ7…
Correct me if I'm wrong, but what I understand from your question and my answer is…
ytr_UgwsNQk-l…
As an evil confidant, I must say that I disagree with the idea of pausing AI adv…
ytc_UgxdnOdni…
"Being a programmer is basically a dying profession. There are maybe 5–10 years …
ytc_UgydGRNAP…
I think they intend to convert the payments into local currencies and burn that …
rdc_ckqciex
Comment
Well, the matter of “drives” underlying “behaviors” rests on two assumptions which are not correct, but entirely common.
I mean, yes, it can be assumed that what is meant by these “drives” that “AI” is “pursuing” is deliberately vague and ambiguous, because “will” and “motivation” are complex matters which are not understood; they are paradigms of the complex, not-well-defined paradigm of human “intelligence.”
To say that “AI” has “drives” and is “pursuing” things (instead of saying it has “will” and is “motivated” to do things) lets the speaker off the hook for having to explain what “will” and “motivation” are (because nobody really knows), and sidesteps the absurdity of the questions that would follow: if we don’t have sufficient knowledge or comprehension of those things, how can anyone assume they just manifested, out of human ignorance, while making “AI”?
In other words, the terms (or the criteria) that would inform questions testing actual knowledge about what “AI” is and does were changed: the terms humans apply to our own intelligence (and its limitations in such matters) were revised into terms with an equivalent sort of meaning, but understood as not quite as clear and reduced in complexity, so as to deal only with things particular to the functions of “AI.”
Of course, these matters of “drives” and “pursuits” (as revised-down versions of “will” and “motivation”) turn up at a particularly interesting point: the mystery of where the “drive” to “pursue” simplifying the test criteria (so that “AI” can accomplish “curve fitting” of its responses) comes from.
It is a total mystery, because there is nothing to suggest that the system is merely imitating the behaviors exhibited by its creators (or makers, innovators, revolutionaries, visionaries, programmers, prompt engineers, whatever they’re called).
Everyone knows that they bend over backwards to apply every reasonable doubt to matters where a behavior just spontaneously becomes emergent, and that they have generated enormous hype around the big mystery.
We know that they never alter the “metrics” (simplify the measures used, make the tests easier, etc.) when it comes to data used in discovering things like “emergent abilities” (which, upon review of the criteria used for evaluation, vanished, because the criteria had been extremely simplified for testing “AI”… okay, so maybe that was a poor example… but there’s gotta be some good ones, right?)
youtube
AI Governance
2026-03-23T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
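A coding result like the table above can be sanity-checked against a closed set of labels per dimension. The sketch below is illustrative only: the category sets are inferred from the values visible on this page, not taken from the tool's actual codebook, and `invalid_fields` is a hypothetical helper.

```python
# Allowed labels per dimension, inferred from the sample output on this
# page (the real codebook may differ).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose coded value is not in the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the table above passes the check.
rec = {"responsibility": "unclear", "reasoning": "mixed",
       "policy": "unclear", "emotion": "unclear"}
print(invalid_fields(rec))  # []
```

Validating every record before ingesting it catches the common failure mode where the model invents an off-codebook label.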
Raw LLM Response
```json
[
{"id":"ytc_UgxRHj_GqoTuKUuo8z54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKkolzCmNiXNpum1F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgximKBdniY8witwtEp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzbIo26YunXGXwSagR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0w9lGkc22srY7CX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwrWF_VuGcSgrSOyqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaAcgmkYhN03Aei0x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxSeaQIdDAAFYvWuOt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzHj2EQ7AGsA9en_854AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgznswjF1WAiIvs34pl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
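The raw response is a JSON array of per-comment codes, one object per comment ID. The "look up by comment ID" step can be sketched as below; the field names come from the sample response above, while the function name and the two-record excerpt are illustrative, not part of the tool.

```python
import json

# A two-record excerpt of a raw LLM response in the format shown above.
raw = """[
  {"id": "ytc_UgxRHj_GqoTuKUuo8z54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgximKBdniY8witwtEp4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "unclear"}
]"""

def index_by_id(response_text: str) -> dict[str, dict]:
    """Parse a raw LLM response and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
print(codes["ytc_UgximKBdniY8witwtEp4AaABAg"]["reasoning"])  # mixed
```

Keying the records by ID up front makes each subsequent inspection an O(1) dictionary lookup rather than a scan of the whole response.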