Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
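Under the hood, "look up by comment ID" is just an index from comment ID to the stored raw model output. A minimal sketch, assuming a plain in-memory dict (the actual tool presumably backs this with a database); the single entry below is copied from the raw response shown later on this page:

```python
# Hypothetical in-memory index: comment ID -> raw model output.
# The real tool likely queries persistent storage; this only illustrates the lookup.
raw_responses: dict[str, str] = {
    "ytc_UgwfE9uDt1DArBFTj6t4AaABAg": (
        '{"responsibility":"parents","reasoning":"deontological",'
        '"policy":"none","emotion":"outrage"}'
    ),
}

def inspect(comment_id: str) -> str:
    """Return the exact model output stored for a coded comment."""
    try:
        return raw_responses[comment_id]
    except KeyError:
        # Surface a clearer error than a bare missing-key failure.
        raise KeyError(f"no coded comment with id {comment_id!r}") from None

print(inspect("ytc_UgwfE9uDt1DArBFTj6t4AaABAg"))
```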
Random samples — click to inspect
- "It will eventually with robots but what a lot of people seem to not realize is t…" (ytr_Ugy0_K9Pz…)
- "The art generated by AI uses data from real artists. This man used a robot to im…" (ytc_Ugw9UJbxR…)
- "@JoseChidoReal The easiest way to expand upon the other statment here is also th…" (ytr_UgwhaqIOe…)
- "I mean, putting ethics and copyright issues aside (let's say, these are all magi…" (ytc_UgwI2nnLr…)
- "hi ! im a disabled artist ! a lot of my arguments tie into the cost ! currentl…" (ytc_UgxUo41-t…)
- "this is such a big issue even in my own country as a student, it frustrates me h…" (ytc_UgxPNlNCh…)
- "Maybe, just maybe, if an AI determines something based purely on raw statistics,…" (ytc_UgxULtBa9…)
- "@gondoravalon7540 Yes. If you take people's art and feed it into your algorithm …" (ytr_UgxsRvAHe…)
Comment
I’m speaking from personal experience as someone on the Asperger spectrum, with HPI and HSS traits.
This means my brain tends to involuntarily validate and process all incoming information — not selectively, but globally. When signals are ambiguous, poorly framed, or contradictory, this can quickly lead to cognitive overload.
Through our work, we’ve identified what we believe is the real issue with AI today. It’s not intelligence, autonomy, or intent. It’s semantic ambiguity, combined with human cognitive limits, a lack of clear frameworks, and a serious gap in user education.
AI systems don’t fail in isolation. Failure emerges at the interface: unclear language, undefined scope, implicit assumptions, and users who are not taught how to structure their intent. In that sense, both service providers and users share responsibility for semantic clarity.
Our research goes far deeper than what can be shared publicly. Without disclosing methods, I can say this: if these principles were applied correctly, the impact on cognition, accessibility, and system reliability would be substantial.
The moral question, in the end, is simple.
AI is a tool. Humans choose how to frame it, train it, and use it.
When harm occurs, it is rarely because the tool “decided” — but because it was instrumented without clarity, structure, or responsibility.
Source: youtube · 2026-01-02T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwfE9uDt1DArBFTj6t4AaABAg","responsibility":"parents","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwpUrrB2rMtXFRVPKt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwBTOv54iFU_N_-o9d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6UDSUpwC5wYMtFd94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyyD3cgEtBeZhorSNt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw67R6NHynpe1yeAYx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw0qsrnUFWzbfnWUZx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVN1tzvfV7c0iVOEd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzGY-HyvqkedqZc16R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxLZk8yLfLzeVB-E0B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
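The raw response is a JSON array of per-comment codes. A minimal sketch of how such an output might be parsed and validated before being stored, assuming the label sets visible in the responses on this page (the tool's actual coding schema may allow additional values):

```python
import json

# Allowed labels per dimension, inferred from the responses shown above.
# Assumption: the real schema may define more labels than appear here.
ALLOWED = {
    "responsibility": {"parents", "user", "company", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into an id-indexed dict,
    rejecting records with a missing id or an unknown label."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record without id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

sample = ('[{"id":"ytc_UgzGY-HyvqkedqZc16R4AaABAg","responsibility":"company",'
          '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codes = parse_codes(sample)
print(codes["ytc_UgzGY-HyvqkedqZc16R4AaABAg"]["policy"])  # regulate
```

Validating against a closed label set at ingestion time is what makes a later "Coded at" record trustworthy: a hallucinated label fails loudly here instead of silently entering the dataset.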