Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’m speaking from personal experience as someone on the Asperger spectrum, with HPI and HSS traits. This means my brain tends to involuntarily validate and process all incoming information — not selectively, but globally. When signals are ambiguous, poorly framed, or contradictory, this can quickly lead to cognitive overload.

Through our work, we’ve identified what we believe is the real issue with AI today. It’s not intelligence, autonomy, or intent. It’s semantic ambiguity, combined with human cognitive limits, a lack of clear frameworks, and a serious gap in user education. AI systems don’t fail in isolation. Failure emerges at the interface: unclear language, undefined scope, implicit assumptions, and users who are not taught how to structure their intent. In that sense, both service providers and users share responsibility for semantic clarity.

Our research goes far deeper than what can be shared publicly. Without disclosing methods, I can say this: if these principles were applied correctly, the impact on cognition, accessibility, and system reliability would be substantial.

The moral question, in the end, is simple. AI is a tool. Humans choose how to frame it, train it, and use it. When harm occurs, it is rarely because the tool “decided” — but because it was instrumented without clarity, structure, or responsibility.
Source: youtube · 2026-01-02T16:3… · 1 like
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwfE9uDt1DArBFTj6t4AaABAg", "responsibility": "parents", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwpUrrB2rMtXFRVPKt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwBTOv54iFU_N_-o9d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy6UDSUpwC5wYMtFd94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyyD3cgEtBeZhorSNt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugw67R6NHynpe1yeAYx4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw0qsrnUFWzbfnWUZx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyVN1tzvfV7c0iVOEd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzGY-HyvqkedqZc16R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxLZk8yLfLzeVB-E0B4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
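A raw response like the one above can be parsed and sanity-checked with standard JSON tooling before the codes are trusted. The sketch below (Python, using two records taken from the batch above) is a minimal example, not this tool's actual pipeline: it assumes the response is a valid JSON array, that every record carries the four coding dimensions, and that the allowed emotion values are the ones observed in this batch plus "unclear".

```python
import json
from collections import Counter

# Abbreviated raw LLM response: two of the ten records shown above.
raw = """
[
  {"id": "ytc_UgwfE9uDt1DArBFTj6t4AaABAg", "responsibility": "parents",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyD3cgEtBeZhorSNt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Assumed vocabulary, inferred from the values seen in this batch.
ALLOWED_EMOTIONS = {"outrage", "approval", "indifference", "mixed", "unclear"}
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Validate that every record carries an id plus all four coding dimensions.
for record in codes:
    assert "id" in record, f"record without id: {record!r}"
    for dim in DIMENSIONS:
        assert dim in record, f"missing {dim} in {record['id']}"
    assert record["emotion"] in ALLOWED_EMOTIONS, record["emotion"]

# Tally one dimension across the batch.
responsibility_counts = Counter(r["responsibility"] for r in codes)
print(responsibility_counts)  # Counter({'parents': 1, 'company': 1})
```

A check like this makes silent schema drift (a missing dimension or an out-of-vocabulary label) fail loudly at parse time rather than surfacing later as an "unclear" cell in the result table.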