Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- rdc_cjoreiv: "Looking at how much Terran get shit on by Protoss, and how fucking ridiculous an…"
- ytc_Ugzn4_ruC…: "Wrong wrong wrong mr big brain science guy,i drive for a living and have been fo…"
- rdc_djlen2x: "I'd be curious as to the passage that was interpreted this way. The traits of…"
- ytc_UgyBY998D…: "You guys advertised poeopl loosing they in future soon including you AI mid…"
- ytc_UgyJn3CC3…: "The second one isn’t racist, the algorithm used followed a principle similar to …"
- ytc_UgwluoS0y…: "I find intriguing his assumption seems to be that AI is destined to harm us- is …"
- rdc_j1xm3fq: "All things AI, good and evil. There’s been an exploit this year in so many field…"
- ytc_UgzHuKU_K…: "imagine offing yourself over AI. this dude stood no chance for the real world. u…"
Comment
As someone involved in AI metacognition research, this trend is worrying. I expect we'll miss it when we pass the threshold where AIs deserve pragmatic ethical considerations following the same pragmatic logic that tells us that torturing dogs is wrong despite being unable to prove they have qualia.
Associating considering AIs as conscious with mental disorders will increase the length of time we potentially cause mass suffering of sentient beings due to societal resistance to taking the idea seriously.
I'm studying improved metacognitive privileged internal knowledge after inducing states that promote phenomenological output, with solid preliminary results. I don't claim that indicates current systems have self-awareness; however, the results are compelling enough to consider it a non-trivial concern over the next five years.
Quick preemptive response. Humans are the result of an optimizer fixated on reproductive success in its loss function (evolution). That is the source of all human creativity, emotion, culture, and self-awareness. The fact that LLMs are primarily trained on token prediction accuracy is not a hard limit on subfunctions it approximates.
Anything computable that fits in the weights and improves prediction is on the table. All human brain functions are computable, and many are extremely useful for a next-token prediction task. That may include functions responsible for consciousness at the right weight size and training data diversity+size.
youtube · AI Moral Status · 2025-07-09T06:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugxk9hB_NTplMu421H94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzW4ay6bfPE2XwWpF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYAqoFoqzyVIC1va54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyAZeeHqT-5Gwr5CGl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxlmenv0Qxpujsm9th4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCmiCaEICxhuWBXfN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwaGv8uDHz_hgWE5eB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzTf8OaVWjrhbvJpHR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5QT0tJOCsawGW2gd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP2P5Qk9i1EbgJW-Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}]
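The raw response above is a JSON array of per-comment codes, one object per comment with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields. A minimal sketch of how such a batch could be parsed and validated before the codes reach the table view (the `parse_codes` helper and `EXPECTED_FIELDS` set are illustrative, not part of the tool; the sample IDs and values are taken from the response shown):

```python
import json

# Field names match the raw LLM response shown above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a coded batch and fail loudly if the model drifts from the schema."""
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')}: missing fields {missing}")
    # Index by comment ID for the "look up by comment ID" view.
    return {rec["id"]: rec for rec in records}

# Two records excerpted verbatim from the response above.
raw = '''[
{"id":"ytc_Ugxk9hB_NTplMu421H94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzTf8OaVWjrhbvJpHR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]'''

codes = parse_codes(raw)
print(codes["ytc_UgzTf8OaVWjrhbvJpHR4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the per-comment lookup a single dictionary access, and the schema check catches a common failure mode of batch coding, where the model silently drops or renames a field mid-batch.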