Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Not a single mention of the Russian government persecuting people who appeared o…
ytc_UgxSNRfwO…
If you think that's scary, you should read about the concept of Mindfire, introd…
ytc_UgyrqpU81…
@Raimoon-qv4nh dummy there was AI in 2023
18 oct. when this was posted exactly…
ytr_Ugx1MSXjH…
What about from the other side, with AI being used to generate attack code like …
ytr_UgwG61aeI…
Now every " influencer" suddenly is a llm expert 🤦 before it was what.. money? A…
ytc_Ugyoj8KkD…
Lol. AI killing people. That's good. Maybe finally humans can come together ag…
ytc_UgxPtR93I…
I’m an artist and I’m not bothered by the Ai at all… I don’t think AI can really…
ytc_UgzkFyeE3…
Exactly. His actions has already said profit is king when OpenAI became ClosedAI…
rdc_jd78mbc
Comment
Reasoning models do NOT make their inner workings external to us, rather they create an extra layer of "inner working" that is visible to us and typically increases the AI's performance, but the basic layer of inner workings is still just a jumble of anywhere up to trillions of trained parameters.
Edit: after I made this comment, I was made aware of a paper that was published about "Reasoning Models." These reasoning models are not language models at all and perform the same test as LLMs, but are incapable of language. I think it is obvious that I'm talking about Reasoning Language Models here.
youtube
AI Moral Status
2025-10-30T20:0…
♥ 413
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxdWB2GvyUuqIVlCi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgybtjBUk39J3illv054AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6W6lYH1D8Uj9Bwxl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLmQDK4VS0RkkLAUd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw83iGH3FmGlHOpS314AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw5gyINpG8jmJV9s6V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzelWm4EbPVk114lMd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyk7e-1BrjucVChMBR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBdApmyz7dTqviZ154AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz590g8tnUELebYGlN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
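The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID, assuming only the schema shown above (the helper names here are hypothetical, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, schema as shown
# above. Only the first two rows are reproduced here for brevity.
raw = """
[
  {"id": "ytc_UgxdWB2GvyUuqIVlCi54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgybtjBUk39J3illv054AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment ID (hypothetical helper)."""
    return codings[comment_id]

print(lookup("ytc_UgxdWB2GvyUuqIVlCi54AaABAg")["emotion"])  # fear
```

This mirrors the "Look up by comment ID" affordance of the page: the array is flattened into an ID-keyed dictionary so any coded comment can be inspected directly.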