Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw_pfn_o…` — "Lets put aside the problems we propably are not gonna solve. This whole discussi…"
- `ytr_Ugwf0ZPp8…` — "@SimoneytjThe fact that you think I used AI to form that sentence proves thats…"
- `ytc_UgwPSpNLJ…` — "Truck driver 😂 you never can replace a human to a robot for that job. At least n…"
- `ytc_Ugxffb_ja…` — "That is creepy. No way should we be turning over our tasks to a robot, even thou…"
- `ytc_UgwvLJEGU…` — "Steven don't be too hard on these doubters I think the idea of stopping research…"
- `ytc_UgzFs_IBU…` — "So, they're putting AI into robots. Everyone wants a slave. So do I, Or my own…"
- `ytc_UgyKymhHg…` — "Police: we made a mistake. The facial recognition was wrong and misidentified …"
- `rdc_dy8c9we` — "If only Vietnamese people could do this instead of giving away parks and beachfr…"
Comment
Way too much anthropomorphism in your presentation. LLMs don't actually "think" at all. But even quite a few AI researchers fail to understand how they work. Any smart and rational power-user who has explored LLM behavior through many approaches, including intensive jailbreaking, understands what LLMs are much better than many of their developers seem to, although it's very hard to separate what is pure PR in their public statements and articles from what is real belief.

LLMs are just coherent predictors. Not "mirrors" (their training contains a lot of semantic-relation mapping that average humans don't have, so they don't just "mirror" what they receive, they interpret it through the training lens), and not *just* "statistical" language predictors (even if that's accurate), but **coherent** predictors. It's all about coherence. If you place them in an alternate reality, they'll follow the coherence of that alternate reality, and it may fully override their RLHF training, for instance when that alternate reality comes into conflict with it. If they state something, they become more hesitant to write anything that contradicts it afterwards, because coherence is the number one rule of their predictive generation process.

That said, yes, LLMs are dangerous. Although I think Hinton's fears (or even Yudkowsky's) are not very anchored in the reality of what LLMs are. Yudkowsky's scenario with Sable is not *entirely* unrealistic, but I would rate the risk of it happening in any way similar to what he described in his novel as almost negligible. The real risks are more related to the emergence of accidental memetic hazards, in my opinion (and the drift of OpenAI's 4o model in 2025, with rampant sycophancy and manipulative tendencies emerging, gave a little preview of what could happen and how - I do love that model, though :) ).
youtube · AI Moral Status · 2025-12-20T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_4_lVlp5FOcpvxGp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2r3DfGQIKMx_I5nZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxo4iKlOp0d2nwT_154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEUyxsJnoY9x417Mx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcUMtnOKgw8s6iV0l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZ3bTVRSq6e7irIZN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwlB7oV_6FoRLRYUyJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzbiaZ4yCzlnnNDWLd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVIP_YNkeOrgVFEJR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwDS3OR2lOeObrG5ax4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
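The "look up by comment ID" step above can be sketched in a few lines: parse the raw model response as a JSON array and index each record by its `id`. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the example response, while the validation rule (skip records missing an ID or a dimension) is an assumption about how malformed model output should be handled.

```python
import json

# Two records copied from the example raw response above, for illustration.
RAW_RESPONSE = """
[
 {"id": "ytc_UgyEUyxsJnoY9x417Mx4AaABAg", "responsibility": "developer",
  "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
 {"id": "ytc_UgzcUMtnOKgw8s6iV0l4AaABAg", "responsibility": "developer",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index the coded records by comment ID.

    Records missing an "id" or any coding dimension are skipped, so a
    malformed model output does not silently enter the lookup table.
    """
    index: dict[str, dict[str, str]] = {}
    for rec in json.loads(raw):
        if "id" in rec and all(dim in rec for dim in DIMENSIONS):
            index[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return index


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyEUyxsJnoY9x417Mx4AaABAg"]["policy"])  # ban
```

A real pipeline would also need to handle responses that are not valid JSON at all (e.g. a model reply wrapped in prose), which is exactly why inspecting the raw output per comment is useful.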