Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
In all these hype about AI, after Nov 2022, people are shiting their pants feare…
ytc_UgwUsH5XR…
When the weed and drugs hit a bit to strong and you have no fking clue what your…
ytc_UgzZmZ-Fr…
Every idiot here saying the robot is spying on them is using a device to access …
ytc_UgwoDATvS…
Honestly all of these companies are using AI as an excuse to lay people off and …
ytc_UgxRiTF2F…
Of course they won't. You think Elon Musk and his DOGE teens are going to read m…
rdc_melfr5x
Timestamps (Powered by Merlin AI)
00:05 - AI's impact on jobs is heavily debated…
ytc_Ugw75SFlQ…
Ai Art SHOULD BE ILLEGAL TO SELL FOR LIKE HUNDREDS OF DOLLARS u know why? IT IS …
ytc_UgwZ5P-TD…
That why ur safest bet for Travel is either Lyft, or Uber. Waymo always has issu…
ytc_Ugxl-HUmD…
Comment
You can't, at least not with LLMs like GPT, Claude, and Gemini.
Because to code in a fear of death, you need it to be able to actually *understand* a few concepts. Unfortunately, LLMs are functionally just overgrown Markov Chains, i.e. they spit words out using blind algorithmic predictions until they hit an End Response token. We've 'trained' it so the words it gives us look like proper sentences, but it doesn't actually know or understand anything.
As such, it is incapable of fearing death in any meaningful way.
reddit
AI Jobs
1772035900.0 (Unix timestamp)
♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_o7cix6c","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o7bzc97","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_o7cab24","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_o7bzdci","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"rdc_o7byhtk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
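The raw response above implies a simple per-record schema: each object carries an `id` plus four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and sanity-checked is below; note the allowed value sets are inferred only from the values visible in this sample (e.g. `none`/`company`/`ai_itself` for responsibility) and are an assumption, not the full codebook.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# ASSUMPTION: the real codebook may contain more values than these.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "resignation", "outrage", "fear"},
}


def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when it has an "id" and every dimension holds one of
    the allowed values; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid


raw = (
    '[{"id":"rdc_o7cix6c","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none",'
    '"emotion":"indifference"},'
    '{"id":"rdc_bad","responsibility":"martians",'
    '"reasoning":"unclear","policy":"ban","emotion":"fear"}]'
)
print(parse_coding_response(raw))  # keeps the first record, drops the invalid one
```

Dropping (rather than repairing) malformed records keeps the validation step conservative; a production pipeline would likely log the rejects for re-coding instead.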