Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
- `ytc_UgxOlqfAe…`: "Harari is obsessed with words but not the truth. All his sentences sound good, b…"
- `ytc_UgyUuipNA…`: "15:45 Neil is deeply underestimating the dangers of A.I. and I think Hasan Piker…"
- `ytc_UgyHHVoeq…`: "I wonder if artificial intelligence really dangerous or not. I don't understand …"
- `ytc_UgzaV-lgT…`: "Technology is starting to become a problem rather than these great solutions t…"
- `rdc_m2famdq`: "The problem with a question like this is you need a giant context to process all…"
- `ytr_UgxlmBGpd…`: "They could make a difference between consumer grade chips and dangerous chips. …"
- `ytc_Ugz0ygvGN…`: "I will never accept what's been going on the past few years as real AI. Real AI …"
- `ytc_UgwPfqJkN…`: "Whelp, we’ve had a good run. Controlling/Managing AI will take foresight, ethics…"
Comment
It's not so strange. The LLMs are trained on human data, and the algorithms are made by humans. There is going to be human flaw in the way the AI works. It's well known now that AI can, and will make things up, because it doesn't comprehend what it is writing. It's a probability based algorithm that calculates what token (letter, number etc.) might come next, and it sometimes gets it wrong.
This lack of conscious comprehension is the reason why, for example, playing 20 questions, or asking the AI to write a long word backwards, doesn't really work.
Source: youtube | Video: AI Moral Status | Posted: 2025-10-15T21:4… | ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyjMsdtJIoZla6NHyR4AaABAg.APeeIMRd2a3APfSDc_HcYw","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzikmzxIdm90uN2cRR4AaABAg.APHgBXWezrXAPHnDgItQ0x","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgyWB-Ruh_4DH3GX0kh4AaABAg.AP6ZdKTFII8APi2hhilDLA","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxhSUxcFo0Aj1urVR14AaABAg.AN5PLLHbp3XAN5beFU_uVM","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzKxcpswuofvs1m2NN4AaABAg.AMwAOmzsAEFAN1MNBQTLKE","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzKxcpswuofvs1m2NN4AaABAg.AMwAOmzsAEFAN6IPisTG1x","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytr_UgxHQq8ba3aPh5EqZGh4AaABAg.AM_a4FFNYFRAOPootFb1D7","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzMKfHAZQl3G_qFRmh4AaABAg.AMVHgzv4Wp1AOJc0pPjEHx","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytr_UgyjzSR2l4dbLuEyREh4AaABAg.AMTD0Teflm7AN1MZ9KYCaa","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugzkk7miBvpCInRR9_d4AaABAg.AMSGE9HU5TpAMSGWMpXUd9","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
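The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated; note that the allowed value sets below are inferred from the values visible on this page, not from a documented codebook, and the `parse_codes` helper is hypothetical:

```python
import json

# Allowed values per dimension, inferred from the samples on this page
# (an assumption, not the project's authoritative codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "fear", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response into validated coding records."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}: {rec.get(dim)!r}")
    return records

# Example with a hypothetical comment ID.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["policy"])  # → industry_self
```

Validating against a closed value set catches the occasional off-schema label an LLM emits before it reaches the coded dataset.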