Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Here's the thing a lot of people still aren't really understanding. What is curr…
ytc_UgyyJUGjc…
Another insightful episode that highlights how AI can be harnessed for social go…
ytc_UgyW_vDD0…
@DrVinnieBoombatzDO
"...would anyone put it in control of to warrant that fear…
ytr_UgxvIl8M_…
I wonder if we'll get to a point where we won't know whether an AI is concious a…
ytc_Ugwnyk3IP…
I wouldn't want any1 to collect data on my kid, if i would have one. Neither i w…
ytc_Ugw0Xsgud…
I refuse to use ai, and if my stance has any impact on ai dying, I'm happy…
ytc_UgxdGiP4q…
Go watch the latest AI Explained video, it's not as open source as you think…
rdc_m94ske0
Another piece of media that speaks about facial recognition that is set in San F…
rdc_enjwq2d
Comment
@genericbeansmile756 That's missing the point entirely, I say.
We don't talk about mathematics the way we talk to ourselves or to AI chatbots. LLMs are not *for* math, they're for chatting. Must every interlocutor be knowledgeable about calculus to make a good point? No, of course not.
You point at the AI's difficulty in counting pieces it can't see because what they are given is tokens and not the full picture. The problem is that language and meaning transcend both words and symbols, because at the core of the training data is the human desire for connection and collaboration. We don't talk to each other using math, we communicate with language.
You're essentially suggesting that LLMs can't have depth because they also have flaws, and you need only look into the mirror to prove yourself wrong. They aren't meant to reach the goal you say they aren't reaching, and it's dishonest to use that to suggest they can't possibly communicate in ways beyond your understanding.
youtube
AI Moral Status
2025-10-30T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_Ugzbpe_VtRtLrfYT2q14AaABAg.AOuy9JwWL3RAOv5knTHD8C","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugwqt_SKxgEL1MKMCNp4AaABAg.AOuxgIin46BAOv8HBG6_Zg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugwqt_SKxgEL1MKMCNp4AaABAg.AOuxgIin46BAOvHzy5g3zm","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzzOP2k_R0rKvnnVjd4AaABAg.AOuxL0syLP6AOv0QadHGHk","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgzzOP2k_R0rKvnnVjd4AaABAg.AOuxL0syLP6AOv0avKinbb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxFRZ3ULDDHYaVFVa54AaABAg.AOuxCSbbTG0AOv1s-1k8Z1","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxFRZ3ULDDHYaVFVa54AaABAg.AOuxCSbbTG0AOv3EWevr6u","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxFRZ3ULDDHYaVFVa54AaABAg.AOuxCSbbTG0AOv5ddF-9Bu","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytr_UgxFjEChoGdjnwOcNMF4AaABAg.AOux-GPImJjAOux5Z0Q1L-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgxFA6eVPzli1Fino1V4AaABAg.AOuwz5O1wDnAOvEWAX3-NW","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
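The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed, validated, and indexed for lookup by comment ID. The `DIMENSIONS` sets below contain only the values visible in this sample; the real codebook likely defines more, and the helper name `parse_codes` is hypothetical.

```python
import json

# Example raw model output, in the same shape as the response above
# (IDs shortened here for illustration).
raw = """
[
  {"id": "ytr_abc", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytr_def", "responsibility": "company", "reasoning": "virtue",
   "policy": "none", "emotion": "outrage"}
]
"""

# Allowed values per dimension. These are only the codes observed in the
# sample response; the full codebook presumably has additional values.
DIMENSIONS = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "mixed", "resignation", "outrage", "fear", "approval"},
}

def parse_codes(text: str) -> dict:
    """Parse a raw coding response and index the records by comment ID,
    rejecting any record with a value outside the known codebook."""
    records = json.loads(text)
    index = {}
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        index[rec["id"]] = rec
    return index

codes = parse_codes(raw)
print(codes["ytr_abc"]["emotion"])  # indifference
```

Indexing by ID matches the "Look up by comment ID" workflow above: once parsed, any coded comment can be retrieved in constant time, and malformed or out-of-codebook responses fail loudly at ingest rather than silently corrupting the coded dataset.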