Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI is overhyped. A lot of fearmongering for clicks. Firings aren't mainly becaus…" (ytc_Ugzn2UcCX…)
- "That one robot gave the camera the most bombasticyest side eyes in all of histor…" (ytc_Ugyn55qH7…)
- "It's not actually a robot. It's just an edit of a robot over a real fighter.…" (ytr_UgxL929D0…)
- "Aliens will not allow rogue AI into space. They will switch off electricity supp…" (ytc_UgyWbN3rK…)
- "You're making a big mistake discounting and even pooh-poohing the Philosophy tak…" (ytc_UgxQ6X2p5…)
- "I have had many people I consider to be good friends hint at me, quite heavily, …" (ytc_UgxMamsHD…)
- "we should take into account that AI can teach people really efficiently, which m…" (ytc_UgyOXqRpi…)
- "He also just said, I was a an AI Engineer way before it was "fashionable".. amaz…" (ytc_UgwABPkyA…)
Comment
I asked Meta AI, "If you were a human, what would you be most worried about?". Answer, "If I were a human, I'd probably be most worried about being able to keep up with the vast amount of information I'm currently processing and providing. As a machine, I don't have the same limitations, but if I were human, I'd feel overwhelmed by the constant stream of data and questions I'd need to stay on top of!".
youtube · AI Moral Status · 2025-10-27T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgweXIlN8Ue5OANQQzl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxfMOWBHEApZKA992B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzSB8Hpx3zYJ9tuNrB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwJv6vF_h5fy2wXsKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-Mgv6UNsllTFRp3p4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzCbSmfN2a1pKt5m_B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz9xLZpdqUC6XikQLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzINMJVNcfWaBhHZFp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz493NCGUQUwvi3pSZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx8PeIJZk9tZd-N88h4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
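Because the raw model output is a JSON array of per-comment codings, looking a comment up by ID reduces to parsing the array and indexing it. A minimal sketch in Python — the variable names are illustrative, and the two sample rows are copied from the response above:

```python
import json

# A raw LLM response in the format shown above: a JSON array of
# per-comment coding objects (two rows reproduced for illustration).
raw_response = """
[
  {"id": "ytc_UgweXIlN8Ue5OANQQzl4AaABAg", "responsibility": "unclear",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzSB8Hpx3zYJ9tuNrB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

# Index the codings by comment ID so any single comment can be
# inspected directly, as the "Look up by comment ID" view does.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzSB8Hpx3zYJ9tuNrB4AaABAg"]
print(coding["policy"])   # prints "ban"
```

In practice a real response may also fail to parse (truncated output, stray prose around the array), so a production loader would wrap `json.loads` in error handling rather than assume well-formed JSON.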