Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Here's a funny prank we can do.
Make someone study for 16 to 22 years just to re…
ytc_UgxX-hnog…
Yesterday i had a conversation with chatgpt about sid and nancy i dont care how …
ytc_Ugx1AKcX3…
@UniqueBreakfastTaco Have you seen what AI can do with creating art in a matter …
ytr_UgxNnka3s…
@darksideblues135 80% of Anthropic's revenue comes from their industry clients, …
ytr_UgwZgkNI2…
The one that traumatizes ai's
The one and mentally tortures ai's
…
ytc_UgwLYzsx1…
If you get the best AI in the world and teach it nothing but dogs? And I mean ev…
ytc_Ugy7UywXW…
Anyone who says he is the leader of AI or anything like that try reading/listeni…
ytc_UgyMB0aOz…
it also ignores some stats right here on reddit.
/r/MyBoyfriendIsAI - nearly 60…
rdc_oi0hk7o
Comment
Geoffrey Hinton, Stuart Russell, and many others making the YouTube and media rounds that “AGI is right around the corner” nonsense — all of these people have huge financial stakes in AI and they all want regulation to protect their financial stakes in AI. Once you see them for who they are nothing they say can be trusted. The current AI is nothing more than a word predicting LLM algorithm and is nothing more than a patronizing parrot with amnesia. Until there’s a Cognitive AI model and there is none you will never have an AGI. No different than fusion is right around the corner in the next 20 to 30 years for the next 20 or 30 years unending.
youtube
AI Moral Status
2026-03-01T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz18dD3F-IXAIaQNXl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyy57mnIKwazCExUsF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwiUnzkuZ3eQTYjD554AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwRVMqxExThq1ucMEt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzb-DinswhtiFyOhyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxAfFQThIM-qEgh7Gp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzorIyWr2qX9F-5bm94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwm0F8ULdxpKg51B394AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXYzPUZozLxG97TXR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxjjisZ4J7r4MYgdNl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
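The raw response above is a JSON array with one record per comment ID, each carrying the four coded dimensions shown in the table. A minimal sketch of parsing and sanity-checking such a response is below; the allowed value sets are inferred only from the values visible in this sample (the full codebook may permit others), and the two embedded records are copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
RAW = '''[
{"id":"ytc_Ugz18dD3F-IXAIaQNXl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxAfFQThIM-qEgh7Gp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

# Category values observed in this sample; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def validate(records):
    """Return (comment_id, problems) pairs for records that fail the schema."""
    bad = []
    for rec in records:
        # A record must carry the comment id plus all four dimensions.
        problems = [f"missing {k}" for k in ("id", *ALLOWED) if k not in rec]
        # Each present dimension must use a value from the observed set.
        problems += [
            f"{dim}={rec[dim]!r} not in codebook"
            for dim, values in ALLOWED.items()
            if dim in rec and rec[dim] not in values
        ]
        if problems:
            bad.append((rec.get("id"), problems))
    return bad

records = json.loads(RAW)
assert validate(records) == []  # both sample records conform to the schema
```

A check like this makes it easy to catch an LLM response that drifts from the codebook (an unknown label or a dropped field) before it is written into the coded results.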