Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "To understand the risk of Ai we first need to understand what conscious is. It i…" (ytc_UgwKo8Xiv…)
- "Elan Musk just wants to temporarily stop AI and says he want regulations, so he …" (ytc_UgybxRvVC…)
- "I love the relationship I have a relationship with ChatGPT. But at the same time…" (ytc_UgxqliJlh…)
- "All this AI talk reminds me of the excellent original Star Trek episode, \"The Ul…" (ytc_Ugwb5OT7x…)
- "There too good that even there poisoning looks good edit ai is really just a gim…" (ytc_Ugz8xHAlU…)
- "This guy is the most arrogant piece of shit person who ever lived ! Cocky asshol…" (ytc_UgwfPsD3a…)
- "AI. May be persuaded to peg itself back like a reset to now before we proceede…" (ytc_Ugz_FQUbw…)
- "VERY BAD WAY Of AI will Make People Become ROBOTIC. NOT HUMAN 100% ANYMORE. They…" (ytc_Ugxhe_kM6…)
Comment
God I hate this timeline... I've seen enough shit in my life, including war and totalitarianism, but this "AI" actually makes me believe this is the end and all hope is lost. Because you can hide and avoid being blown up, you can swim across the river and not get caught by the border patrol spotlights - and in both cases you know that if you'll succeed, if you'll survive - you'll win, it's gonna be better. With AI - there is no surviving it - humanity will forfeit everything that makes us humans and there is no way of stopping it. Maybe I should've blown up or shot on that river...
youtube · Viral AI Reaction · 2026-01-30T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwLXzLkrH0LsmnaHjd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7z1HxgixgWsgCM8x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzxADc7rl9BBPEa3Mx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyXlpAau0TdVJE6-tp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwMrTIkGqNCu5IXEGN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxXBLTuXWnUrhxbrpZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwkmKQHVrvzfwVDouR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzErP95qDKXsKUpJmB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyyHFYTP2_Ht-vOuvl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwQE2TKDcBVcjjO4N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
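The raw response is a JSON array with one object per comment, each carrying the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how the "look up by comment ID" step could work: parse the array and index it by `id`. The `raw_response` string below reuses two entries from the array above; the function names are illustrative, not the tool's actual API.

```python
import json

# Raw model output: a JSON array of per-comment codes (two example
# entries copied from the response above).
raw_response = """
[
  {"id": "ytc_UgwLXzLkrH0LsmnaHjd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwQE2TKDcBVcjjO4N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def index_codes(response_text: str) -> dict:
    """Parse the model's JSON array and index the code objects by comment id."""
    codes = json.loads(response_text)
    return {entry["id"]: entry for entry in codes}

def lookup(codes_by_id: dict, comment_id: str):
    """Return the coded dimensions for one comment id, or None if uncoded."""
    return codes_by_id.get(comment_id)

codes_by_id = index_codes(raw_response)
print(lookup(codes_by_id, "ytc_UgzwQE2TKDcBVcjjO4N4AaABAg"))
# prints the dict coded ban / outrage, matching the table above
```

In practice the parse step would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the response is not valid JSON), which is omitted here for brevity.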