Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a specific comment by its ID, or browse the random samples below.
Random samples
- "Deception is a motive and intent of an evil man. AI is intelligence artificial b…" (ytc_UgwrX9zII…)
- "If A.I. worked perfectly and it didn’t hurt employees directly but say— 30% t…" (ytc_Ugw4uMkFt…)
- "This is kind of creepy. It reminds me of the episode of Stargate SG1 where an …" (rdc_jvpsa8q)
- "Sounds like the Borg. And at what point does the AI care about human needs or…" (ytc_UgyH6mcWX…)
- "Of all the low-down disgusting (and perfectly on-brand for an “progressive” AI c…" (ytc_Ugz01p86a…)
- "Any browser company can do this already, they don't need AI for that. Malware fo…" (rdc_nufufb5)
- "I really dont understand why pepole hate on ai its just a faster alternative to …" (ytc_Ugz7Wt-h5…)
- "The robot literally called the man in the hat not conscious, that let's us know …" (ytc_UgwQN8uXl…)
Comment
Human level machine intelligence must be possible (even without humans getting dumber). After all, human brains exist. Human brains are just biological machines (machines designed by natural selection rather than by engineers). And human brains have human level intelligence. Therefore, machines with human level intelligence are possible. However, current AI models are definitely not taking the right approach to achieving human level machine intelligence (not that I claim to have any idea what the right approach is).
Source: youtube
Posted: 2025-11-06T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwRhW6ydR3WoIlU3gl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgghtrugE12abngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugic-8CdfbK863gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgiiVzQEVXTO8XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugigkb4gWN8_I3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi_4VKjBann7HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugi9Gszi21MTEngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggnLXyVGHuX8XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
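The look-up-by-ID workflow described above can be sketched in a few lines of Python: the raw LLM response is a JSON array of per-comment codings, so indexing it by `id` gives constant-time access to any coding. This is a minimal sketch, assuming the response has exactly the shape shown above; the two entries below are copied from that array.

```python
import json

# A subset of the raw LLM response shown above: a JSON array where each
# element codes one comment across the four dimensions.
raw = '''[
  {"id":"ytc_Ugwm9I9NcRQElvQfqu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgigNAG8ggHJ7HgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the codings by comment ID so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgigNAG8ggHJ7HgCoAEC"]
print(coding["policy"])   # -> regulate
print(coding["emotion"])  # -> fear
```

Because comment IDs are unique within a response, a plain dict is enough; a real pipeline would merge these dicts across batches before rendering a page like this one.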