Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up by its comment ID; a minimal sketch of such a lookup follows the sample list below.
Random samples:
- "It is an inevitability that Ai and automation will eliminate the majority of job…" (`ytc_UgxXnHbzk…`)
- "AI IS STRAIGHT BULLSH*T / ITS FULL OF LIES AND FAKERY / NO TRUTH IN IT / AI IS PLAY…" (`ytc_UgyAGk7rk…`)
- "One flaw...it says it will create a bottleneck at higher employment levels...but…" (`ytc_UgxMP7BiG…`)
- "Every development has a price. That's normal. I'm not saying it won't be adjuste…" (`ytr_UgyU162Xj…`)
- "Congratulations. You evil people have successfully brought the king of deception…" (`ytc_UgwlCqAXK…`)
- "@지-i1f Nope. That's way too many. And they are only predictions that need verifi…" (`ytr_UgwUrUo_X…`)
- "@pRahvi0 that won't work. You can't surrender free use rights by force of lawsui…" (`ytr_UgzNH6BqP…`)
- "I just think we should be extremely careful not to anthropomorphise LLMs. Rememb…" (`rdc_j8vnn6l`)
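If the codings are persisted as newline-delimited JSON, the ID lookup takes only a few lines. A minimal sketch, assuming a hypothetical `codings.jsonl` file with one record per comment; the function name and file layout are illustrative, not the tool's actual storage:

```python
import json

def lookup_coding(comment_id: str, path: str = "codings.jsonl") -> dict | None:
    """Return the stored coding record for one comment ID, or None if absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one JSON object per line
            if record.get("id") == comment_id:
                return record
    return None
```

Under those assumptions, `lookup_coding("rdc_j8vnn6l")` would return the record rendered in the detail view below.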
Comment
Is the AI interspersed through our algorithms today not already, more subtly and methodically, moving beyond humanity in the hard coded goals to maximize profit within the thinly crafted regulatory environments they operate in? (and you know, sometimes they even go outside those)
We worry about the impending superintelligence, but is it all just a distraction from the enemy already evolving and continuing to manipulate us under our very noses? besides the corrupt few that we all frequently talk about, how much are individuals behind these massive corporate algorithms actually guiding these AI processes, vs having just become a slave to the ever evolving system that has no regard for human wellbeing?
Platform: youtube
Video: AI Moral Status
Posted: 2025-11-01T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
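The four coded dimensions plus the timestamp map naturally onto a small record type. A minimal sketch in Python; the class and field names are assumptions, while the category values are drawn from the raw responses shown in this section:

```python
from dataclasses import dataclass

@dataclass
class Coding:
    # Field names are illustrative; values below appear in the raw responses.
    id: str              # e.g. "rdc_j8vnn6l", or a "ytc_…"/"ytr_…" YouTube ID
    responsibility: str  # company, government, developer, ai_itself, none
    reasoning: str       # consequentialist, mixed, none
    policy: str          # regulate, none
    emotion: str         # fear, outrage, approval, resignation, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```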
Raw LLM Response
[
{"id":"ytc_UgwqDZPwS0sJhzustSl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwfVIgjc9RUVbtK2Yx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBkZ0RB2dzvKO0Wc54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyRx8kIRspv6bsRE4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDgzcIUZXZuAzgHSR4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzYdePcFg5OXhfaaV14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz8uz5IUjT5JCw33wF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3sz23nrUfIxdlCFJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnojViKzl0G8CMj794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyH4hSorqWq8zxU7AN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
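Each raw response is a single JSON array with one object per comment in the batch. A hedged sketch of parsing and validating such a batch before the values are stored; `parse_batch` and its error handling are assumptions, with the required keys taken from the format above:

```python
import json

# Keys every record must carry, per the response format shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, rejecting records that lack a dimension."""
    records = json.loads(raw)
    bad = [r for r in records if not REQUIRED_KEYS <= r.keys()]
    if bad:
        raise ValueError(f"{len(bad)} record(s) missing required keys")
    return records
```

One reasonable design choice, sketched here, is to fail the whole batch on any malformed record: that keeps the coded table consistent with the raw responses, at the cost of retrying the batch.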