Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This whole thing is horrible, but you'd better believe that there's a segment of the population, even beyond LE and the government, who think it's great, except that there's no oversight on the biggest sovereign citizens of our nation: law enforcement, who have used them to retaliate against citizens, used them to spy on their family and friends (ex-wives & girlfriends), and will be sure to use them for any future nefarious reason that may convenience them next.
Why not just make every citizen place a 360-degree camera in every room in every house or building? I'm sure that would solve a lot of crime too. Hell, while we're at it, why not have a GPS tracker on every single ankle in America, one that also tracks your vitals, and have that connected to LE and the NIH? Then both LE and the government can keep us all healthy, make sure we all get our 8 hours of sleep each night, ensure our TV programming doesn't excite us too much, make sure we eat right.
Well, hell... if we're going that far, we'd better let the government determine what kind of media we take in, and what kind of conversations we all have. That way, there will be a protective uncle (Sam) to make sure we don't stray from legality, or ever roll off the peak of "moral high-ground hill."
Well, shit! If we've already gone that far, I guess we should also start a real-life "future crimes bureau." That way an AI algorithm can figure out which of us might commit a future crime, and just incarcerate us beforehand, just to keep everyone else safe.
Platform: youtube
Posted: 2025-08-21T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgywErIZ7Zf9AWnkqT14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzvEGpdx6J8o4bR7sl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxwxAJy6GRukxBu_yZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxakgLdeI7jQCrExed4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyzIE-i5pMEydHOE1J4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxeDQHFq-JLQCRMuSt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwk89a1PfgKzrS4kvZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzX1Twsqd5mhQhrfTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwHDtBbwcg7KYRaLu54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxSzj8_hiXMdOfa4RJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
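Batch responses like the one above can be checked against the coding schema before they are stored. Below is a minimal sketch of such a validator; the `ALLOWED` mapping is inferred from the dimension values visible in this page and is illustrative only — the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown above
# (illustrative assumption; the actual codebook may differ).
ALLOWED = {
    "responsibility": {"government", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "ban", "industry_self", "regulate"},
    "emotion": {"resignation", "outrage", "indifference", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only rows with known codes."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coded row must reference a comment ID
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1 valid row
```

Rejecting rows with unknown codes (rather than coercing them) keeps coding errors visible, so a drifting model output fails loudly instead of silently polluting the dataset.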