Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugx8NaFqL…: e621 is doing better than other places now (ai has been banned for a long time c…
- ytc_Ugx6GgEZl…: "Predictive Policing Ai Tool" That sounds like something you would see on a cyb…
- ytc_Ugzd5MOJn…: A.I. won't turn evil on its own. But all it takes is one evil person to tell A.I…
- rdc_obvo83n: Tech CEOs spreading bullshit misinformation like "AI will take over the world" o…
- ytc_Ugy9FHe_q…: Don't even bother. "workers wont adapt fast enough" with AI . Your going to lose…
- ytr_UgyRmO609…: You are restricting the bounds of discoverable facts when you train AI - there a…
- ytc_Ugyq3TFCj…: “The scary thing is not AI that is conscious, but AI that acts as if it were ali…
- ytr_UgxqBt-1x…: @LL-vg2kd there is a huge difference between getting seen by people in real lif…
Comment

> Well done for being respectful to the lady.
> Now, send those shitty things back to the drawing board, not just Uber cars either, all of them. And what the heck was the Uber employee doing if the autonomous mode was doing over the posted limit, like they say to people when they're learning to drive, it's a limit not a target.
> It means, stop being so bloody lazy and drive a car yourselves. And I would honestly rather have a distracted Human doing something wrong that a machine because a Human can be accountable and are slowly learning not to use their phones while driving and so on to avoid those negatives to begin with.

Source: youtube · 2018-03-21T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyBwr53QSrFwsgZ6Fx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzFdYKkc3EPrgIw4QB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxyqDNXW822gUmjenp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwILlhd8deTJLVCDhV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzyAX9tJXO7m_YMoOZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwmTB7eP-BaIKS3dZl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyv__Up7HC5I2xbfa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdjWT7OaG50E2OsJR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw0hEIGcJirB1lT3Ut4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw92ZW-Q11YB0JOI9Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
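A raw batch response like the one above can be parsed and indexed by comment ID before use. The sketch below is a minimal, hypothetical example of doing that in Python; the allowed values per dimension are inferred only from the codes visible on this page, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the full codebook may contain further categories).
ALLOWED = {
    "responsibility": {"government", "company", "user", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def index_codes(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index rows by comment ID,
    dropping any row with a missing ID or an out-of-codebook value."""
    rows = json.loads(raw_response)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in ok for dim, ok in ALLOWED.items()):
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row response for illustration.
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
print(index_codes(raw))
```

Looking up a coded comment is then a plain dictionary access on the result, e.g. `index_codes(raw)["ytc_x"]["policy"]`.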