Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "The first level customer support - aka chat and AI assistants - is pure shitt. …" (ytc_UgyWsyFKA…)
- "Creativity isn't a zero-sum game—it doesn't become obsolete just because new voi…" (ytc_UgxIAuPkz…)
- "chatgpt arbitrarely assume that is talking to a human being. so to prove it has …" (ytc_UgyYD6qU7…)
- "yes, that works at a small scale, but when it gets serious you need humans, beca…" (ytc_Ugx9box1G…)
- "Not an AI artist by any means, I draw with a pencil / But your analogy fails beca…" (ytc_UgyDmf_Ti…)
- "Autopilot IS NOT FSD (full self driving). Full self driving is still in limited …" (ytc_UgyKxnz4T…)
- "That cop believes the AI over both state ID!! He certainly was correct when he s…" (ytc_Ugwg0CEWc…)
- "These moronic corporations are about to lose all of their customers. Customers a…" (ytc_Ugxt2LD-A…)
Comment
Seems we need a law to have TWO different (non-drunk) cops, the 2 more ranking/days served as police if rank is identical, cops sign on a "this photo of the crime matches the suspect we see right now in at least X criterias other than skin tone, and list which ones", before they can actually trust the matches made by an AI that is so error-prone and proceed with a raid.
1 minute paperwork, follow the suspect from a distance while it's done, drive off if not matching.... but NO, they have a racism quota to fill?
youtube · AI Harm Incident · 2023-08-14T00:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugy6ba_mY-MUxtJCz8N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwplGDy1OEujTgVn6N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz7DMMZwD6GEzylG9V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwNxixz2GJChTZlaUd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwNDGIcWR1c528PIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxhaGqTb79ZEl3jAwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwJSUpIyTSwME6TJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwtgpUjzU0Hd79D0oN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwTHTw8Kz1vJlxmhqt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVI16I62JUmc0z4nF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
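
A raw response like the one above can be validated before it is written back to the coding table. The sketch below parses the JSON array into a lookup keyed by comment ID and rejects rows whose dimension values fall outside the codebook. The allowed-value sets are inferred only from the values visible in this sample; the actual codebook may define more (an assumption), so treat `ALLOWED` as illustrative.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above.
# Assumption: the real codebook may contain additional values not shown here.
ALLOWED = {
    "responsibility": {"developer", "government", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a dict
    keyed by comment ID, rejecting rows with out-of-codebook values."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected value {row.get(dim)!r} for {dim!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

# Usage with a hypothetical one-row response in the same shape as above:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(parse_codings(raw)["ytc_example"]["policy"])  # → regulate
```

Keying by comment ID is what makes the "Look up by comment ID" view above cheap: one parse, then constant-time lookups.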