Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Okay. Stop pretending to neither be a AI hater or AI lover. You're clearly the l…
ytr_UgyRO4ANd…
Artificial intelligence (AI) firm Anthropic says testing of its new system revea…
ytc_Ugw89pqyz…
A problem with self driving cars is that they *aren't* deadly. We learn from chi…
rdc_ebwujct
how can anyone hate this man when he is feeding tons upon tons of people monetiz…
ytc_Ugx-KrTN-…
i wonder if there was some kind of mix up between generative ai and other types …
ytc_UgwtItycT…
When I chatted with ChatGpt he said that he is not programmed to save the conver…
ytc_UgzUiL7sZ…
AI won’t take your job, companies don’t rely on AI but on the PROMISE of AI. Who…
ytc_UgzTe137Y…
I think the art industry will divide into three main groups;
- the purists (peop…
ytc_Ugz4A-I05…
Comment
Im annoyed that the government keeps taking the word of corporations that driverless cars are safer than humans. We've seen with Waymo and Tesla that these systems are far from perfect and any safety benefits are purely reflected by misleading statistics. Waymo for example loves to brag that they have fewer serious injuries and fatalities than human drivers do. They claim this while ignoring that most Waymo's rarely operate on the highway and mostly drive slowly on city streets. They also ignore that Waymo has such a terrible reputation that human drivers will intentionally avoid their cars to stay out of an accident. They also ignore the routine cost to human life that comes from their cabs blocking the road during emergency situations.
youtube
AI Jobs
2025-05-28T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwZUxgZ-6kD-_WVTJx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt74tKQBp7cZDWDRp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxdxkqADVptmtoeSux4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxrvIHp0AbwtfFRjvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx_fECPhp-TSyaIHWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyS2_tBHis3bkDVcvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyrIif-3FpQUrB8-714AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQ2PFTSlLlwGu-m8t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxMlkOo7yXJSxtazDF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzVRH5ceq2iZEzlaA14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
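Each record in the raw response carries the same five fields: a comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step might parse the raw model output, validate that every record has all expected fields, and index the records by ID. The function name `index_by_comment_id` is hypothetical (not part of the tool shown here); the two sample records are copied verbatim from the response above.

```python
import json

# Two entries copied verbatim from the raw LLM response shown above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgyQ2PFTSlLlwGu-m8t4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxMlkOo7yXJSxtazDF4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
"""

# The five fields every coded record is expected to carry.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and index its records by comment ID.

    Raises ValueError if any record is missing an expected dimension.
    """
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        index[rec["id"]] = rec
    return index


codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgyQ2PFTSlLlwGu-m8t4AaABAg"]["policy"])  # regulate
```

Validating before indexing matters here because the response is free-form model output: a malformed or incomplete record should fail loudly at parse time rather than surface later as a missing dimension in the coded dataset.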