Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_UgxaxETyI… : "If they argue that AI learn like a human does, that means they create like human…"
- ytc_UgyiMyD7E… : "Listen to Lex Friedman instead. This is sensationalism. The ‘smartest’ people in…"
- ytc_UgwICmMDk… : "Listening to all these videos where people explain AI and its scope, deep down s…"
- ytc_UgycKQDR6… : "Thank you Prof. Anand for sharing your valuable knowledge on AI. By the way how …"
- ytc_Ugy2CmzZP… : "AI is used as a convenient excuse to fire people, these jobs are actually being …"
- ytc_UgxlFIlDH… : "I mean I'm not going to listen a.i. but there is definitely a dark force or grou…"
- ytc_UgzukeTb_… : "A lot of my personal use of AI is local image generation so it's not really depe…"
- ytc_UgzcsOWg6… : "The paradox is, AI is the only surefire way to preserve endangered languages. Be…"
Comment

> I feel like instead of trying to insert self-driving cars into our current road system designed for human drivers, we need to create a new road system designed for self-driving cars.

Source: reddit | Topic: AI Harm Incident | Unix timestamp: 1765288887 | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nt07zfm","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_nszt3yp","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"rdc_nt14d78","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nt42x17","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"rdc_nt03k18","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
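A raw response like the one above is a JSON array of per-comment codes, which can be parsed into a lookup table keyed by comment ID. The sketch below is a minimal illustration, assuming the allowed values for each dimension are exactly those seen in the samples on this page (the real codebook may include more categories); `parse_codes` and `ALLOWED` are hypothetical names, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension (assumed from the samples shown;
# the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of per-comment codes)
    into a dict keyed by comment ID, validating each dimension."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# One row from the raw response above, used as a worked example.
raw = ('[{"id":"rdc_nt42x17","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["rdc_nt42x17"]["emotion"])  # approval
```

Validating against an explicit value set catches the common failure mode where the model invents a label outside the codebook, rather than silently storing it.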