Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with their comment IDs):

- "It's certainly not no. Otherwise we would do things without moral consequences. …" (rdc_hazjat5)
- "Am I getting tricked? Where is my brain playing games because that looks like AI…" (ytc_UgwCaKdBd…)
- "The dangers are in the state, not the use of AI. Without the state you have choi…" (ytc_UgxAWp0RY…)
- "For all this AI jobs, and robot jobs where to you get energy and material to bui…" (ytc_Ugz6MYEwA…)
- "I read about the paper "Finding Peter Putnam" they talked there a different kind…" (ytc_UgzUNvbs_…)
- "I can understand why Google is scared of ChatGPT. Search results on Google are …" (ytc_UgxHRLW1y…)
- "Yes, there are already news reports about AI driving people to psychosis. Maybe…" (rdc_myir2vq)
- "@mmmcola6067 Not an issue of Empathy. It is an issue of not living in fantasy. M…" (ytr_Ugxk2MDjA…)
Comment

You'd think they'd make effective break and off day schedules, but I guess it's better to get max work out of a human and use AI to track if they yawn too much.

- Source: reddit
- Topic: AI Responsibility
- Posted: 1616251077.0 (Unix timestamp, ≈ 2021-03-20)
- Score: ♥ 64
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_l60ntmy","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_l5dms5t","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"rdc_l5ennyh","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"rdc_grkt6qc","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"rdc_grlcc7s","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
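A raw response like the one above is only usable once each row is checked against the coding schema. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred from the labels visible on this page ("company", "deontological", "regulate", "outrage", plus fallback labels such as "unclear", "mixed", and "none") and the real codebook may define more; the function and variable names are illustrative, not part of the actual pipeline.

```python
import json

# Allowed labels per coding dimension. These sets are an assumption,
# reconstructed from values seen in this page's table and raw response;
# the actual codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"company", "none", "unclear", "mixed"},
    "reasoning": {"virtue", "deontological", "unclear", "mixed"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose every
    dimension carries an allowed label and that have an id."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a coding without a comment id cannot be stored
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example with one valid row and one row carrying an unknown label.
raw = '''[
  {"id":"rdc_l60ntmy","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_example","responsibility":"robot","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''
print([r["id"] for r in parse_codings(raw)])  # prints ['rdc_l60ntmy']
```

Filtering rather than raising on a bad row lets a batch of codings survive one malformed entry, which matters when the model occasionally invents a label outside the codebook.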