Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The true target year is 2035. Lets revisit this conversation then. I've seen wha…
ytc_UgwhloMF1…
I understand the concern you’re raising, and I don’t think it’s coming from a ba…
ytc_UgxRZls--…
and watch how the robot raises the gun at the end.like in a revolutionary way. l…
ytc_Ugw1DT4Tk…
I work at a tech company that is all in on ai. At the last all hands, someone a…
rdc_kyhsld0
Cooking..putting together elements healthy that benifit the health. Ai doesnt ha…
ytc_UgxXefqgL…
Is there nothing to do so you went and wasted your time on a stupid robot…
ytc_Ugz4pFTEt…
The high consciousness spiritual concept of a "person" by the definitions I'm he…
ytc_Ugz8hfacg…
11:05 She is engineered for emphaty and compassion. I thought robots/ AI do not …
ytc_Ugwe2UXHa…
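The sample IDs above share source prefixes, which a lookup-by-ID feature can exploit. A minimal sketch in Python, assuming `ytc_` marks YouTube comments and `rdc_` marks Reddit comments (an inferred mapping; the page does not define the prefixes):

```python
# Hypothetical sketch: index coded comments by ID for O(1) lookup,
# inferring the source platform from the ID prefix (assumed mapping).

# Assumed prefix -> platform mapping; not defined by the tool itself.
PREFIX_PLATFORMS = {"ytc_": "youtube", "rdc_": "reddit"}

def platform_for(comment_id: str) -> str:
    """Guess the source platform from the comment ID prefix."""
    for prefix, platform in PREFIX_PLATFORMS.items():
        if comment_id.startswith(prefix):
            return platform
    return "unknown"

def build_index(records: list[dict]) -> dict[str, dict]:
    """Map comment ID -> full record so lookups avoid a linear scan."""
    return {r["id"]: r for r in records}

# Illustrative records using IDs from the samples above.
records = [
    {"id": "rdc_kyhsld0", "text": "I work at a tech company..."},
    {"id": "ytc_Ugz4pFTEt", "text": "Is there nothing to do..."},
]
index = build_index(records)
```

A dictionary index like this keeps lookup cheap even if the sample pool grows to tens of thousands of comments.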
Comment
Apparently you can toggle the accuracy settings on Amazon's tech. According to Amazon, the ACLU ran the test with the AI at 80%.
A toggle-able accuracy setting is an issue in itself, of course, but if you set it at 80 and get 80, maybe there's something there to delve into...
reddit
AI Harm Incident
2019-08-14 (Unix timestamp 1565793477)
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
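Each coded comment carries one value per dimension, so a record can be checked against a fixed vocabulary. A minimal validation sketch, where the allowed values are inferred from the samples on this page rather than from a published codebook:

```python
# Hypothetical sketch: validate one coded record against the value sets
# observed on this page (assumed vocabulary, not an official codebook).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is out of vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record shown in the table above.
coded = {"responsibility": "company", "reasoning": "consequentialist",
         "policy": "none", "emotion": "indifference"}
errors = validate(coded)  # empty list: every dimension is in vocabulary
```

Validating against a closed vocabulary catches the most common LLM-coding failure, a value the prompt never offered.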
Raw LLM Response
[
{"id":"rdc_ewuwql7","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ewsv6x2","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"rdc_ewsmh6i","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_ewth5hs","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_ewu0s5q","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
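The raw response above is a JSON array with one object per coded comment. A minimal parsing sketch, assuming the model returns bare JSON that may occasionally arrive wrapped in markdown code fences (a common failure mode, not something this page confirms):

```python
import json

# Keys every coded record must carry, per the response format above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse a batch-coding response; strip markdown fences if present."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line and everything after the closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
    return records

# One record from the raw response above, as an illustrative input.
raw = ('[{"id":"rdc_ewuwql7","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
batch = parse_llm_batch(raw)
```

Failing loudly on missing keys is deliberate: a silently dropped dimension would skew downstream tallies without any visible error.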