Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "One use of ai where i would say it was a solid thing is a game called gug , you …" (`ytc_UgzXObXGV…`)
- "If you don't keep human beings in customer service, the new planning for failure…" (`ytc_UgyzW6JbU…`)
- "I must say that it is just matter of time for AI to reach to the level of creati…" (`ytc_Ugxd9hHQ9…`)
- "@bookishbookworm ok, well thank you. and for the record, I still feel that AI ar…" (`ytr_UgxbwmFr7…`)
- "listen A.I is already here in all applications of technology but the A.I is many…" (`ytc_UgxwMNdH6…`)
- "One-sided and only promotional. The paper that Sal mentioned is credible but dat…" (`ytc_UgxY4ZHLU…`)
- "Super AI is the fear of Yudkowsky and his think tank MIRI because -- phew. Well…" (`ytr_UgwRM1UtU…`)
- "Okay so if these deep fakes are LEGAL in Korea — why don’t the females do the sa…" (`ytc_UgzwF2hGx…`)
Comment
> These chatbots aren't trained for war, they aren't trained on military responses, and they aren't trained for geopolitics or anything else needed to make an actual informed simulation.
Yeah you're seeming to understand why it's a problem the pentagon wants to invest in LLMs.
Source: reddit · AI Jobs · 2026-02-27 (Unix timestamp 1772206036) · ♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_o7ohx9o", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_o7ozlko", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_o7p4ul0", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_o7pqrvv", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_o7pzjo8", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"}
]
```
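The model returns one JSON row per comment in each batch, so the "look up by comment ID" step reduces to parsing the array and indexing it by `id`. A minimal sketch of that lookup (the `lookup` function and `REQUIRED_KEYS` set are illustrative, not part of the tool; the JSON is copied verbatim from the response above):

```python
import json

# Verbatim copy of the raw batch response shown above.
RAW_RESPONSE = """[
{"id":"rdc_o7ohx9o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_o7ozlko","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"rdc_o7p4ul0","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"rdc_o7pqrvv","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"rdc_o7pzjo8","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]"""

# The four coding dimensions plus the comment ID; every row must carry all of them.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def lookup(raw: str, comment_id: str) -> dict:
    """Parse one batch response and return the coding row for comment_id."""
    rows = json.loads(raw)
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing keys: {missing}")
    by_id = {row["id"]: row for row in rows}
    return by_id[comment_id]

row = lookup(RAW_RESPONSE, "rdc_o7pqrvv")
print(row["policy"], row["emotion"])  # prints "regulate outrage"
```

Validating the keys before indexing catches truncated or malformed model output early, before a missing dimension surfaces as a `KeyError` deeper in the pipeline.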