Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- `ytc_UgxAMGoda…` — Hey Lavender Town, I really respect the passion in your video, but there are a …
- `ytr_Ugy2eowJL…` — "it is just very good at predicting what word comes next". I think this is a mas…
- `ytc_UgwHvxrsI…` — Even if big companies are suffering a bit they aren't actually losing yet, with …
- `ytc_UgwiHIdbq…` — I'd rather just use and train AI (at least willingly) as little as possible. I'd…
- `ytc_UgzzSKtTQ…` — I’m willing to pay good money to people warning us about how dangerous AI who do…
- `ytc_UgyeVqR8e…` — I wonder the exact interface / UX / UI & type of training required to effectivel…
- `ytr_UgziYw-IS…` — @natzbarney4504 AI systems are indeed designed with specific purposes by humans …
- `ytr_UgxBw2K6r…` — Lucalina Dreemur Yeah, not all, but some, and WE WILL MAKE A LOT of Artificial I…
Comment (reddit · Cross-Cultural · Unix timestamp 1522960248.0 · ♥ 3)

> Wow this article sucks; it doesn't even name the university or company in question!
> Had to Google that it's the Korean Advanced Institute for Science and Technology, and a company called Hanwha Systems.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_dwvgxcz", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_dwun8mz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_dwv9yv0", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_dwvbk2z", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_dwvogzx", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
```
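The raw response is a JSON array of per-comment records, one object per comment with the four coding dimensions. A minimal sketch of how such a batch could be parsed and indexed by comment ID, assuming the array shape shown above (the `index_codings` helper and its validation are illustrative, not part of the tool):

```python
import json

# Example batch response in the array shape shown above (two records for brevity).
raw = """[
  {"id": "rdc_dwvgxcz", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_dwun8mz", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]"""

# Every record must carry the comment ID plus all four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(payload: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    rejecting any record that is missing a coding dimension."""
    records = json.loads(payload)
    coded = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        coded[rec["id"]] = rec
    return coded

coded = index_codings(raw)
print(coded["rdc_dwvgxcz"]["emotion"])  # outrage
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dictionary access rather than a scan of the batch.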