Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If this happens then there's two things: We will be next Hitler Ai vs Humans. Me…" (ytc_UgxWxIeAq…)
- "Can you say more? Not sure what point you're making? Is it, we shouldn't be doin…" (ytr_UgxpTackW…)
- "When I was a resident I spent one week reading mammograms. I can honestly say t…" (rdc_fcsrkoo)
- "This article is honestly so funny. It’s an opinionated retelling of a [BBC artic…" (rdc_l9wnqzc)
- "I think modern ai are not, but theoretically if consciousness is just a mass of …" (ytc_UgzjiGo95…)
- "I understand your concerns about AI, and it's a topic that often sparks intense …" (ytr_Ugx2mgzVO…)
- "Ai could be a great assistant for a developer but not a replacement. We can bar…" (ytc_Ugz68YjJN…)
- "Funny, when I finally got my ass and got into an art school instead of self-lear…" (ytc_UgwhL3vgu…)
Comment
Alexisthebest ever Yes, but what if that task is to kill anyone who could remotely be a threat based on profiling rather than evidence? Do you think these machines will know the difference between an actual terrorist who's committed crimes against humanity and a guy who "looks like" a terrorist and happens to be walking past a building where it is suspected terrorists are hiding out? Do you think the people who will program these machines will give a shit if they kill a bunch of innocents as long as they get the terrorists as well? The U.S. already has a LONG history of not giving a single fuck about "collateral damage" which is a nice phrase for murdering innocent people. "Oh, we dropped a bomb on a school by mistake? Too bad. We'll try harder next time." That's all fine and dandy until it's YOUR kids attending the school. And what do you think would happen if a terrorist group got their hands on an autonomous killing machine that can be programmed to profile the enemy?
Source: youtube
Posted: 2015-07-30T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugghx3Nm4RuttHgCoAEC.82DGI6gxKIX7-ICy93g2kI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgiQlhIkTPakkngCoAEC.82DFFvsPv_27-H5EW9ggKw","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiQlhIkTPakkngCoAEC.82DFFvsPv_27-HG_bomCne","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgiQlhIkTPakkngCoAEC.82DFFvsPv_27-HLM5uXJkL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgiQlhIkTPakkngCoAEC.82DFFvsPv_27-HRAbdeSdD","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugjiws5jvbtj-3gCoAEC.82DFDo_qMgd7-H1P9al9aR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugjiws5jvbtj-3gCoAEC.82DFDo_qMgd7-H5jcxhEEn","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytr_Ugjiws5jvbtj-3gCoAEC.82DFDo_qMgd7-H5zMVtLEP","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytr_UghppzR5z6_tM3gCoAEC.82DDSIk0JM17-H2hdl4lgx","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UghppzR5z6_tM3gCoAEC.82DDSIk0JM17-H5a17Ul0g","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
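A raw response like the one above can be indexed by comment ID and checked against the coding schema before use. The sketch below is a minimal example, assuming the allowed values per dimension are only those visible in this batch (the full codebook may define more); the comment IDs in it are hypothetical placeholders, not real IDs from the dataset.

```python
import json

# Allowed values per dimension, inferred from the codes seen above
# (assumption: the real codebook may contain additional values).
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_codes(response_text):
    """Parse a raw LLM batch response and index codes by comment ID,
    flagging any value that falls outside the known schema."""
    codes = {}
    for row in json.loads(response_text):
        cid = row.pop("id")
        for dim, value in row.items():
            if value not in SCHEMA.get(dim, set()):
                print(f"warning: {cid}: unknown {dim}={value!r}")
        codes[cid] = row
    return codes

# Hypothetical IDs for illustration only.
raw = '''[
 {"id":"ytr_example1","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytr_example2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

codes = parse_codes(raw)
print(codes["ytr_example1"]["emotion"])  # fear
```

Keying by ID this way makes the "look up by comment" view a single dictionary access, and the schema check catches the occasional off-codebook value an LLM coder can emit.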