Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by their comment ID; below is a random selection of samples.
- `ytc_UgzUonfME…`: "It doesn't matter if AI takes peoples jobs. The work is still being done and the…"
- `ytc_Ugy5l42IH…`: "Can't we just train an internal AI critic/fact checker so it can spot when it's …"
- `ytc_Ugy54XyU3…`: "The idea that any world super power would decide to not go all in on this kind o…"
- `ytc_Ugw70zC0K…`: "this style of prompting it called RTRI; RTRI stands for Role, Task, Requirements…"
- `ytr_UgyuUTz-3…`: "@adamchurvis1 Did you understood what you said? Then please, explain to me what…"
- `ytc_UgzvGWELq…`: "So the replaced clueless people with clueless AI. I THOUGHT THIS WAS SUPPOSED TO…"
- `ytc_UgzXn-T7r…`: "It's all very clear to me: We need to utilize AI to build the Roman Empire in sp…"
- `ytc_Ugzc3tgwA…`: "“Sir Geoffrey Hinton’s remarks at minutes 23–24 were deeply insightful — calling…"
Comment

> Kaledrone /\ It wasn’t predetermined, it’s just that part of the A.I’s programming includes a sense of humor. He was joking. In almost every interview, Sophia is found joking around.

Source: youtube · AI Moral Status · 2020-05-31T07:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgwTxa9Iazadw6eSXgx4AaABAg.97c7iz7sKj497wbQwyGA6W","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxp2YUNzHNyXZF-TxB4AaABAg.97WmWHSkO3d99JWNLWM4kG","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwX8h01v7uOCeDf_GF4AaABAg.97P0EAYKU8I97l1TGeITms","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwCha1qsKSyBmE0g3t4AaABAg.97Er1DU3Pn497JUhx0S-6e","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgwaRt5X3vaWbmtFaCF4AaABAg.979URUPuhmT97k44np_wUa","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzg3oDxzh5Mb9cUEnN4AaABAg.9760YseMXXo97oAEXw2rwb","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgyI_R-MIO31sw58chh4AaABAg.975nHOSo4CY97ZDfM9iupK","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgzMPrR3QHGIFkKPPwJ4AaABAg.972YIH1PFqB986FbMKtT6p","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzMPrR3QHGIFkKPPwJ4AaABAg.972YIH1PFqB98GYvniTPyI","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgxWjn-TglX6rqogCxh4AaABAg.96h5fZJVjRn96h75KAupSA","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
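Each raw response is a JSON array of per-comment records with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and validated is shown below; the allowed label sets are inferred only from the values visible in this sample, and the real codebook may include additional labels.

```python
import json

# Label sets inferred from the sample records above (assumption: the full
# codebook may contain more values per dimension).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"unclear", "regulate"},
    "emotion": {"fear", "indifference", "mixed", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" field and every coding
    dimension carries a recognized label; malformed records are dropped
    rather than raising, so one bad row does not discard the batch.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping rather than raising on malformed rows is a deliberate choice here: LLM batch output occasionally contains a single garbled record, and salvaging the rest is usually preferable to re-querying the whole batch.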