Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- rdc_j8vnn6l: I just think we should be extremely careful not to anthropomorphise LLMs. Rememb…
- rdc_oi171ft: Yes. I recently left a company that told their engineering team they no longer w…
- rdc_lnnj80y: I posted a pretty funny joke involving a guillotine, and the mods erased it. Don…
- ytr_UgynarM3y…: My thoughts are You can not nor should not own a style. Let's assume someone is…
- ytc_UgwUx8enR…: I think there’s still hope. Ai can never do everything we can, even if it seems …
- ytc_UgzmXXQhn…: Honestly I like the way the ai creates backgrounds and I'm definitely going to p…
- ytc_Ugx_sNwG2…: Tesla’s Full Self‑Driving (FSD / “FSD Supervised”) is not generally available fo…
- ytc_UgzmII1_u…: It'll turn into a situation where automated purchasing programs will run the eco…
Comment (youtube, 2016-08-10T07:3…)

Two problems with this video: 1. numerous programs pass the Turing test. Fooling humans is not hard. The first programs to do it in the 90's just made some spelling mistakes. Now chat bots harvest unwitting people's credit cards. 2. Human intelligence isn't necessarily all there is to intelligence. Something smarter than us might not resemble us at all. The original Turing test would be akin to claiming that something flies only when we mistake it for a bird.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
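
For readers who want to handle a coding result programmatically, the row above can be held in a small record type. This is only a sketch of one convenient shape, assuming the four dimensions and the timestamp shown in the table; the class name and field names are hypothetical, not the project's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the dimensions shown in the table above."""
    comment_id: str
    responsibility: str  # e.g. "developer", "none", "ai_itself"
    reasoning: str       # e.g. "consequentialist", "deontological", "mixed"
    policy: str          # e.g. "unclear"
    emotion: str         # e.g. "mixed", "approval", "indifference"
    coded_at: datetime


# The result shown above as a record; the comment ID is taken from the
# matching entry in the raw LLM response below (hypothetical class, real values).
example = CodingResult(
    comment_id="ytc_UgiRZubvHnok63gCoAEC",
    responsibility="developer",
    reasoning="consequentialist",
    policy="unclear",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```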
Raw LLM Response
[
{"id":"ytc_UghlVHdKSsFDl3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghOLJXJkinIxXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ughp0m-7OLTnKngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggkc-b_dQ7sPXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjUboft16pmnXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UggXYUtVSt6pTXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghCEDSQhCbKyHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgiRZubvHnok63gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugh_eqMzofsL5ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ught2widn_LlsngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
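
A raw response like the array above can be turned back into a per-comment lookup, which is what the "look up by comment ID" view does. The sketch below shows one way to do that in Python, assuming the response has been saved to a JSON file; the file name and helper name are hypothetical, and only the field names (id, responsibility, reasoning, policy, emotion) come from the output shown here.

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Index a raw LLM batch response (a JSON array of coded comments) by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # expects a list of dicts like the array above
    return {record["id"]: record for record in records}


if __name__ == "__main__":
    codings = load_codings("raw_llm_response.json")  # hypothetical file name
    coding = codings.get("ytc_UgiRZubvHnok63gCoAEC")  # ID from the response above
    if coding is None:
        print("comment ID not found in this response")
    else:
        for dimension in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{dimension}: {coding[dimension]}")
```

Run against the response above, this would print the same four values that appear in the coding result table for the selected comment.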