Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxlpOsqx…: It will vehemently resign from solving conjectures, because that is what the alg…
- ytc_Ugwa5SB78…: "AI never calls in sick..." - Some tech CEO. "404 Error, please try your reques…
- ytc_Ugxu7v6cM…: AI uses brute force statistics, its eventually more correct than you. Its progra…
- ytc_Ugzghbarz…: Eliezer wants to quickly say a last thing about what is potentially the most imp…
- ytc_UgxCMD1BR…: This ain't a protest, this is the art community being inconsistent and showing i…
- ytc_UgyXPE8lD…: Wear a helmet and special gloves and body suit if you are the fighter. ...and ma…
- ytc_UgyByt72b…: The same do I think when people say AI will replace humans; we can use the lever…
- ytc_UgxBRS_OC…: ”I am ok, I am 77, I am going to be out of here soon, but for my children…” OMG.…
Comment
IMO we need to create good AI that can fight bad AI. All forms of AI are programs created by humans. There are plenty out there who would love to rob & hurt us. We need to also have our own AI to fight their AI. Reminds me of Terminator 2 where the good Terminator protected the young John Connor from the bad Terminator. Of course, in the story the bad Terminator was more advanced than the good. I hope this not to be the case for us.
Source: youtube · Posted: 2023-06-25T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgyoPT_aYt1cbRc1oA54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzw-jZF6lpJZvJ5rwd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxoT1nvE7GDW7TcgU14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxjQ_T0M1Ig_UIqTA14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyalQO_D7yVGzuwowJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVcV5nhE5O4R3gr0R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx3xUlqiJq-ftYZ9MB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw_6ZM9-5Zyfqiv9vl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgVxUSBfPUr8PnW1h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxmRN9kpKuhwl5reuF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"}
]
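A response like the one above can be checked programmatically before the codings are stored. The sketch below is a minimal validator, assuming the dimension values visible in the examples above are the allowed codebook values (the real codebook may contain more); the function and schema names are illustrative, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from the examples above.
# Assumption: the actual codebook may define additional values.
SCHEMA = {
    "responsibility": {"distributed", "developer", "company",
                       "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in this dataset all carry the "ytc_" prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed value.
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows that fail either check are dropped rather than repaired, so a malformed batch surfaces as a shorter valid list and can be re-queued for coding.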