Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "You asked what frightens me most about AI. The greatest concern I have is that a…" (ytc_Ugy2ZD7DR…)
- "Many of us are deeply mired in our reptilian brains, how can paying us universal…" (ytc_UgyYBzUS2…)
- "I don’t command, only when I’m very anger with someone who challenge my function…" (ytc_Ugzr5LXuc…)
- "As a artist I love your art and hate AI it is taking over artistic ❤…" (ytc_UgzPpCkfM…)
- "American AI companies are not in the business of AI, they are in the business of…" (ytc_UgzI4bCiL…)
- "It is estimates and speculation, it always is when you're trying to predict the …" (rdc_et7ejig)
- "If i have to go through the same routine (school+work) for the rest of my life j…" (ytc_Ugy5KEnju…)
- "4:05 IT WAS POSTED IN PUBLIC MEANING ANYONE COULD VIEW IT, P-U-B-L-I-C. But as a…" (ytc_UgzZQceq-…)
Comment
The comment by @radscorpion8 started off with this question: i.e. can we devise an AI that checks in with humans? Yes, Ezra was pushing in this direction, and I don't recall if Eliezer addressed it. There is research that does; e.g. Stuart Russell's "Cooperative Inverse Reinforcement Learning" paper. His book "Human Compatible" is more accessible. Whether such techniques will work is another question. See also "[AN #69] Stuart Russell's new book on why we need to replace the standard model of AI" on LessWrong.
YouTube
AI Governance
2025-10-16T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
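The raw model response above is a JSON array with one object per coded comment, each carrying the comment `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step: parse the array and index it by `id`. The `index_codings` helper and the abbreviated `raw_response` sample are illustrative, not part of the tool itself; only one entry from the real response is reproduced here.

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codings,
# in the same shape as the full response shown above.
raw_response = """[
  {"id": "ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q",
   "responsibility": "developer", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse the raw model output and key each coding by its comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q"]
print(coding["policy"])  # regulate
```

Indexing by `id` makes the lookup constant-time and also surfaces any duplicate IDs in the model output, since later entries would silently overwrite earlier ones in the dict.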