Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_Ugzu2u-Lr…` · P.S...I've just had a disturbing thought that someone might do a pornographic de…
- `ytc_Ugw2MY48W…` · The question is why do you need it? Why human built created it? Whe you know tha…
- `ytr_UgykZsMOF…` · @נקניק-ה4ד First, what the hell is up when i reply to you, it feels weird typin…
- `rdc_hj3y310` · “Now that we’ve exploited our users data for machine learning and our machines h…
- `ytc_UgyJnXmJn…` · Interesting here on "multiple" counts - for presentation style (2x AI's) to the …
- `ytc_Ugx79b6ge…` · DANGER ,DANGER CODE RED, DANGER: "There are many forces in the world PUSHING for…
- `rdc_kt78gxj` · I have a friend who does sales for translation services, except his service is t…
- `ytc_Ugzao5DfX…` · If it is walking your dog and someone tries to steal the robot what happens?…
Comment
@radscorpion8 It's not unreasonable to imagine that a super-intelligent system can be designed to check with its handlers from time to time.
Let's ignore for the moment the challenge of choosing trustworthy AI handlers to set such a system's directives. Let's imagine that control is in the hands of the most competent and ethical people.
Let's imagine that during one of these tests the humans "in control" decide that they wish to modify the AI system for whatever reason.
Knowing its handlers to be less smart than itself, the AI system has an open field of possibilities: it might comply, it might feign compliance, or it might wrest control back.
Since we are already seeing scheming behavior arise in less advanced systems, the general confidence that AI labs will be able to root this behavior out while racing to build superintelligence is very low.
Again, it's not impossible to build a superintelligence that strives to keep itself aligned with human goals. It's just harder than simply continuing to scale these systems at whatever cost.
youtube · AI Governance · 2025-10-16T18:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
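The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of parsing such a batch and looking up a code by comment ID, in Python: the field names come from the array above, but the allowed vocabulary per dimension is an assumption inferred from the values that happen to appear in this sample, not an authoritative codebook.

```python
import json

# Allowed values per dimension: an ASSUMPTION inferred from the values seen
# in the sample output above, not the project's official codebook.
VOCAB = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: code}, validating each record."""
    codes = {}
    for rec in json.loads(raw):
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        codes[rec["id"]] = {dim: rec[dim] for dim in VOCAB}
    return codes

# Usage with a hypothetical one-record response (ID made up for illustration):
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytr_example"]["policy"])  # regulate
```

Indexing by `id` is what makes the "look up by comment ID" view above cheap: each coded comment resolves to its dimension values in one dictionary access, and malformed model output fails loudly at parse time rather than silently polluting the coded dataset.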