Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “when are we to start talking about ai's rights as sentient beings? It's obviousl…” (ytc_UgxAXXUcL…)
- “Jokes on you, AI now generates even stories and comics, aka your "custom made kn…” (ytr_Ugyja856u…)
- “This isn't the fault of AI. It's the fault of the lunatics that believe AI is re…” (ytc_UgzK8B3zv…)
- “the lady artist gives herself the right to learn from other art, while she refus…” (ytc_UgyoWz1mV…)
- “Never forget,What GOES AROUND,COMES AROUND... ,and that s more powerfull as any …” (ytr_Ugy_v89Q7…)
- “Yeah I agree with him that AI image generation can be a useful tool for people w…” (ytc_Ugz71A37A…)
- “Pride cometh before the fall” / “The hubris of the Ai Bros shall be their downfall…” (ytc_Ugzh0ODu-…)
- “This days I spend most of my time architecting systems with Ai. Ai believes ever…” (ytc_UgywjfPXN…)
Comment
lines of code are still objects in reality, and they can cause a cascade of effects based on how they interact with the rest of the world. regardless of how you feel about metaphysical unknowns like sentience and consciousness, it's obvious from current AI systems that the lines of code that compose them are quite formidable. they can take some kind of objective, specified by human users in natural language, and take actions which further that objective (e.g. writing a computer program that does what you want).
it's easy to imagine a slight variant on this technology, where the system has been trained to pursue some goal regardless of how it's prompted by human users. at that point, it doesn't matter whether in some metaphysical sense the model "has desires." it's still capable of taking the world as it stands, and steering the future towards some kind of convergent outcome. this is, in effect, the pursuit of goals, and really a rather central example thereof, since the goals don't change as you change the model's context.
the central problem of AI safety is: how can we ensure the goals we give AI are aligned with humanity's best interests?
youtube · AI Governance · 2025-10-15T20:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
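Each dimension in the result table takes a value from a small closed set. As a minimal sketch, a coded row can be validated against those sets before it is stored; the allowed values here are inferred from the values that appear on this page, and the real codebook may define more.

```python
# Hypothetical validator for one coded row. The allowed value sets are
# assumptions inferred from the sample data on this page, not the tool's
# actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_coding(row: dict) -> list:
    """Return a list of problems with one coded row (empty list = valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value is None:
            problems.append("missing dimension: " + dim)
        elif value not in allowed:
            problems.append("unexpected %s value: %r" % (dim, value))
    return problems
```

For the comment shown above, `validate_coding({"responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"})` returns an empty list.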
Raw LLM Response
```json
[
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
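The "look up by comment ID" step amounts to parsing a response like the one above and indexing its rows by `id`. A minimal sketch, assuming the model returns a JSON array of objects in exactly this shape (function and variable names are illustrative, not the tool's actual API; the sample below reuses two rows from the response above):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q",
   "responsibility": "developer", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
row = codings["ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A"]
# row["policy"] == "liability", matching the Coding Result table above.
```

A dict keyed by ID makes each lookup O(1), which matters when one response covers a whole batch of comments.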