Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Think this is bad? Wait until enforced MAID coupled with AI JUDGMENT. All your p…" (ytc_UgzDtwQtI…)
- "The utter audacity to put artist after ai When it's the ai doing the work not th…" (ytc_UgyO3BhcB…)
- "If everyone knows the reality that artificial intelligence will destroyed human …" (ytc_UgwN1ut7c…)
- "Thanks for your words! Most scientists and politicians keep talking about climat…" (ytc_UgwIlN8ev…)
- "I dont see a lot of jobs being automated. Think about all the different trades. …" (ytc_UgwMv7FXs…)
- "I do not think the real danger lie in conversational/assistant/\"creative\" types …" (ytc_Ugxy9DxGD…)
- "I can't speak for NY, but here in Michigan they have been doing testing in the s…" (rdc_czxnnw6)
- "you would think if someone tels an ai that they are going to do that to themsel…" (ytc_UgygIiia2…)
Comment
We don't know. The fact is, it's very difficult for the people developing AIs to give them the purpose we intend, because our directives can be misunderstood. This is called alignment, and it proves hard to maintain despite the progress made. An AI that is misaligned, even by a simple detail overlooked at first, can become a real problem as it grows more powerful. I have heard of a project to build an AI just to align other AIs.
Then you can follow the slippery slope of possible consequences and think about the existential threats AI could become in theory, like the Universal Paperclip browser game.
youtube
AI Governance
2025-08-27T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytr_UgyUXMKlzhA9MPuE6A14AaABAg.AMJSNePnw3sAMJtgFdF6d8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMJu-gEggb8","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMO8JWJ5e5y","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgwY4BbWnnGOfgWa4ad4AaABAg.AMJMw1anxiJAMpliO-Kv3E","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyhoHKm57IPLa6U9654AaABAg.AMJJ4un17SsAMJx1deeK6S","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgweSPVI-jPBlc1EarZ4AaABAg.AMJGMOsdkQOAMLqmEHNWJn","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_Ugyt9aueFxIIF1NMBup4AaABAg.AMJB_Ns_waBAMJHGmvArQE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzM1ZfqSY52KUb0OtN4AaABAg.AMJBTFEZECQAMJD7XM03OY","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugx6aDcBy9hJKaJwBWF4AaABAg.AMJ9FCazYPvAMJI7kp4w3i","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwGkoT9VHBrbj4IsUp4AaABAg.AMJ6IUVdBsRAMJEExhqzMu","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
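The raw response above is a JSON array of per-comment codes, one object per comment ID, with the four dimensions shown in the Coding Result table. A minimal sketch of how such output could be parsed and tallied (the field names come from the response above; the helper names and the example IDs are hypothetical, and the completeness check is an assumption, not part of the tool):

```python
import json
from collections import Counter

# Example of the model's output shape: a JSON array of per-comment codes.
# The IDs here are placeholders, not real comment IDs.
raw = """[
  {"id": "ytr_example1", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_example2", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# The four coded dimensions, as shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse the model's JSON array, keeping only fully coded rows."""
    rows = json.loads(text)
    return [row for row in rows if all(dim in row for dim in DIMENSIONS)]

codes = parse_codes(raw)

# Tally each dimension's values across the batch.
tally = {dim: Counter(row[dim] for row in codes) for dim in DIMENSIONS}
print(tally["policy"])
```

Dropping partially coded rows rather than guessing missing values keeps the tallies honest; a stricter variant could also reject values outside a known codebook.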