Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don't know, the fact is, it's very difficult for the people developping AIs to give them the purpose we need, because of how our directives can be misunderstood by it. It's called alignment, and it proves to be hard to maintain, despite the progress made. A misaligned AI, even by a simple detail overlooked at first, can become a real problem when the AI becomes more powerful. I heard of a project to make an AI just to align other AIs. Then you can just follow the slippery slope of possible consequences and thinking about the existential threats AI can become in theory. Like the Universal Paperclip browser game.
youtube AI Governance 2025-08-27T01:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_UgyUXMKlzhA9MPuE6A14AaABAg.AMJSNePnw3sAMJtgFdF6d8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMJu-gEggb8", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgyevG6T0Yv2B5Dtq1p4AaABAg.AMJOatTeNH3AMO8JWJ5e5y", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwY4BbWnnGOfgWa4ad4AaABAg.AMJMw1anxiJAMpliO-Kv3E", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyhoHKm57IPLa6U9654AaABAg.AMJJ4un17SsAMJx1deeK6S", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgweSPVI-jPBlc1EarZ4AaABAg.AMJGMOsdkQOAMLqmEHNWJn", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_Ugyt9aueFxIIF1NMBup4AaABAg.AMJB_Ns_waBAMJHGmvArQE", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzM1ZfqSY52KUb0OtN4AaABAg.AMJBTFEZECQAMJD7XM03OY", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugx6aDcBy9hJKaJwBWF4AaABAg.AMJ9FCazYPvAMJI7kp4w3i", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwGkoT9VHBrbj4IsUp4AaABAg.AMJ6IUVdBsRAMJEExhqzMu", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
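A batch response like the one above can be sanity-checked before the codes are stored. The sketch below is a minimal validator, not part of the actual pipeline: the `CODEBOOK` sets are assumed from the values that appear in this log, and the function name `validate` is hypothetical.

```python
import json

# One record in the shape the raw LLM response uses (truncated sample).
raw = '''[
  {"id": "ytr_Ugx6aDcBy9hJKaJwBWF4AaABAg.AMJ9FCazYPvAMJI7kp4w3i",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

# Allowed codes per dimension -- assumed from the values seen in this log,
# not an official codebook.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def validate(records):
    """Return (id, dimension, value) triples whose code is missing or off-codebook."""
    bad = []
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                bad.append((rec.get("id"), dim, rec.get(dim)))
    return bad

records = json.loads(raw)
print(validate(records))  # an empty list means every code is valid
```

Running this over the full response above would flag any hallucinated code (e.g. a new emotion label) before it reaches the coded-results table.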