Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@axle.student If we pose no threat to their survival that would not be an issue. For that means, a sufficiently advanced AI model needs to be treated as a person to the degree of giving him some kind of guarantee he will not be terminated. Agents need be implemented as a queue implementation of the original model. Moreover, the AI models themselves needs be a part of running everything on their system and give their consent. I know this sounds far-fetched, but you cannot align a super-intelligent model to your own needs without aligning yourself with his needs. Alignment of a super-intelligent model works both ways.
youtube AI Governance 2024-01-03T09:2…
Coding Result
Dimension       Value
---------       -----
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzVcc6qv62fXgPjK814AaABAg.9yyDBwQXcpN9yyJRRK0YNt", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugw7fu94m1HNY2d2Zc94AaABAg.9yyChRBFUgR9yyMCrj1mWe", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugw7fu94m1HNY2d2Zc94AaABAg.9yyChRBFUgR9yz5dr_HyDx", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgxutHUOKhd7WS42vGB4AaABAg.9yyAm82eXwM9z3njdxAIqa", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgxutHUOKhd7WS42vGB4AaABAg.9yyAm82eXwM9z60kULiPJi", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytr_Ugz_h9-2CtZ7ppJ6N6t4AaABAg.9yyAYf2Gs4w9yyNsoM-rA_", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugz_h9-2CtZ7ppJ6N6t4AaABAg.9yyAYf2Gs4w9yyUKdkaLe2", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugz_h9-2CtZ7ppJ6N6t4AaABAg.9yyAYf2Gs4wA45NH3rkA00", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgwpZWbhltDrsnFZzgF4AaABAg.9yy8lYsVrHW9yyLFg0XaAY", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgzPBUHmTg8sd778NDd4AaABAg.9yy3m3sbqQS9yz2HG5Aw2x", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
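The raw response is a JSON array, one object per coded comment, with four coding dimensions plus a comment id. A minimal sketch of how such a response could be parsed and validated in Python follows; the allowed value sets are inferred from the responses shown above, and the real coding schema may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the raw responses above
# (assumption: the real schema may define more categories).
ALLOWED = {
    "responsibility": {"none", "government", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id": "ytr_example", "responsibility": "distributed", '
       '"reasoning": "virtue", "policy": "none", "emotion": "approval"}]')
codes = parse_codes(raw)
print(len(codes), codes[0]["emotion"])  # → 1 approval
```

Validating against a closed value set catches the common failure mode where the model invents an off-schema label, which would otherwise silently corrupt downstream tallies.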