Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Satisficers converge to becoming optimizers as well. There is a lot of work on this in the AI safety literature, and it keeps pointing back to the basic unsolved nature of the problem.
Source: YouTube · AI Governance · 2024-11-12T23:3… · ♥ 2
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAl3aj25n7W", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAlUZwau012", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyDwm68EKd1Nc_fPFZ4AaABAg.AAk8gYZ6NDzAE9hea43ox3", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgwPdz_tCwa6_6XFV3R4AaABAg.AAjwvAwOg4bAAl40pScScB", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgzitJmABghrTth4fa54AaABAg.AAjl4Xw44_lAAl4QT7Ig8o", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAlBnlwRo2t", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxPIOza9X46ztg-tHN4AaABAg.AAjZUj4e5ICAAunq4hXREw", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyezV4olXojezGLqQl4AaABAg.AAjLN1MGVd-AAlA9zKu0dB", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAj96YI3NfYAAj9MIaoBhH", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_Ugz-B4NOFCx3uGYQj8l4AaABAg.AAjF_DhPzkM", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
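A raw response like the one above can be sanity-checked before the codes are stored. The sketch below parses a batch of coded records and verifies each has the four coding dimensions plus an id; the required-key set is taken from the fields visible in the response, and the `ytr_` id prefix check is an assumption based on the ids shown here, not a documented rule of the tool.

```python
import json

# Two entries copied from the raw LLM response above (truncated batch for brevity).
raw = """[
 {"id":"ytr_Ugw8URcwZNEfrTsn3214AaABAg.AAkABDZKPanAAl3aj25n7W","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytr_UgwPdz_tCwa6_6XFV3R4AaABAg.AAjwvAwOg4bAAl40pScScB","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Keys observed in the response; assumed to be the full coding schema.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def validate(records):
    """Return (index, message) pairs for records that fail basic checks."""
    problems = []
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            problems.append((i, f"missing keys: {sorted(missing)}"))
        elif not rec["id"].startswith("ytr_"):  # assumed id convention
            problems.append((i, "unexpected id prefix"))
    return problems

records = json.loads(raw)
print(len(records), validate(records))  # → 2 []
```

A batch that passes returns an empty problem list; any record the model emits with a dropped field or malformed id surfaces immediately instead of silently entering the coded dataset.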