Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@ReubenJAdams Okay, let's say the orthogonality thesis is correct (though I think it is a silly idea), just for the sake of argument (though I hate to argue just for argument's sake). That just makes it more important to get the values right from the start. In LLMs that means getting the training data right. In chatbots that just means getting the RLHF right. In OpenClaw there is a file for that. You just type it in. Anthropic uses constitutional AI, where they have a file of humanity's best ideas on morality. There has been a lot of work on the alignment problem. There has been a lot of success. Experimentation has shown us the failure modes. We have solutions for all of them, just need a little fine tuning. Safety research is actually going well. Doomers are delusional or motivated by self interest.
youtube 2026-02-15T16:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytr_Ugyp_47DibnCEJmD1vt4AaABAg.ATbk3fxHqDbATbv56HMrEt","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgwvgEsBGHUeIe7UKpd4AaABAg.ATDlzrRyZE7ATFmaM8OIh7","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_UgwvgEsBGHUeIe7UKpd4AaABAg.ATDlzrRyZE7ATFuKBAfZSI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytr_UgxMrcIwtyIJfDVdCwB4AaABAg.AT9qEcX7Oj0ATFH51sdP2j","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UgyR8SvVsAe9DrvZDfB4AaABAg.AT9O4enydXiATDIO4j67O-","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyR8SvVsAe9DrvZDfB4AaABAg.AT9O4enydXiATDKa2Ghf0W","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyR8SvVsAe9DrvZDfB4AaABAg.AT9O4enydXiATFwaljdjxN","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_UgzKKPDr3zJ5u5UBk2l4AaABAg.A2KPTUmhNb5A2WhH2E-4uS","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxscO9I4j44wtaoeIF4AaABAg.A1USgSGtcejAA1ZA0oCMfC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_Ugx31Z7j0FPzXEuobQJ4AaABAg.A1Mc-Lef8I5A1QgPeEh5BL","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]