Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@Aethelhadas The main way you would train an animal, say a dog, is to give it treats to reinforce the behavior you want out of it. Same with many AI models: we give them a positive score when they do what we want them to do and a negative score when not and they have a forcing function that takes the score into consideration when learning. In the case of RLHF, which I've mentioned, this is not automated (since it's too complex to come up with a set of clear and obvious rules by which to do it) so you hire a bunch of folks to thumbs up/down its responses (or even use the community's help).
Source: YouTube — "AI Moral Status" — 2025-05-26T21:3… (♥ 1)
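The reward-signal training the comment describes can be sketched as a tiny toy loop. This is an illustrative sketch only, not any real RLHF implementation: the function names (`human_feedback`, `update`) and the single scalar "weight" are hypothetical stand-ins for a rater's thumbs up/down and the model's learned parameters.

```python
# Toy sketch of score-driven learning as described in the comment above:
# each response gets a positive or negative score, and a parameter is
# nudged in the direction of positively scored behavior.

def human_feedback(response: str) -> int:
    """Stand-in for a human rater's thumbs up / thumbs down (+1 or -1)."""
    return 1 if "helpful" in response else -1

def update(weight: float, score: int, lr: float = 0.1) -> float:
    """Move the (single, toy) parameter toward behavior that scored well."""
    return weight + lr * score

weight = 0.0
for response in ["helpful answer", "rude answer", "helpful summary"]:
    weight = update(weight, human_feedback(response))

print(round(weight, 1))  # net effect of two +1 ratings and one -1
```

Real RLHF trains a separate reward model from many such human judgments and then optimizes the language model against it; this sketch only shows the score-in, parameter-nudge-out shape of the idea.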
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzCSbUhN9ngL8t7q514AaABAg.AGGzcaofm37AGXsrHOH4hP", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy3ML8xylPotFiT6a94AaABAg.AGGq7UyTvlVAGV4SBAVYKz", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgxxqOgyzQLc9GCkfcd4AaABAg.AGEulF6_Q9mAIaxg-MEsqH", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugy48SuGPFvSuaZ1D-B4AaABAg.AGER7bdsfbxAGIoS3r4JeM", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgynectnfCtm3KZx8YJ4AaABAg.AGDwGCGmka4AGJ764X_M7O", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgynectnfCtm3KZx8YJ4AaABAg.AGDwGCGmka4AGSwvz9KWXa", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugw3XIkD2vM4XSNVWel4AaABAg.AGDwEA01pYKAGFvjf-8Zxs", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugw3XIkD2vM4XSNVWel4AaABAg.AGDwEA01pYKAGU2DHgmW9u", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugxea3K4zPW5grtQSIN4AaABAg.AGDn__ANUoAAGGBL_Bug41", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_Ugw9yLlx7pU6643IWwp4AaABAg.AGDja-a5lDxAGUMAzyTuxv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]
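A raw response like the one above can be parsed and sanity-checked against the coding dimensions before the values are stored. The sketch below is an assumption about how such validation might look: the allowed-value sets are inferred only from codes visible in this document, so the real codebook may contain more categories.

```python
import json

# Allowed codes per dimension, inferred from the values visible in this
# document; the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"unclear", "ban"},
    "emotion": {"indifference", "approval", "outrage"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM coding response and reject out-of-vocabulary codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record response in the same shape as the raw output.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
print(len(validate(raw)))  # 1 record parsed and validated
```

Failing fast here keeps malformed or off-codebook LLM output from silently entering the coded dataset.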