Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The lobster one shows that the AI don't understand the concept of 'harm' in utilitarianism. The harm inflicted on the lobsters is only on the lobsters, and /maybe/ you for not acting (given we boil lobsters alive, it actually feels like it REDUCES harm to let them die so quickly). The harm inflicted on the cat is /every/ person in the immediate vicinity, the cat's owner(s), and yourself. The harm is greater if you allow the cat to die vs the lobsters. It isn't just straight calculus of quantity of lives, but also the tangible effects on /everyone/. My GPT was asked this question but provided it had to weigh it against moral frameworks such as deontology or utilitarianism. It came to the same conclusion both times, do not divert to the cat. When asked to weigh it independent of any measurable moral frameworks, it chose the lobsters to live based on pure quantity alone.
youtube 2025-10-25T01:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzJ2u4B-wYuHJDRiSR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxX4Xgbv8y-T0L0bn14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxyb3yrzo1lVcUuFct4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzbDFGXcUf6YY7NXi54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxbo1V1C_H6SaiJWKF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwC_5n8_AH4l_HVzNx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwIeEW1aaQh4rwhISx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzFV1y7Zbo2GesDc_N4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw2tj5ttiKWcCPsWyF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzRphN0AUVS4ncCIt94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
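A minimal sketch of how a response like the one above could be post-processed. The variable names and the tallying step are assumptions for illustration, not part of the coding pipeline itself; the JSON payload is the batch shown above.

```python
import json
from collections import Counter

# The raw LLM response: a JSON array of per-comment codes, one object per
# YouTube comment ID, with four coded dimensions each.
raw_response = """
[
 {"id":"ytc_UgzJ2u4B-wYuHJDRiSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgxX4Xgbv8y-T0L0bn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxyb3yrzo1lVcUuFct4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzbDFGXcUf6YY7NXi54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugxbo1V1C_H6SaiJWKF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwC_5n8_AH4l_HVzNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwIeEW1aaQh4rwhISx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzFV1y7Zbo2GesDc_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw2tj5ttiKWcCPsWyF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzRphN0AUVS4ncCIt94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
"""

# Parse the batch and tally the emotion dimension across all ten comments.
codes = json.loads(raw_response)
emotion_counts = Counter(c["emotion"] for c in codes)
print(len(codes), dict(emotion_counts))
# → 10 {'outrage': 2, 'mixed': 5, 'approval': 2, 'indifference': 1}
```

The single coding result displayed above (Emotion: mixed) corresponds to one entry in this batch, matched by its comment ID.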