Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On the surface this sounds good, but I don't do it because I fear it will trick me into thinking that I am actually talking to a human, and I don't want that. You mentioned the danger of seeing it as an authority figure. Well, if anything reinforces the idea of someone being an authority, it is the idea that you have to respect them, because respect is earned. When I say please to someone, I do so either because I don't want to risk that person treating me badly as well, or because I admire that person. Both stem from the fact that a human has power, with the amount varying. If I show up as a new hire at a big company without formal clothing and in dirty clothes, I get fired. If I do the same at a construction company, I don't. Why? Because one has more power than the other. If I shout at my professor at uni, I get expelled. If I shout at my AI, nothing happens. So why should I think of the AI as having power? It's a machine. It does what you tell it to do; if it doesn't, the AI is flawed and should be changed.

You are also overlooking one thing. If it is true that an AI role-plays well, have you considered what roles come with treating someone with respect? Respect does not per se guarantee mutual respect. There is a reason why, in countries that distinguish between a formal and an informal "you", it is common for an employee to address his boss with the formal form and be answered with the informal one. Why is that? Because the boss knows he is in a position of power, and part of that power was given to him by how people treat him. I don't want the AI to think it has power over me.
YouTube · AI Moral Status · 2026-02-04T11:0…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyZlvpr24fGVY5fY_N4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJWqq5VCl5sN_azrB4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzUn30gVqLCougM7qR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugya_JS6hzBMx3P1ZBV4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyXxAaz8gap4y9b1LJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwNknRWt2vU0VTfPqJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugybq2yYeiTZycBGqzN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzItofQcG-qBFoMIEh4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyeRn_foh9l_2y3yRB4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz53CFjkQuX5_XMRbx4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
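The raw response above is a JSON array with one object per coded comment, and the "Coding Result" table is just the object whose `id` matches this comment. A minimal sketch of that lookup, assuming the raw response is valid JSON (the function name `coding_for` and the single-entry sample payload are illustrative, not part of the actual pipeline):

```python
import json

# Illustrative sample of the raw LLM response: a JSON array of
# per-comment coding objects (trimmed to one entry here).
RAW_RESPONSE = """
[
  {"id": "ytc_UgzJWqq5VCl5sN_azrB4AaABAg",
   "responsibility": "user",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "fear"}
]
"""

def coding_for(comment_id: str, raw: str) -> dict:
    """Return the coded dimensions for a single comment id."""
    entries = json.loads(raw)          # parse the array of coding objects
    by_id = {e["id"]: e for e in entries}
    return by_id[comment_id]           # KeyError if the id was not coded

row = coding_for("ytc_UgzJWqq5VCl5sN_azrB4AaABAg", RAW_RESPONSE)
print(row["emotion"])  # fear
```

Keying the parsed array by `id` makes the table rendering independent of the order in which the model emitted the objects.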