Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Very interesting conversation. The last part that he mentioned about asking the AI for permission, that sounds a little bit frightening because imagine in the future when AI becomes even more intelligent and advanced if you authorize it to make decisions and ask its permission you are giving it immense powers. So even though the intentions are good it might backfire. AI computers are not organic biological life forms, so I feel like since they are man-made they do not qualify to be asked permission, for the aspect of safety and protection of the planet. You don't want robots to take over the world and tell us what to do although that's kind of already happening but we are still in control. I don't feel it wise giving AI powers that the machine might find ways to exploit against humans in the future.
youtube AI Moral Status 2022-07-04T02:0…
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgxT9rDjqb5T-CoHRed4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwyOs8HkwGLrk918Vd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxt9XvaOvWCwo6_rDR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz13iGHbLm7cgMn2xh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzEd4-XvErHdan48rx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
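A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the tool's actual pipeline: it assumes the response is a JSON array of per-comment objects, and the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more labels).

```python
import json

# The raw LLM response, copied verbatim from the dump above.
RAW = (
    '[ {"id":"ytc_UgxT9rDjqb5T-CoHRed4AaABAg","responsibility":"user",'
    '"reasoning":"deontological","policy":"liability","emotion":"outrage"},'
    ' {"id":"ytc_UgwyOs8HkwGLrk918Vd4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"},'
    ' {"id":"ytc_Ugxt9XvaOvWCwo6_rDR4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},'
    ' {"id":"ytc_Ugz13iGHbLm7cgMn2xh4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"},'
    ' {"id":"ytc_UgzEd4-XvErHdan48rx4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]'
)

# Label sets observed in this sample only (assumption: the full codebook
# may contain additional labels per dimension).
OBSERVED_LABELS = {
    "responsibility": {"user", "developer", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist"},
    "policy": {"liability", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> list:
    """Parse the raw response and reject rows with unexpected labels."""
    rows = json.loads(raw)
    for row in rows:
        for dim, labels in OBSERVED_LABELS.items():
            if row.get(dim) not in labels:
                raise ValueError(f"unexpected {dim}={row.get(dim)!r} in {row.get('id')}")
    return rows

codings = parse_codings(RAW)
print(len(codings))  # prints 5
```

Running this, the row for `ytc_Ugz13iGHbLm7cgMn2xh4AaABAg` matches the coding result shown above (ai_itself / consequentialist / regulate / fear), which is how a raw-response view like this one lets you trace a stored coding back to the exact model output.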