Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The way I see it, more automation is better. But as always, it is not a tool problem but a wielder problem. One thing I dislike is that we humans seemingly never plan long term; we don't design and predict, we just react. Our individual lives are sadly too short to grasp a proper perspective. I believe we can reach a point where we outgrow evolution and choose our own path. In any case, as long as we are alive, think and feel, and remain somewhat in control of ourselves, we will never give away the power of decision over our own lives. So I don't think we will ever let AI take over, no matter how advanced it may be. It will always be a relationship in which AI is a tool, or at most, in some science-fiction scenario, a different species we live in symbiosis with. What seems more probable to me is a time when everything is automated and we have access to off-planet resources, a situation in which the current form of ownership, and capitalism itself, would no longer make much sense. So in a way it is a matter of culture that we must work on, building a proper framework now to use later when that happens. When all jobs are automated we will still have the last say over the direction of our development, so we humans will have only one job, politics: all of us taking part in a global senate and ruling ourselves. Our job would be to stay informed, to debate, and to vote, while every living requirement, with all expected standardized comforts, would be provided automatically by our machines, since no one would hold personal ownership over such important means of production.
Source: YouTube, "AI Moral Status", 2025-07-24T14:0…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
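The coding result is a record over four categorical dimensions. Below is a minimal sketch of how such a record could be validated in Python; the allowed category sets are inferred only from the values visible in this batch and are an assumption, not the project's full codebook.

```python
# Sketch of the coding schema as displayed above; the allowed category sets
# are inferred from this batch only (an assumption), not the full codebook.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself", "distributed"},
    "reasoning": {"unclear", "deontological", "consequentialist", "contractualist"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "disapproval", "approval", "mixed", "outrage", "resignation"},
}

def validate_record(record: dict) -> None:
    """Raise ValueError if a coded record uses a value outside the observed sets."""
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            raise ValueError(f"{dimension}={value!r} not in observed categories")

# Example: the coding result shown above for this comment.
validate_record({
    "responsibility": "user",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "approval",
})
```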
Raw LLM Response
[ {"id":"ytc_UgzQPGaWw2oLblu_K494AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzRJ8oGF2CzjRvCL354AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"}, {"id":"ytc_UgzKrc3_MLdyyhT8hr94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxUCPhWeAt1zGBJVz54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgweOEIWfipmM-CXKql4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzl9IkPR9fV79xkSTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxYicX8_vFODHzoD614AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyWjAIrzOVVmRWx1Qh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw0VcjXjnPGnU1S9sh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwaMwWH2JAEoz1oCbp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"} ]