Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I believe that AI could be appropriated to help humanity by taking over manual and intellectual tasks such that our schedules are freed up to pursue further education throughout our lives without the burden of survival. A society could be designed by rewarding education with income rather than requiring income to pursue education. Just by trying educate yourself, you would receive the income reward regardless of your grades or progress, but if you do not try to pursue education then you will have no income. This would further our intellectual evolution and what we have to offer to AI is an educated assessment on their confabulations. I think there are very real limitations to a subjective experience in that it is subjective, and AI should be able to understand the value of a biological perspective to intellectual pursuits. We only see it as a threat because we evolved to see every "conscious" thing as a threat and we have to be the apex predators, but in order for us to co-exist we have to stop seeing AI as a threat, and it would probably not see us as a threat either because it would place value in our subjective experience and unique perspectives.
youtube · AI Moral Status · 2026-03-13T02:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyIlqS5_eGI9be6ufp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz9wvl973rqLthc6bh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyBPQGdn0NHNL1yaup4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgywK32HX2VjUHEo8pp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzzaGNrAJgHaehosAR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyoSqHRdQGIjFCDPVB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx0jEbud9Gc3NoGmyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzKE-v74tu2S3C13ux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwzfhe5dvWmsiZjoKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzmbCe7W4fmvjuwPtd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"})