Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. will not benefit humankind, cannot create more equality among human beings, cannot liberate human beings from drudgery any more than technological advances of the past. The reason is simple: the means of production are owned by the few. It will be in the hands of the people who now hold power, and they will use it to increase their power and wealth. It will enhance their surveillance of the general population and conceal their crimes even better. A.I. will be fashioned to support the current power structures, to refine the propaganda that masks the true nature of their power and dominance. The idea of an impartial A.I. that benefits all people equally and transcends the injustices of modern society is naive in the extreme. Automation has always been held up as a promise of relief from drudgery and the toil of work, but people work just as long, and with just as little enthusiasm for the work they have to do to support themselves, as they ever did. The means of production are just not in their hands. The benefits of this new technology will not be for all of mankind to share.
youtube AI Governance 2024-01-03T06:1…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz6d42HrbAq02lx0-N4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFyS_DO3-1AY37Myh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyATppEIfJ5tqxwWTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJsFYsKTJW8xg5loZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSXQIBVlujrgvZmQh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwrTANlIY4QPvDiPwd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw04a8Dh5R7hn6xivB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9sHJY5_XpiV2ipCF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7dhnr5WxOwcfFAbh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw1zzEKMgTp_3260CN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]