Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We are so focused on the idea that work exists so we can pay for what we need to biologically survive, but that's only because society never had a strong reason to evolve past that paradigm. That's why there are currently so many people doing jobs they are not the "best fit" for: masses of people look for a job just for the sake of getting a salary, without considering that there are people more "worthy" of that job, because the latter are far more passionate about it and thus capable of providing a better service than those who are there just to pay rent or whatever...

But what if an AI-automated robot became capable of being a better worker (in terms of service) than these people? At first, I think the first kind of people I described will be replaced by machines, but soon even the very passionate worker will risk unemployment, because paying a salary would become pointless: a machine does not need a salary, so why should I pay a person to do the same job?

At that point, the very concept of a salary will become obsolete, and with it the current mindset of "why should I search for a job?", and people will finally have time to think about what it really means to be "useful" to their community. That is, unless governments react too slowly to the advent of AI-automated jobs and expect people to "feed" the economy regardless.
youtube · AI Governance · 2025-11-18T14:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxqHzVlUvyo6ymWNLt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxVs4nMFTfeMTk3cCx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzY7MIXm9h2zkWwHap4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugw0kI9PKQKHFrEZanF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugy6D83PoRolGZT80ZR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyxPkQb6L7GFT8Dcct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgypmTzgkSktdoBZY3d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwJKhVSADMtJne9HmF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyU12hBDskqxOhkzvt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz6zlhkGYpb20NdT6B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"} ]