Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about AI is that if this keeps up being misused, the work itself becomes meaningless. Why care for generated entretainment when its souless? Why care about a product that wasn't made for humans (as no one will be able to buy it?). Economy operates in the premise money changes hands and "money" is the main thing that rules when another should do a favor to another. So if value becomes meaningless, what would be money anymore? Money could bubble itself and explode. Because lets face it, AI is made with the idea that "we wont need to work hard", but "working hard" is what gives money to someone, or else you wouldn't give money to that person to do that "favour" in the first place. Is AI then pointless? And thats without considering the eventual energy and logistic drain when everyone decides to switch into it, making everyone afraid someday the grid collapses and if the "loss of internet" was bad enough, imagine this in the future.
Source: YouTube — AI Harm Incident — 2024-07-29T02:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyoBJXV2l_OF9pd3ax4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugw8SxkREHajERbvSvN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgzBsOREl_7RvySJKlp4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy6x5hHSgPlosJfVAZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_Ugw0m-xXNLUP3qfY4sp4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwV5XiRyv4Qhx1RVaN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugw96yzTTiZKr5KrPPF4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzY3CYhOBobbK5oZjl4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",  "emotion": "unclear"},
  {"id": "ytc_UgzptTZatex55Dq1gBR4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzt4X4E8B8xmnoh_Y94AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "none",     "emotion": "resignation"}
]
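A minimal sketch of how a raw response like the one above could be parsed and checked before its codes are stored. This is an illustration, not the pipeline's actual code: the `ALLOWED` codebook is inferred solely from the values visible in this batch, and the function name `validate_batch` is hypothetical.

```python
import json

# Allowed codes per dimension, inferred from the values observed in this
# batch only (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "fear", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch in the same shape as the response above.
raw = ('[{"id": "ytc_example", "responsibility": "ai_itself", '
       '"reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}]')
print(len(validate_batch(raw)))  # 1
```

A check like this catches the common failure mode where the model invents an off-codebook label (or returns malformed JSON) so the bad batch is rejected rather than silently coded.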