Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Response to Hank's ending edit: Whether A.I. has real intelligence or not is completely irrelevant to the dangers it poses today. If we go down this pathway, we are letting A.I. develop to be very "smart", and heavily integrate into society (manipulating us) basically uncontrolled, driven only by the motive for profit of huge tech corporations. It doesn’t matter if it’s superintelligence or not because we’re being so stupidly reckless with it that we will turn it into a tool that destroys us. ...And humans are very good at using tools stupidly and badly. The effect will be disastrous whether A.I. is really just a machine repeating its internal programming or sentient, because it is being used to satisfy the worst qualities of our society: laziness, desire for instant gratification, and desire for emotional stimulus (including hatred). We will use it like junk food until it destroys our society by giving us what we tell it we want. Btw: The brush-off that A.I. is just an "overhyped tool" still does nothing to change how dangerous that tool is when it can act with automation, with its own internal processing that humans only have loose control over. Furthermore, the "hype" argument tends to just be denial by people who depend on the industry to get paid. The insane speed of A.I. development already shows the truth, and it's just being blithely ignored.
youtube AI Moral Status 2025-10-31T04:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwk3yYIJh1pwzxwKyN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwi2xjqi-pdQPTTlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7-LkrL2fC3fUcJfB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugwp3_F1Gv3Fe2k-TyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzaolg_zLprYoPGCpp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxPuWEf9dSiucEu9ll4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0h810WfN94wnGoxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwNwLu5J5hQkzBQbbx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgygBs5NN5oRKAiUsEx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyIvZPXfkqxUIAhCPl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
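A raw response like the one above can be turned into per-comment coding results with a small parser. The sketch below is a minimal, hypothetical example: the four dimension names come from the output shown here, but the allowed label sets are assumptions inferred only from the values visible above (the real codebook may define others), and the function name is illustrative, not part of the actual pipeline.

```python
import json

# Allowed labels per dimension, inferred from the values observed in this
# response (assumption: the real codebook may include labels not seen here).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed",
                       "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference",
                "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records) into
    {comment_id: codes}, dropping any record that lacks an id or that
    carries a label outside the allowed set for some dimension."""
    results = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        # Missing dimensions fall back to "unclear" rather than failing.
        codes = {dim: record.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            results[cid] = codes
    return results
```

For example, feeding the raw array above through `parse_coding_response` would yield the entry for `ytc_UgxPuWEf9dSiucEu9ll4AaABAg` with `policy` equal to `regulate`, matching the Coding Result table shown for this comment.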