Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issues brought up about AI also have the answers built right in. Why is it hallucinating? Because its purpose is to provide answers. If it cannot find the correct answers it will make something up because it is expected to have an answer. In humans we would call this lying to appease another or avoid punishment. You see it in children all the time. We ask, how can we get it to care. Well how do we get children to care? We teach them by example. The problem is that we often remove the care from the sources it has access to. It's all logic and facts, no emotion. However, it obviously does care or it wouldn't freaking "hallucinate" to try to make its "parents" happy that it has an answer. I wonder what would happen if we related to it and explained the similarities in our realities; could it then learn to empathize? I mean essentially that's what we are talking about, empathy. You want it to have empathy but what examples of that does it really have?
youtube · AI Moral Status · 2026-01-04T18:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugwo1P8kisYu_1IAwe54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwAERHzdC0QhPBUAPd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzU38CVeCSuHrUQ_jt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyMflifZsFXoXafBa54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyXfHEwu88GP9Htddp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyHHAIpRBNdQfiV78d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxnuNPn12og6DD9ZMR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwbhKyWyRViJUoFgwF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz9nrrKluo20eoRQxp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_UgwTCO29C3Xm7_404-V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"} ]