Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Just look at what happened in this interview. First Tucker asks Elon to name a specific threat. Elon talks and talks, but does not name a specific threat. All he can come up with is that there are unknowns. (Note: Elon is considered by many to be smart, so we could expect him to answer a clear question with a clear answer.) Not having gotten an answer, Tucker has to ask Elon again to name a specific threat. And then Elon says that AI could influence people. That's the threat. Elon never objected against our former president trying to influence people with daily lies. To the contrary, Elon offered the guy access to his $44,000,000,000 megaphone. As a matter of fact, he wants anyone to have access to his megaphone, no matter what they say. Free speech, he calls it. But when an AI generates something that could influence people, we should suddenly be afraid? Preposterous. Note also he completely overlooks the danger in his back yard. If someone can hack into the software of Tesla (which can be updated over the air) they can turn millions of Teslas into killer machines. Of course Elon will give his word that this won't be possible. His word! I tell you. Looking for regulations? Start with self-driving cars.
youtube AI Governance 2023-04-30T17:0… ♥ 2
Coding Result
Dimension      | Value
-------------- | --------------------------
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | indifference
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugxh6e44i9fRvEpSGzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgznpNOdl7Yt9Rwdci54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxpnN-ZkYLdooz_Q114AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgxlK_Rx9WFncyQ_y8V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyQF4ffOcSOvb5hxLR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxJfmuQq8kt4hZDKVF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzwArRuJoIjR_vauyx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz0Ascv4aozcjKZQ794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwxIqoYDo4pJswnArt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzKKVm6xt38ir-9RyV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"resignation"} ]