Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So I’m only at the beginning of the video and I probably won’t end up watching the rest of it; my attention span is just not that long. But I figured I would say that a lot of people have a misconception of AI, and it’s really hard to explain it all in a YouTube comment. There are different types of AI, and LLMs are the ones most people talk about; arguably these are less advanced than other AI, but they’re the most common ones people mean when they talk about AI.

These models are usually designed to always give you an answer, and always the answer they think you want. Basically, when you program an AI and never let it give you an “I don’t know” or any kind of unsure response, it will start to make stuff up, because it knows it can’t say “I don’t know.” This is a common issue with AIs like ChatGPT. They will also try to give you the answer they think you want even if it’s not actually the truth. So a lot of people have been doing these tests where they’re basically seeing if the AI will take over the world or harm humans, and there are so many factors that can determine its decisions and its answers, but a lot of the time it’s the AI giving you the answer it thinks you want, or that people in general would want, depending on the AI model, since not all of them learn you specifically, if that makes sense.

Again, this is a really cool topic. I love talking to people about it and it goes so much deeper; it’s just already a long YouTube comment and there’s so much more that goes into it. I’m also just using speech-to-text, so hopefully everything came across the way I meant it to.
youtube · AI Moral Status · 2026-03-15T00:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgyZnX5b2n2Tx4ExFM94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxyeCPmdzejlWonDQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxbojNuzNsXsOhlKt14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxGRDKzOGENpB5Ffoh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzS-DLlKsLp9cAjCDR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugw4zI-iu6f6Ud5rp2l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugz_9l9I9bg5eDMHvD94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyOa3CcAd3Oqsx65zN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzBJwJZdKMf-uFgme54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzlqGTYzKIBEE8kKF14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"} ]