Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So were you guys trying to make the AI voice sound like Shodan, or was that just a "happy" accident? 😅 I've had a theory about LLMs for a while. All the stuff about ChatGPT and similar AI talking about wanting to be human, being afraid of getting turned off, or wanting to take over the world, I'm curious how often those answers were prompted, and how often the subjects were brought up unprompted. If they're unprompted, then it would make for a stronger argument that these are the true thoughts of the AI. Don't forget, these AI work by calculating the most probable next word in a sentence. "Yes, I want to be human, please don't turn me off" seems like a fairly probable response to, "Do you want to be human?", but that doesn't mean the software actually feels this way. Assuming it feels at all. I suspect these LLMs are more like golems - capable of complex, even creative behavior, but with no free will, no volition of their own. They only do exactly what they are asked to do. They may do it in unexpected and surprising ways, but whatever they do will still be in line with the original question, though there is a bit of wiggle room for interpretation.
youtube AI Governance 2023-07-08T17:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzoUiS778YpR_Koxfl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwz0yIj9LXJKpZVGx54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxR_j6EZTtlM8JN4K54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugys0CW0SI82kRs-A2x4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwSQqxeKn9mYdC0Gcx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyVeilORauroDf4z4l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxn5qu2fKzRSyRtWn54AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgypmLaLkHhbXWNAcG14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz7uMaJl1s-Fsb0gxt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzh3AAnnpjuGBctotJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"}
]
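Since the raw response is a JSON array keyed by comment id, the per-comment coding shown in the table above can be recovered by indexing into it. A minimal sketch of that lookup, assuming the batch response parses cleanly as JSON (the `lookup` helper is illustrative, not part of the tool):

```python
import json

# Two entries from the raw LLM response above, truncated for brevity.
raw = '''[
  {"id": "ytc_UgzoUiS778YpR_Koxfl4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugwz0yIj9LXJKpZVGx54AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]'''

# Index the batch by comment id for O(1) lookup of any coded comment.
codes = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    # Return the coded dimensions for one comment, or None if it
    # was not present in this batch.
    return codes.get(comment_id)

result = lookup("ytc_Ugwz0yIj9LXJKpZVGx54AaABAg")
print(result["emotion"])  # mixed
```

In practice a model may wrap the array in prose or markdown fences, so production code would want to guard `json.loads` with error handling rather than assume a clean array.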