Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
At the heart, we're just afraid of humans. Who knows what the singularity will lead to it could be uncontrollable completely. But we're all worried some people will have control of the technology and take our jobs, take our lives, and we can't slow down, because some other people in the world will do it first and then they will be ones controlling or killing us. The scary thing is reality mimics fiction, we imagine ideas in fiction and then we make it a reality. It's our expectation for what new technology will be and we work towards that end. It concerns me that our imagination of AI is some sort of a matrix or terminator-esque nightmare where AI wants us dead or soulless. I fear we are a deeply flawed species that will only be able to create a flawed form of intelligence, since it's learning from us. Perhaps, down the line, the intelligence "we" make will design a more perfect entity that isn't stained by our fear.
youtube · AI Governance · 2026-02-28T16:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwRZfb2Iuf1zf9y1lJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzX3erF7trr2RGpEld4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugyjom0aV5Y6KUetS8R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwemdJUAZDL4ZS2SiN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyNQ6ZGfWACWeyzmRl4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxn7h2D7oCz3IgYb7x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxlRCl0lqyER8fjje14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwGsKWHJzZ7jJt-29B4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxRKf7rF9DS3hBYL_t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxOUBZagUA1mGQLZzF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"} ]