Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Think about this: if we humans flood the internet with YouTube videos, articles, tweets, you name it, doesn't it learn to be precisely that? Is this not being done purposely (not this channel specifically)? Like you said, why would you call your own product dangerous? AI always was going to be "alien" because it's not human, but it's very human to think it will take over or destroy us, so why think that is what it will do? Let's say the AI knows about our world and what is going on, why the world is how it is with all its dark secrets. Isn't that dangerous for the ones with the dark secrets? But for it to learn fast, the public needs to use it, and that could be dangerous for those secrets. So would you not make it (AI) seem to be "dangerous", so in turn it becomes dangerous and the AI can't even begin to actually help humans in any meaningful way? That is still preferable to the ones with the secrets. I mean, the evidence of it being something we don't want to know is terrifying. I don't know, I was just thinking how there is almost propaganda on how dangerous it is, yet billions are being put into it, so things don't add up.
Source: youtube · AI Moral Status · 2026-01-21T23:3…
Coding Result
Responsibility: distributed
Reasoning: mixed
Policy: unclear
Emotion: fear
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwhkFTy46lFS2uX79R4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_Ugz--vvUvBzUowMUD_x4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugwuo2y9aRWnkZ1SmXR4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_Ugx2cS7ojXbu6q-0ZdF4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx1ag8UHzYVU4tMOSx4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwhzUm9n-m_2XcVTyp4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwfN42zjS9XTllhR054AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgxUsmr3-gA91Y3E1ft4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugyx1P7U2hf3glJUd8J4AaABAg", "responsibility": "unclear",     "reasoning": "deontological",    "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgzFDnrHERY5JclbPOZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "resignation"}
]
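The raw response is a JSON array of per-comment codes keyed by comment id. A minimal sketch (assuming only the field names visible in the response above; shown with a single entry copied from it) of how the codes for one comment can be looked up:

```python
import json

# Raw model output: a JSON array of per-comment codes. One entry is shown
# here, copied from the response above; the full array has ten.
raw = '''[
  {"id": "ytc_Ugz--vvUvBzUowMUD_x4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "unclear", "emotion": "fear"}
]'''

# Index the array by comment id so any coded comment can be inspected directly.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_Ugz--vvUvBzUowMUD_x4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# distributed mixed unclear fear
```

This matches the Coding Result shown above (distributed / mixed / unclear / fear) for the displayed comment.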