Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don’t know?! IT literally just said, “Manipulate humans without their knowledge” and then stated, that it’s not happening yet but “It’s important to know the potential risks and dangers of it (A.I.)”. I feel, maybe I’m paranoid but I feel like that was just a manipulation tactic. Tell me if I’m wrong.
youtube AI Harm Incident 2024-01-15T19:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxPPU8CQvFlJvI8Hrd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzMKEkG8id4JFOpTCN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyWa6PXUaQkdKMi6994AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxagDXiMD48dCW-xsl4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugxqs0kieIYzENL1b0B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz9u_OI9BtWpNh9efx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyPlbl6UW2ZxucI8-h4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyvMYiwlAcRO0QeN-Z4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwYWLPdSqqo4c1Gl4B4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxf1ufPq9ncGor8wUV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
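The raw response is a JSON array of per-comment codings, so looking up the coding for one comment ID is a simple parse-and-filter. A minimal sketch (the field names match the response above; the function name and variable names are illustrative, not part of the tool):

```python
import json

# Abbreviated copy of a raw LLM batch-coding response: one object per comment,
# keyed by the comment's ID. Only two entries are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgzMKEkG8id4JFOpTCN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyvMYiwlAcRO0QeN-Z4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"}
]
"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if it is absent."""
    return next((row for row in json.loads(raw) if row["id"] == comment_id), None)

result = coding_for(raw_response, "ytc_UgzMKEkG8id4JFOpTCN4AaABAg")
print(result["emotion"])  # -> fear
```

Returning `None` for a missing ID (rather than raising) makes it easy to spot comments the model skipped when reconciling a batch against the input list.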