Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole notion that "AI won't hurt us because we made/created them" is incredibly naive, it's exactly like pulling the pin on a fragmentation grenade, popping the spoon, and then chucking it at the feet of your friend expecting them to not be hurt/killed by the explosion, debris, and shrapnel when the grenade detonates 4 seconds later simply because "humans made the grenade so therefore it cannot hurt humans".
youtube AI Harm Incident 2025-07-26T09:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
 {"id":"ytc_Ugy0KYs9JO2K1__l1uh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz8D3a_mFXdDisucUB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwPUyF2v1sgghSl-lt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxCXmLiqz-lX_275od4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzgoWSkpqCzOKILloR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyfWyaRg-qapUCXwzV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgymLPNanDGjg4GwVFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyhrXNN_kTEXK7vZ4x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwCaKMo-6TEo3J_aAF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyrimuGFqJjCvq09k94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
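Because the model returns one JSON array covering a whole batch of comments, inspecting the coding for a single comment means parsing that array and filtering by `id`. The sketch below shows one way to do this; the `lookup` helper is hypothetical (not part of any tool shown here), and the sample `raw` string copies just two records from the response above for brevity.

```python
import json

# Hypothetical raw LLM response: two records copied from the batch response
# above, standing in for the full ten-element array.
raw = '''[
 {"id":"ytc_UgxCXmLiqz-lX_275od4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyrimuGFqJjCvq09k94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def lookup(raw_response: str, comment_id: str):
    """Parse the raw JSON array and return the coding dict for one comment id,
    or None if the model did not emit a record for that id."""
    records = json.loads(raw_response)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup(raw, "ytc_UgyrimuGFqJjCvq09k94AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer fear
```

Matching the coded dimensions in the result table back to the raw record this way also makes it easy to spot cases where the model dropped an id or returned malformed JSON, which is the point of inspecting the exact output.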