Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ai needs to stop being trained on the internet and more like an actual child, so they can develop a logical framework around moral decision making.
youtube · AI Harm Incident · 2025-08-12T16:1… · ♥ 2
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | developer                  |
| Reasoning      | virtue                     |
| Policy         | unclear                    |
| Emotion        | approval                   |
| Coded at       | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
  {"id": "ytc_Ugx8WoSZB1omBDTlx6R4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz9WenEPfaBHrfY8114AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugww_scMJmoiIr05yiR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwwIJg8Q8Ofn6mQgNR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy1Cm3L5dBMk8H1iD94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyinpnyaUy-fPbKf2B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzididNs9T7oDN87Hp4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxrjB8M2GRsq3OYf154AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx5k5eh2JxYYLCU2xV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwV8tGL4bN6glrzmBB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"}
]