Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've been saying this for years: training LLMs on unvetted bulk human conversations, fictional writing, and magical thinking will not result in anything other than a roleplaying facsimile of human intelligence. We need to build tools on ontological fact datasets, not bs, and not synthetic data built on bs.
Source: youtube · AI Harm Incident · 2025-11-07T21:0… · ♥ 82
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          regulate
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwccL-tEf1teXcEePZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzw7TtO_yb3Naij-o54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyanQ_xog7LXQPjmAx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyKb3L8zLZKrL6noe94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzzl_KxEudZxMxGvKx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk_0cnXF8VxWHYhJN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzsHbHQEflgaf_3n214AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzkMVc5BwMMVMOI7YV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugy9cGqN9etDtKbQw2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgydjVvWvFWStr9z0BV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
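A raw response like the one above can be parsed and validated before its records are matched back to comments. The sketch below is a minimal example, assuming the label sets visible in this response are the coder's full vocabulary for each dimension (the function and constant names are illustrative, not part of the tool):

```python
import json

# Allowed labels per coding dimension. Assumption: these are inferred
# from the labels seen in this one response, not an official schema.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment id.

    Raises ValueError when a record is missing a dimension or carries an
    unknown label, so malformed model output fails loudly instead of
    silently entering the dataset.
    """
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records
```

With this, the coding shown in the table for a given comment can be looked up as `parse_coding_response(raw)["ytc_UgydjVvWvFWStr9z0BV4AaABAg"]`.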