Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like the intro is meant to be some huge revelation that the teenage girl is more likely to commit a crime than the murderer or whatever, but I think it makes a lot of sense. The violent criminal got convicted already, the teen girl had no repercussions (at least by the description given, I think in the actual case she was arrested). The violent criminal probably realised that his violent crimes achieved nothing, while the teen had a fun scooter ride for a bit so was actively rewarded for stealing it. Edit: I should state I'm not in favour of an algorithm determining someone's verdict. For the same reason all the other comments are against it - no accountability. But I am saying that the initial example is one that I would expect to actually be the case most of the time. Any AI trained on enough past scenarios will have a good chance of being right, as is the nature of AI, but it's still just probability and probability can do all kinds of wacky things.
youtube 2022-07-26T20:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
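
Each row above corresponds to one field of a record in the raw response below. A minimal sketch of that record shape in Python, assuming the field names used in the JSON (the class name is illustrative):

from dataclasses import dataclass

@dataclass
class CodedComment:
    id: str              # comment identifier, e.g. "ytc_..."
    responsibility: str  # e.g. "ai_itself", "company", "distributed"
    reasoning: str       # e.g. "consequentialist", "deontological"
    policy: str          # e.g. "regulate", "liability", "unclear"
    emotion: str         # e.g. "outrage", "indifference", "fear"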
Raw LLM Response
[ {"id":"ytc_UgzG42WDC-z6EpnwyYB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyN0MORpZaY1v49Lfl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugw-51QLPjQ6yxdapfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgxwxbiKbar9ktWP3iZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugwqf155AC7MjMLIQyt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyNT-lVYcDsR1Z66KR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzDSexj0m_MZQrtp894AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugz5xL4nSIRF2WVX5pB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw1i4_BYgtF5y6aGpN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyeKeAQU0xLkw3RKP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"} ]