Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
could it be that AI is currently bad enough be open source, but since its getting better and smarter its getting more and more dangerous in public uncontrolled hands? for example if quantum computers would just be let loose for anyone to buy, it could lead to huge cyberattacks. But if they were terrible enough to not be dangerous at first and their developement just accelerated uncontrollably, it would have a bad outcome for the internet and passwords and stuff. so to summerize maby AI is harmless enough now, that we trust it in public hands but the acceleration of its developement is catching up to where it could be very dangerous.
youtube AI Moral Status 2025-11-05T10:5…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyNQWlffPiwXII38Ut4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgwIWGMqA46eD0_khKV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwbEDqgUurgYiRH-xt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwhHmXr4G28Xx7zA0B4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugy5XBIuUdSqwlGaa-14AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_Ugx-W_mGG5862d82-OF4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgwjTR0ClrcGZ_Oebwp4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzMNuramyz21pKhxAJ4AaABAg", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgyhVfwEzTPiw9VXD1B4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyVxezbFIcXOeMvwBl4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"}
]
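The raw response above is a JSON array of per-comment codings, keyed by comment id. A minimal sketch of how the coding for one comment could be extracted from such a response (the field names match the JSON shown here; the `coding_for` helper itself is hypothetical, not part of the tool):

```python
import json

# Assumed schema: a JSON array of objects, each with an "id" plus one
# field per coding dimension. This sample reuses one entry from the
# batch above.
raw = """[
  {"id": "ytc_UgyhVfwEzTPiw9VXD1B4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the dimension/value coding for a single comment id."""
    for entry in json.loads(raw_json):
        if entry["id"] == comment_id:
            # Drop the id so only dimension -> value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    raise KeyError(comment_id)

print(coding_for(raw, "ytc_UgyhVfwEzTPiw9VXD1B4AaABAg"))
```

Run against the sample entry, this prints the same dimension/value pairs shown in the Coding Result table above.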