Raw LLM Responses

Below are the exact model input and output for one coded comment.

Comment
just to add a small technicality here - in abstract meaning, present machine learning algorithms are solving so called "optimization problem" which basically means they're trying to maximize more preferred outcomes or minimize less preferred ("pleasant") ones, and their survival depends on how effective they are at those tasks. So, it's less about how "powerful" tech is, but more about interactiveness (even human behavior imitation), which in it's turn depends on how much data about humanity is available to machines running those algorithms (how fast are they able to process is still important, but kinda plays secondary role). Also, to add about freedom of will (from Wisecrack videos) "reinforcement learning" algorithms (those that can play poker and control robots) rely very much on _randomness_ (as it gives them a way to effectively avoid ""pain", really - you can look up q-learning on wikipedia), so they kinda have freedom of will (in abstract sense) anyways. If you learn how let's say Boston Dynamics "teach" their robots, it looks very much Westworld-esque.
Source: YouTube, "AI Moral Status", 2018-04-15T12:2…
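For context on the comment's claim about randomness: Q-learning agents commonly use an epsilon-greedy policy, occasionally acting at random instead of taking the highest-valued action, which is the exploration mechanism the commenter gestures at. A minimal illustrative sketch, with toy action names and Q-values that are hypothetical rather than from any system the comment mentions:

```python
import random

def epsilon_greedy(q_values: dict, epsilon: float = 0.1) -> str:
    """Choose an action from a Q-table for one state.

    With probability epsilon, explore: pick a uniformly random action
    (the "randomness" the comment refers to). Otherwise exploit: pick
    the action with the highest estimated value.
    """
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

# Toy Q-values for a single state (hypothetical numbers).
q = {"left": 0.2, "right": 0.8, "stay": 0.5}
print(epsilon_greedy(q))  # usually "right", occasionally a random action
```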
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzHeP5PuF__paoccSV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgzZllfuTlv47zYMWPN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwohNn0l0WX4TmzveF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzC_pylmq5gOFFIunl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxmjLPpZesA_tUGjpl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyorlZZ8G8SoYY8hCB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyQbO8ZCx5Bk7mVcGh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxDqx5ZEsC8233V4bR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxQHHfA9qCXPW0wg-94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzEZ1V5gxmaROZCJoF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]