Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Honestly, all the probabilities that AI will turn bad, whether 10% or 25%, are not factual. They sound quantitative, but they are all based on feelings. Some of those feelings are grounded in what AI researchers say; others are just tech-bro vibes. The latter can be discarded more readily, but both are tainted by many biases. However, under the current political framework, i.e. a great leap forward for oligarchs and fascists, research will continue until it becomes dangerous for various reasons, and consciousness is the least knowable of them. Others: let it control a weapon system and then lose control as it malfunctions and starts to misinterpret reality; use it as a research and consultant bot to support political, technical, or scientific decisions that lead to a catastrophe; let it create tension in populations to such an extent that there is a civil war; use it in trading until the stock markets fail (we have done this with mathematical prediction models already) or goods cannot be delivered. And it is absolutely certain that at least one of those tech-bros would continue regardless of the consequences, as they see themselves as invincible.
youtube AI Moral Status 2026-01-08T14:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxXvP06xB_rvHXU8nl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxB2lUMC10V2WCKMdh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzG6m5nNk-ZQp4yPdd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzdLgUpm0zqRww_36x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXKB0Q9EOyb0TYAQ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz9jWegCqJ5MLH9GXF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxjHKweqa7s6ZC0JHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy-KZ4-7G2BKQOny894AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy88yz9_C5B-z5vALJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx-G5YAEcxVcUZLiZt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
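A raw response like the one above can be parsed and matched back to individual comments by id. The actual pipeline is not shown here, so the following is only a minimal sketch under stated assumptions: `parse_batch` and `aggregate` are hypothetical helpers, and the rule that disagreeing repeat runs collapse a dimension to "mixed" is an assumption, not the documented aggregation logic.

```python
import json
from collections import Counter

def parse_batch(raw: str) -> dict:
    """Hypothetical helper: parse one raw LLM batch response (a JSON
    array of per-comment code objects) into a dict keyed by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def aggregate(runs: list, comment_id: str, dimension: str) -> str:
    """Assumed aggregation rule: if repeated coding runs disagree on a
    dimension, collapse the value to 'mixed'; if the comment is missing
    from every run, fall back to 'unclear'."""
    values = [run[comment_id][dimension] for run in runs if comment_id in run]
    if not values:
        return "unclear"
    return values[0] if len(Counter(values)) == 1 else "mixed"

# Toy data with made-up ids, not the real comment ids above.
run1 = parse_batch('[{"id":"ytc_a","responsibility":"ai_itself"},'
                   ' {"id":"ytc_b","responsibility":"government"}]')
run2 = parse_batch('[{"id":"ytc_a","responsibility":"developer"}]')

print(aggregate([run1, run2], "ytc_a", "responsibility"))  # disagreement -> mixed
print(aggregate([run1, run2], "ytc_b", "responsibility"))  # single value kept
```

This kind of rule would explain how a single comment ends up displayed with "mixed" or "unclear" dimension values even though each individual raw response assigns one concrete code per dimension.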