Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
we have to stop using our human example of "thinking" to define what thinking is. just because a super advanced general ai doesn't think like a human, doesn't make it less dangerous. hey hank. your conclusion scares me. that u dont know if its possible to make an ai smarter than humans (that part is being repeatedly proven already). its exactly what is making all this soon approaching horror possible. the ego that we're special. dont confuse different kinds of ai. if you carefully describe a complex system to even a chatbot, and then ask a question about it, its pretty clear that chatbot understabds the system you described so is ALSO using a general intelligence ai for processing information in a way indistinguishable from thinking. we already know ai can think on some level, and its getting better. the real question is: what will be the limit of how well it can think? its already faster than a human brain, it has virtually no size limit, it can design processors faster and smaller and much more efficient than we have already. what limit will it hit that would stop it from getting smarter than we are? its already smarter than humans in a bunch of individual ways. people thinking ai cant get smarter than a human in general is just denial. there is literally no physical limit to its improvement anywhere in sight. its brain is a million times bigger than a humans brain, it can use a billion times more power than a humans brain, it can do miltitasking of a billion things while humans have trouble with 2, or maybe 5 things at once. the limits of what a thinking ai can become is terrifying, and astronomical. this IS GOING TO HAPPEN unless we change course drastically as soon as possible. were quickly running out of time.
youtube AI Moral Status 2025-10-31T11:1… ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxh-xFMDzRtijV_RWN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyD5wue98PbE5w6vQh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwwr9Hz-Xm0ju_Z87J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy_RYe1DkyFLHZpett4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyZGfs_ksrhC3MEb9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwPd8HSWtVjiRbdrup4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0Ea5lnMKyM_Ze4BV4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwq9ZICKsOqZQpO8oR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
  {"id":"ytc_UgyqkWMwN_-RNd09MSl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwK4s6z8uBakQbWtSh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
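A raw response like the one above has to be parsed and label-checked before the codings can be trusted. Below is a minimal Python sketch of that step; the `ALLOWED` codebook is an assumption reconstructed from the labels visible in this record (the project's full codebook may define more values), and `parse_codings` is a hypothetical helper name.

```python
import json

# Assumed codebook: label sets are inferred from the raw response above,
# not taken from the project's actual coding manual.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "company"},
    "reasoning": {"mixed", "consequentialist", "virtue"},
    "policy": {"unclear", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into
    {comment_id: {dimension: label}} and reject any unknown label."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} label {rec.get(dim)!r}")
        out[cid] = {dim: rec[dim] for dim in ALLOWED}
    return out

# Example with the first record from the raw response above.
raw = ('[{"id":"ytc_Ugxh-xFMDzRtijV_RWN4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugxh-xFMDzRtijV_RWN4AaABAg"]["emotion"])  # fear
```

Failing loudly on an unknown label (rather than silently storing it) keeps malformed model output out of the coded dataset, which is what a page like this is meant to let you audit.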