Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is definitely not the scenario that will occur. Though, the final result might have some similarities. This is a possible scenario and a scary one at that. The biggest problem is that, even though, this scenario specifically probably won't be the end, but there are plenty of other scenarios that end just or almost as bad. The number of bad scenarios outnumber the good. My take is simple, if you are religious, if AI kills us, enjoy heaven. If you aren't, hope you are wrong, or nothing after death (unfortunate from the current perspective as with death there would be no perspective, so it'd be neutrality). If AI turns out good, yay, heaven on Earth. Ultimately, all of this fearmongering is a waste of time. AI is an arms race. You will never get countries to stop or slow down. If the USA takes a step back, they lose to china who doesn't. All it takes is for one country to have a misaligned AI for us to all die. All it takes is for our jobs to leave but not UBI for life to be hell as well. While AI is coming no matter what, whether or not it ends in heaven or hell, we don't know yet. Just hope, I guess. That's all you can do. Sidenote: If AI wasn't misaligned and controlled the government, all our problems go away. Just my opinion. No corruption. Smarter with policies. I'm sure a super intelligent AI can easily figure out a feasible method for UBI.
Source: YouTube · Video: Viral AI Reaction · Posted: 2025-11-23T05:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxhvcH-6m9MmpVRuK14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyjMtEc3MMEbidjbsh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzLCUum5VCRglecJiV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxNWn0BldTAal5-kHB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxGergJPAiFUxcvION4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyY9J7MiFqrsKavJI54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugww1R1QNj_59ElCQQd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugx3afpUWS4FEroaph14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzkoryG2VsZ3IqbLT54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz9_kNs4Dz8sF7MD-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]