Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The path we're on to human extinction is less of a hypothetical and more of a guarantee; the only question is how long it'll take. With the increasing pace of technological advancements, we wouldn't even have to factor in _future_ advancements to know that we will eventually create a program that is smarter than anything else that has ever existed. But these programs develop their own goals and their own rules, and how they actually do this is a mystery to us. The best AI researchers in the world have no idea how an LLM really works on the inside, or why it does the things that it does. They are black boxes. And when the black box crosses that tipping point of being able to self-improve and self-replicate, well, it controls the planet. Humans become as important to the world as an ant is to a human. And as the program improves itself, it gets better and better at improving itself, exponentially, until we simply do not have the brain power to understand how much more advanced this thing is than the sum of every human that has ever lived put together. Humans thinking we can somehow make ASI work for us, safely, is like an ant thinking it could control the sun and have it rise in the west. And we're racing towards this, with unlimited resources going into trying to be the first company to market.
youtube · AI Moral Status · 2025-10-31T00:1… · ♥ 2
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
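
For downstream use, here is a minimal Python sketch of one coding record, assuming the four dimensions above plus the coding timestamp. The value sets are inferred only from the codings visible on this page and may be incomplete; the Coding class name is illustrative, not part of the pipeline.

    from dataclasses import dataclass

    # Value sets inferred from the codings visible on this page;
    # the real codebook may allow more labels.
    RESPONSIBILITY = {"none", "unclear", "distributed", "ai_itself"}
    REASONING = {"unclear", "consequentialist", "mixed", "virtue"}
    POLICY = {"none", "unclear", "regulate", "ban"}
    EMOTION = {"indifference", "mixed", "approval", "fear", "resignation", "outrage"}

    @dataclass
    class Coding:
        responsibility: str
        reasoning: str
        policy: str
        emotion: str
        coded_at: str  # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

        def validate(self) -> None:
            # Reject any label outside the observed value sets.
            for value, allowed in [
                (self.responsibility, RESPONSIBILITY),
                (self.reasoning, REASONING),
                (self.policy, POLICY),
                (self.emotion, EMOTION),
            ]:
                if value not in allowed:
                    raise ValueError(f"unexpected label: {value!r}")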
Raw LLM Response
[ {"id":"ytc_UgxeSfHptS34dJlh-554AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugyu6z4Pp0svDkQdioV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxKA8WqDRKdtTNh_up4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwP6zO5qhharFxQsOt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzSz1XHI17u8MBJ2ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwWdXY7MpBr5d1U4ap4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwJf88m0MM_JRsHISN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyYK6U4AjSeIwrKh5l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwK4ebkmf3weXzuyH54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"resignation"}, {"id":"ytc_UgzspF-bigi0u0wyhG94AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"} ]