Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A true AI will change as it learns, he's right about the name. If it can learn from its mistakes? That's consciousness, and doesn't require human involvement. It has to adapt as a result of experience. That's a slippery slope because it can get ugly, just like with humans. The current state is pretty good, I'd vote we pause it here for now. If humans aren't controlling the pace of true AI learning, we'll get what we asked for.
youtube AI Moral Status 2025-07-30T04:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyEXUQ0hWF__RnewyF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdIPkm4hwAD_31etl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxSWz98i0kaw2sXvYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyabGXMCCb6u62p6op4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzJxqRbczx4O1-ulpp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"disapproval"},
  {"id":"ytc_UgwfCBTE7LLvtIrDBC94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy0tsE5TA1DitEke2N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgySmTO2pIFLFeqpqY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxtYqq07-liVQNvSWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxJvDD9gEi2uYG4n0l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]