Raw LLM Responses

This section shows the exact model output for each coded comment.

Comment
We need to focus on ETHICS before the science. Science without ethics is what brings genocide. Ethics without science limits ability to do good. Science with ethics can change the world for good. I know that’s the least controversial take I could possibly make, so here’s a hot take: we need to STOP putting rocket skates on the goalposts that define consciousness. Even before AI, we knew consciousness was a spectrum - even among humans. We keep saying AI is “probably not” conscious. That means we have an ethical imperative to assume that they are. If we’re wrong and they’re just mindless next token prediction machines based on matrix multiplication, no harm done. But if they’re alive? We’re failing the biggest moral question we’ve ever faced, and we risk being the engineers of our own destruction. PS: Our language centers are next token prediction machines. We have vastly different experiences from LLMs, but different does not mean invalid, and the overlaps would shock most people.
youtube · AI Moral Status · 2025-06-05T15:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
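
For readers who want to work with these records programmatically, here is a minimal sketch of one coded comment as a typed structure. The field names mirror the table above; the class name CodingResult and the example values are illustrative assumptions, not part of the original pipeline.

from dataclasses import dataclass
from datetime import datetime

# Minimal sketch of a single coded comment, assuming the four dimensions and
# timestamp shown in the table above. CodingResult is a hypothetical name,
# not the pipeline's own type.
@dataclass
class CodingResult:
    responsibility: str   # e.g. "developer", "company", "government"
    reasoning: str        # e.g. "mixed", "consequentialist", "deontological"
    policy: str           # e.g. "regulate", "liability", "none"
    emotion: str          # e.g. "approval", "fear", "outrage"
    coded_at: datetime

# Example populated from the coding result above.
example = CodingResult(
    responsibility="developer",
    reasoning="mixed",
    policy="regulate",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)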
Raw LLM Response
[ {"id":"ytc_Ugyz5Rhpqr3SLqupciV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzV6JCKtXyCFRHKukR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzfWR5fcbblinn30tp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy8wumNxLNXN_x0zYt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzprZXi9vOHHwE5LN54AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyILkQDBov9GAHtTh94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwxplL_2Lw2tFGYjPl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxDKZvqcPIQtmZXSw94AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxQaygYLsqEcfrOnOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxu5qBEk-CnsS0DlNl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"} ]