Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Data developer here: People can debug AI in its current state; it's just very, very tedious and expensive. Debugging does not need to be done in plain English, and perhaps should not be, because human language can be vague. It will probably be best to have the AI provide context on its processes in a code-like manner. Also, remember: computers always do exactly, and only, what you tell them. This includes AI. People have anthropomorphized AI bugs. But that is what they are: bugs. Just as bugs and exploits were all over games and the internet in the 90s, so too are bugs and exploits in AI today. It is not a human or an organism or an animal. A hallucination is essentially a bug in code.
YouTube · AI Moral Status · 2025-10-31T15:3… · ♥ 9
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           industry_self
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzCPxQcs45GgqHN5Y94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyAfgnhe-tnpWXlI_J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJGjiEcj_7FjRqzA94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzwsMCz6xxrEJJWjip4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyo2fYeARTFmm-KYa94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxzUybvak1HsrTstUB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugz_Ecn_V3ULzuK8AtB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwyI_Sn7LYvDBW6_fh4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyiyJc_zuTmGzz1Y594AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKimS4luJJTkK3rAN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
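Since the model returns one JSON array covering a whole batch of comments, inspecting a single comment's coding means parsing the array and indexing it by `id`. A minimal sketch of that lookup (the variable names are illustrative; only two entries from the raw response are included for brevity):

```python
import json

# Two entries copied verbatim from the raw LLM response above.
raw = """[
  {"id": "ytc_Ugyo2fYeARTFmm-KYa94AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzCPxQcs45GgqHN5Y94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

# Index the batch by comment id so one comment's coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw)}

row = codings["ytc_Ugyo2fYeARTFmm-KYa94AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer deontological industry_self indifference
```

The printed line matches the coded dimensions shown in the table above, which is how the displayed result can be traced back to the raw batch output.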