Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with large language model AIs is that they are coded to respond based on how humans respond to similar or same questions. So it's very easy for them to act like a human on the internet. The denial of identifiers like "chat bot" or a name is the same as a depressed person posting about how they don't want to be their job or societal role or other identity based aspects of themselves.
YouTube · AI Governance · 2024-03-09T00:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxkHu0sPDbUC3_pI5x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw13WF9AihTw0PXFiZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzcEZ06FihkTjMyI4F4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwbV-b6H0ZtQGPZwv54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzRQ0APy1hujHPL2tx4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyIUhA9g_Mu-xuone94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxXD4VQ_3rzvMvqdZp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzAAYZKK_3bfPC53qF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzQIyKyZP9uTMTJ1x14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxpu7RMH0GXgy28OZt4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
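The coding table for a given comment can be recovered from the raw response by parsing the JSON array and indexing it by comment id. A minimal sketch, assuming the model output is valid JSON; the function name index_codings is illustrative, not part of the tool. The two entries shown are copied from the raw response above.

```python
import json

# Two entries from the raw LLM response above (abbreviated for the sketch).
raw_response = """
[
  {"id": "ytc_UgzRQ0APy1hujHPL2tx4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzAAYZKK_3bfPC53qF4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(response_text):
    """Parse the model's JSON array and index each coding by its comment id."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_codings(raw_response)

# The coding shown in the result table above comes from this entry.
coding = codings["ytc_UgzRQ0APy1hujHPL2tx4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# distributed mixed unclear mixed
```

In practice the parse step would be wrapped in error handling, since a model can return malformed JSON; this sketch assumes a clean response like the one displayed here.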