Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Environmental confluences create intelligence, not data shoved into our head like we're in The Matrix. Large language models serve their purpose in limited ways. They confound the word lying, by saying they are hallucinations. The magnitude of which large language models lie makes them useless unless they are corralled into the specific nature we need them to provide. All of these dumb Boomers who think that AI is going to take over lack any kind of social awareness Google and all these other llms will be nothing more than a snake eating its tail. We are already seeing the cost of having a clean up so-called artificial intelligence inside of servers by actual human beings. Artificial intelligence will never exist. Bet
YouTube · AI Governance · 2025-08-26T19:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          indifference
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyN42bIKzzseKXe2gd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzUODKnPNToraKMLTJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxl0nhdfZqyM4QNPxZ4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgyYkd7f585C9PSSYB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
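A minimal sketch of how the coding result for one comment can be recovered from this raw batch response: parse the JSON array and select the record whose id matches the comment. The id ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg is taken from the raw response above; its dimension values (developer, consequentialist, liability, indifference) match the Coding Result shown, so it is assumed here to be the id of the displayed comment.

```python
import json

# Raw LLM response, copied verbatim from the panel above.
raw = '''[ {"id":"ytc_UgyN42bIKzzseKXe2gd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgzUODKnPNToraKMLTJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugxl0nhdfZqyM4QNPxZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgyYkd7f585C9PSSYB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]'''

records = json.loads(raw)

# Pull the record for the comment of interest by its id
# (assumed to be the id of the comment displayed above).
coded = next(r for r in records if r["id"] == "ytc_Ugy4h55Nd9LgKzd4fjV4AaABAg")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer consequentialist liability indifference
```

In practice a validation step would also check that every id in the batch response appears exactly once and that each dimension value comes from the codebook's allowed set, since LLM batch output can drop or duplicate records.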