Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Common misconception: "We don't know how AI works". We do. What we don't know is their exact "thought processes". Analogy: You might know how a board game works, you might know all the rules, but you don't know how a certain game will evolve and end. But there is a place we can look: Us. These AIs we are talking about here learned our behavior, and that's what they copy.
Source: YouTube · AI Governance · 2025-08-27T06:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgxPOdDfPKgyfEKL-oF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgxCkxXD_DWRNkrkpuN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugy1t3lXh0kjmnXpf1t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgxCKo7uPz-Ic7NqIFB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxCxEDMk2OUPyp2OE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]
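To inspect a coded comment programmatically, the raw response can be parsed as a JSON array and indexed by comment id. The sketch below is a minimal example, assuming the raw response is valid JSON exactly as logged above; the variable names (`raw`, `by_id`) are illustrative, not part of the tool.

```python
import json

# Raw LLM response copied verbatim from the log above:
# a JSON array with one object per coded comment.
raw = '''[{"id":"ytc_UgxPOdDfPKgyfEKL-oF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgxCkxXD_DWRNkrkpuN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugy1t3lXh0kjmnXpf1t4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},{"id":"ytc_UgxCKo7uPz-Ic7NqIFB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},{"id":"ytc_UgxCxEDMk2OUPyp2OE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'''

records = json.loads(raw)

# Sanity-check that every record carries the four coding
# dimensions plus an id before trusting it downstream.
expected_keys = {"id", "responsibility", "reasoning", "policy", "emotion"}
for r in records:
    assert set(r) == expected_keys, f"unexpected keys in {r['id']}"

# Index by comment id so a single comment's codes can be looked up.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_UgxCKo7uPz-Ic7NqIFB4AaABAg"]["emotion"])  # approval
```

Note that the batch covers five comments; the table above shows only the codes for the one comment displayed, so the other entries in the array belong to different comments.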