Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@khzzzzzzzz But a good discussion is able to answer questions. I don't think the average person is dumb. Most people can understand anything, as long as the discussion is honest. Like, we could discuss quantum computing, and I'm not going to say, "it's too complicated to explain to you." I think most of the people who have been interviewed and asked about the dangers of AI simply haven't thought about the dangers, and therefore they have no idea what to say. I have thought about the dangers and could write an essay about them. Dangers beyond what Musk said today. My point about social media is that we have a form of optimization currently which is possibly worse than AI. It's not that I don't understand AI, it's that I have a scientific view of things and that means I require proof. I'd need to see some evidence that AI could create a worse social discourse than what we have today. What we have today optimizes towards lies and hate. An AI might optimize at least some part towards hope. Many of the atrocities of history are born from hope. All I'm saying is, I'd need to see that AI would have a worse outcome than what we are currently seeing, which is the worst case, empirically, we've ever seen.
youtube AI Governance 2023-04-18T03:4…
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: approval
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9oeTCw51Rlh","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9ofK775EES1","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_UgzMj9VisD0PQ_0yNz54AaABAg.9oct-9HMRbc9ocu4-dGF5n","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgyF680OgpbmZyJcr354AaABAg.9ocsOv3V23D9od84cRtajZ","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzkwNGcmwP9qpL-j0x4AaABAg.9ocsCbqjuY29od2qzYWTDP","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"none"},
  {"id":"ytr_Ugz08rB3NMf6RMDfgt94AaABAg.9ocrUQQGYNN9od-Ctj_6g-","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugz08rB3NMf6RMDfgt94AaABAg.9ocrUQQGYNN9odPFZYfyEM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocsJfzYXqn","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocw3mp-lPK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytr_Ugx1Ks00i7FYDbgJa3p4AaABAg.9ocrNH6oLTS9ocxRhRj13o","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
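The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal sketch of looking up the coded dimensions for one comment follows; the field names ("id", "responsibility", "reasoning", "policy", "emotion") come from the response itself, while the helper name `codes_for` and the truncated example id are illustrative.

```python
import json

# A shortened stand-in for the raw LLM response shown above:
# a JSON array of per-comment coding objects.
raw = """[
  {"id": "ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9ofK775EES1",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "approval"}
]"""

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    entries = json.loads(raw_response)
    return next((e for e in entries if e["id"] == comment_id), {})

codes = codes_for(raw, "ytr_Ugw3ck7Twq1EJYlgV6V4AaABAg.9octnWG4eef9ofK775EES1")
print(codes["emotion"])  # → approval
```

Matching on the full comment id rather than array position keeps the lookup robust if the model returns the coded comments in a different order than they were submitted.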