Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here's what AI says about what Deepak Chopra is saying:

Deepak Chopra often speaks from a spiritual-philosophical lens and makes bold, sometimes speculative claims. When he says AI can tell us "what the Buddha thought" or "what Plato thought," he's likely referring to AI's ability to simulate informed guesses based on extensive textual analysis. But here's the key: AI doesn't know what the Buddha or Plato thought. It can approximate their views based on what's been written about them or by them.

AI doesn't think, feel, or understand. It processes patterns in data and predicts the most statistically likely response based on its training. It's more like a mirror than a mind.

- AI has no consciousness
- No awareness of truth or belief
- No access to the interiority of historical figures

Chopra's claim isn't total nonsense, but it overstates what AI can do. AI can't access the mind of the Buddha or the soul of Plato. But it can provide credible reconstructions or simulations based on texts, which can be useful for exploration or teaching, as long as we stay clear-eyed about what AI is and isn't.
YouTube · AI Moral Status · 2025-06-28T01:3… · ♥ 53
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgwwgH2fOdjMb3OOgkx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzZ5lQa9fUSd51J-_54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw_GiFAy1iDJeoXfbd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwMoWkFHp_x8cxjRcd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyAlizkGTtdPr5UVKV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgywjeAr1TG5d_2ho2x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UgyspM3azjsOeyfDvZN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugzm2LQVaPoLDi5D9C14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzNNehkPwDe-ajLxw14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwa0t2FpCihGq0EsrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
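The raw response above is a JSON array of per-comment codes keyed by comment id. As a minimal sketch (the function name and the fallback behavior are assumptions, not the tool's actual implementation), here is how such a batch response might be parsed, defaulting every dimension to "unclear" when a comment id does not appear in the array:

```python
import json

# One record in the shape of the raw response above (id taken from the dump).
raw = ('[{"id":"ytc_UgwwgH2fOdjMb3OOgkx4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(comment_id: str, raw_json: str) -> dict:
    """Look up one comment's codes in a batch response; fall back to
    'unclear' for any dimension when the id is absent (assumed behavior)."""
    by_id = {rec["id"]: rec for rec in json.loads(raw_json)}
    rec = by_id.get(comment_id, {})
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

print(codes_for("ytc_UgwwgH2fOdjMb3OOgkx4AaABAg", raw))
# {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'indifference'}
print(codes_for("ytc_not_in_batch", raw))
# {'responsibility': 'unclear', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'unclear'}
```

Under this assumed fallback, a comment whose id is missing from the model's array would surface exactly as the all-"unclear" row shown in the Coding Result above.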