Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think he’s right that “the world is stranger than we expect” *would indeed* be a conclusion, in hindsight, that follows from our survival. BUT, in foresight, this reasoning only works *under the assumption of a world that has our interests in favor*. In other words, he’s not taking neutral ground in the argument “AI could be dangerous, or AI could not be dangerous”. He’s conversing under an assumed premise — that it’s not possible for it to be dangerous. If this can be acknowledged and communicated it could be helpful, or at least useful as an argument. I’ve come to learn that these things result from a virtually unsurpassable barrier of worldview. It would take some highly contrived deconstructions to establish mutual understanding and comprehension, which would likely need to be individually tailored to each individual’s specific worldview. This is a very difficult problem which pervades many controversies in society, especially because it’s so underacknowledged.
youtube 2025-11-22T02:3…
Coding Result
Dimension      | Value
Responsibility | none
Reasoning      | mixed
Policy         | none
Emotion        | mixed
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytr_UgyLt72yzaSZcysuV6t4AaABAg.APwTAb5lMXOAPxGosJoPMJ","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugzh-ft4yAvSqIhEN-14AaABAg.APnZ-RGttVuAPngNRbNVjI","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugz39BpT_9T5d-Ba3xx4AaABAg.APnLRBNerHsAPnUVkqTBOM","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugyz6Ms8OUTnMzslEmN4AaABAg.APnJTRZi3UFAPoPA_rjhwm","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugyz6Ms8OUTnMzslEmN4AaABAg.APnJTRZi3UFAPpasnLv2TR","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwT7kNtEnbroo-TmBN4AaABAg.APnGSDtGtwKAPnUtLbFmHG","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgwT7kNtEnbroo-TmBN4AaABAg.APnGSDtGtwKAPnigSSaZIq","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytr_UgyrXQgmuXKHR-qkbLp4AaABAg.APn1wG7PpzpAPns05Io1Ze","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwZtlYjAycs5EqT-l94AaABAg.APmwguyhLamAPpBVTZYLL3","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgyLpLxuZqp_nffct6J4AaABAg.APmYrS9e-fYAPnVfM9IC4k","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
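
To inspect a raw response like the one above, it can be parsed into a per-comment lookup keyed by comment id. The sketch below is a minimal, hypothetical example (the `parse_codings` helper is not part of the original tool); the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come directly from the raw JSON shown here, but the shortened id in the sample string is made up.

```python
import json

# Shortened sample record (the real ids are long YouTube reply ids).
raw = '[{"id":"ytr_sample","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}]'

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw_response: str) -> dict:
    """Map each comment id to its coded dimensions.

    Raises KeyError if a record is missing a dimension, which
    surfaces malformed model output instead of silently skipping it.
    """
    records = json.loads(raw_response)
    return {r["id"]: {k: r[k] for k in DIMENSIONS} for r in records}

codings = parse_codings(raw)
print(codings["ytr_sample"]["reasoning"])  # → mixed
```

A dict-of-dicts keyed by id makes it easy to compare a single comment's coding (as in the table above) against the raw model output it came from.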