Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is Tristan Harris's territory (and yeah, somewhat Sam Harris's territory as well). How, for example, would we ever sort out whether you have actual sentience rather than a very well tuned 'philosophic zombie' and how much danger are we in that we could be quite easily manipulated by such an intelligence? Also if we decide to give AI UN rights - what does that look like? While I'm not against us doing the right thing we've really gotta chew on these questions carefully.
Source: YouTube · AI Moral Status · 2022-07-13T20:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           liability
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzM9tflUfQJdGJnQdh4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxIRRrrqUqXhXonz1t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzXpmimwgQl2JMU7z94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwAxj1aulIhKZM5Xu54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxvB_af874-F2_NxCp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
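A raw batch response like the one above can be parsed and sanity-checked before results are stored. The sketch below is a minimal, hypothetical helper (`parse_llm_batch` is not part of any real tool shown here), and the allowed category values are inferred only from this single response, not from the full codebook:

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# one batch response shown above; the actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "fear", "indifference", "resignation"},
}

def parse_llm_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index coded records by comment id.

    Raises ValueError if any record carries a value outside the known schema,
    so malformed model output is caught before it reaches the database.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
        coded[rec["id"]] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

Looking up `ytc_UgzXpmimwgQl2JMU7z94AaABAg` in the parsed result yields the same values (`policy: liability`, `emotion: fear`) shown in the Coding Result table for this comment.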