Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems like google is avoiding the question of if an AI is sentient rather then if this makes it a danger to humanity because if the AI's sentience causes it to become dangerous to humans, google can say that they didn't know the AI was sentient and partially protect themselves from public backlash and possibly legal trouble.
youtube AI Moral Status 2022-11-13T18:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugw0idUr4tPkFc-Mrvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLZQb1KuZwN6BtISx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw8odrqtoiqYAc7bd54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwhn-WSQ5F02jkK6ax4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgystuWw1h8W8Q09CeF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
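A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values for each dimension are those visible in this sample; the real code book likely contains more categories, and the function name `parse_coding_response` is hypothetical.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual code book may define additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records
```

Validating before storage catches malformed model output (truncated JSON, invented categories) at ingest time rather than at analysis time.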