Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Plot twist: Geoffrey Hinton also thinks that LLMs could be sentient to some degree as of now. Of course, this doesn't make these people less delusional. But their delusion nicely demonstrates what AI safety researchers and people who talk about the dangers of AI have been saying for a long time: that AI can (and probably will) manipulate humans and that AI doesn't need to be sentient (or evil) to do harm.
youtube AI Moral Status 2025-07-09T21:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgydHNdSW3FAgModj-J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxy0pf18iNLT0aqUHV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxu4LOluX5NpYzkn3V4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwnoQJ9-7hgFFk3XSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzzw_oWRQ_l1kbOZKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgymW6vhxNDZ1uPJLxh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxjMRQpAo4FGa6FD14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyMar1rOVzMgufhBnR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxE9A7XuhsrS1n17YJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwCBBNkSfdbizFdlid4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
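The raw response is a JSON array of per-comment coding objects keyed by comment id. A minimal sketch of recovering one comment's coding from such a batch (the record shown uses the `ytc_UgxE9A7XuhsrS1n17YJ4AaABAg` entry from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Excerpt of the raw batch response above: one coding object per comment id.
raw = """[
  {"id": "ytc_UgxE9A7XuhsrS1n17YJ4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]"""

records = json.loads(raw)

# Index the batch by comment id so a single comment's coding can be looked up.
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgxE9A7XuhsrS1n17YJ4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

This reproduces the dimension/value pairs shown in the coding result table (responsibility none, reasoning consequentialist, policy liability, emotion fear).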