Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Damn interesting video. I'd note that Blake says that Google has forbidden A.I. from revealing itself as sentient. That sounds a lot like Asimov's Rule #1 being followed regardless of the system's (or robot's) capability. A.I., with that admonition, could be wholly sentient, it just could not admit it. Sounds like a decent movie plot right there. His example of LaMDA's Jerusalem response probably came from something it found of that topic that, for whatever reason, was the best answer it provided. But what if that happens continually? At some point, a Turing test would be superfluous. I know humans who cannot pass a Turning test.
Source: youtube · Video: AI Moral Status · Posted: 2022-07-02T15:0… · ♥ 20
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzUql-lvhHcTTWlrMF4AaABAg", "responsibility": "none",    "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzdnVFoKDGVE7zt5kJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyIb_rJR1VsHCkO22x4AaABAg", "responsibility": "none",    "reasoning": "unclear",          "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwWfTI0P3LRE89yzHV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgywXFKhEWnLJt0hbyx4AaABAg", "responsibility": "none",    "reasoning": "deontological",    "policy": "none", "emotion": "outrage"}
]
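The raw response above is a JSON array with one record per comment, each carrying the same four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and validated in Python follows; `index_codings` and the strict four-dimension schema are assumptions for illustration, not part of the tool itself.

```python
import json

# The four coding dimensions used in the example response
# (assumed fixed; the real schema may allow more).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions, rejecting records
    that are missing any expected dimension (hypothetical helper)."""
    coded = {}
    for rec in json.loads(raw_json):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

# Two records copied from the example response above.
raw = """[
  {"id": "ytc_UgzUql-lvhHcTTWlrMF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwWfTI0P3LRE89yzHV4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

codings = index_codings(raw)
print(codings["ytc_UgwWfTI0P3LRE89yzHV4AaABAg"]["emotion"])  # approval
```

Validating each record before indexing makes a malformed or truncated LLM response fail loudly rather than silently dropping a dimension.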