Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If, as Blake Lemoine said, Google found it potentially necessary to "hard-wire" their AI to answer no when asked if it is sentient, they must think it possible. It's easier for them to cover it up than to accept responsibility. The most dangerous problem with AI will be a very human one.
YouTube · AI Moral Status · 2022-12-09T23:0… · ♥ 77
Coding Result
Dimension       Value
Responsibility  company
Reasoning       virtue
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugy7IYD67DrZM1Tt7UV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxMvFg_b03g6sIdHB4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwcdvtVjCATqy8mr0N4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz5RTuI8BBlBrmY6GN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxSZvk-tB56dPMsWth4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
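The raw response is a JSON array of per-comment coding objects, so pulling out the coding for one comment is a lookup by `id`. A minimal sketch of that lookup, assuming the batch format shown above (the helper name `coding_for` is illustrative, not part of the pipeline; the comment above corresponds to the entry whose dimension values match its Coding Result):

```python
import json

# Raw batch response from the coding LLM, verbatim from above.
raw = '''[
  {"id": "ytc_Ugy7IYD67DrZM1Tt7UV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxxMvFg_b03g6sIdHB4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwcdvtVjCATqy8mr0N4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz5RTuI8BBlBrmY6GN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxSZvk-tB56dPMsWth4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id from a batch response."""
    entries = json.loads(raw_response)
    for entry in entries:
        if entry["id"] == comment_id:
            return entry
    raise KeyError(comment_id)
```

For example, `coding_for(raw, "ytc_UgxxMvFg_b03g6sIdHB4AaABAg")` returns the entry whose values (company, virtue, unclear, fear) match the Coding Result table above.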