Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You're doing it entirely wrong though... all the AI is doing is maximising it's reward... you work for Google and you can't determine the basic concepts of sentience? Sentience requires both 'Human Input' (or Turing completeness) and 'time'. The AI does not respond to time, it can only respond when programmed to respond, thus negating it's sentience. (Can I work at Google, now that I've had to explain this to you?) :P
youtube AI Moral Status 2022-07-04T09:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxT-wpfiD-5AE1rfsd4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyYjQuvFX8BOQEjaGB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxJPPLLEhLE5m8D01R4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9HPoDN_f6f3SIDuF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyeTED3x7vDx-F7ijR4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
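A raw batch response like the one above can be turned back into the per-comment dimension table with a small parser. This is a sketch under assumptions: the function and variable names are illustrative, not part of the actual pipeline, and it assumes every record carries exactly the four coding dimensions shown (responsibility, reasoning, policy, emotion) plus an id.

```python
import json

# The four coding dimensions each record is expected to contain
# (assumed from the response schema above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response (JSON array of per-comment
    objects) into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    return {r["id"]: {d: r[d] for d in DIMENSIONS} for r in records}

# Example: the record that produced the Coding Result table above.
raw = ('[{"id":"ytc_Ugz9HPoDN_f6f3SIDuF4AaABAg",'
       '"responsibility":"company","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')

codes = parse_codes(raw)
print(codes["ytc_Ugz9HPoDN_f6f3SIDuF4AaABAg"]["responsibility"])  # company
```

Keying the result by comment id makes it straightforward to join the codes back onto the original comments, and a missing dimension in any record raises a `KeyError` immediately rather than silently producing an incomplete row.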