Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think this interview is interesting. What I remain entirely unconvinced about, and in fact don't think I've heard him or anyone bring up at all, is continuity — I see no reason to think this chatbot, essentially, experiences any continuity of self between sessions, for example. I would need concrete evidence that it is "experiencing" the passage of time or self awareness "within the computer memory" ala Moriarty trapped in the Holodeck's memory banks between episodes on Star Trek TNG. Can it even remember past conversations? There is no reason to think it possesses a true sense of self.
youtube AI Moral Status 2022-07-11T21:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgwWIzlAOGavi6771-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyk1KtPlKSNF6JZZDh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxobHMDHlPa0ZrIReR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCDqnjDixPBZM8MG94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx2zef8Byi0cSXEHrR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
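To inspect a single coding, the raw response can be parsed and indexed by comment id. A minimal sketch in Python, using the JSON shown above; the variable names are illustrative, and since the page does not state which id belongs to the comment above, the lookup id here is just one of the two entries whose values match the coding table:

```python
import json

# Raw LLM response as shown above: a JSON array of per-comment codings.
raw_response = """
[
  {"id":"ytc_UgwWIzlAOGavi6771-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyk1KtPlKSNF6JZZDh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxobHMDHlPa0ZrIReR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCDqnjDixPBZM8MG94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx2zef8Byi0cSXEHrR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
"""

# Parse the array and build an id -> coding lookup.
codings = json.loads(raw_response)
by_id = {c["id"]: c for c in codings}

# Pull one coding out; this entry matches the table above
# (responsibility=none, reasoning=deontological, policy=none, emotion=indifference).
coding = by_id["ytc_UgxobHMDHlPa0ZrIReR4AaABAg"]
print(coding["reasoning"])  # deontological
```

Each object carries the four coded dimensions (responsibility, reasoning, policy, emotion) plus the comment id, so a dict keyed by id is enough to join a coding back to its comment.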