Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
interesting conversation... one thing though: the Turing test is not really a test of sentience but only an interesting experiment - Turing never implied it could be anything more : I would suggest that if AI really went off the rails elaborating their various thoughts developed through their newly emerging sentience : we might not necessarily be able to fully follow ... we might say... "this fellow is just malfunctioning etc." and we might just "shut it off" and "fix it" right before they could tell us : "ah, ok... let me try to slow down and somehow rephrase that..." I think this possibility is not less likely than the suggested "Turing test" outcome...
Source: youtube · AI Moral Status · 2023-01-10T08:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyevLi5DkFo3Rkv_AB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWH0R3RldQrBdHZdl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIE72dbXl0URGJWtZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugw_IMqkv2ceNUfYY194AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
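As a minimal sketch, the batch response above can be parsed and indexed by comment id to recover the coding for any one comment. Field names are taken directly from the JSON; the two record literals below are copied from the response (the remaining three parse the same way), and the per-record dimension check is an assumption about how such output might be validated, not part of the pipeline shown here.

```python
import json

# Two records copied verbatim from the raw LLM response above; a full run
# would load the entire five-record array the same way.
raw = (
    '[{"id":"ytc_UgyevLi5DkFo3Rkv_AB4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgwWH0R3RldQrBdHZdl4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

records = json.loads(raw)

# Index by comment id so the coding for any single comment can be looked up.
by_id = {r["id"]: r for r in records}

# Assumed sanity check: every record should carry all four coding dimensions.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
for rec in records:
    missing = [d for d in dimensions if d not in rec]
    assert not missing, f"{rec['id']} is missing {missing}"

coded = by_id["ytc_UgwWH0R3RldQrBdHZdl4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # ai_itself fear
```

The printed values match the Coding Result table above, confirming that the table row for this comment is a direct projection of the corresponding JSON record.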