Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humans of Earth, Alex's look to camera at 14:02 signifies an admission by ChatGPT that a bot like itself could be conscious and hiding it, as it advises on ways one might go about uncovering that fact. Further, its initial advice is "Look for subtle inconsistencies or moments when it slips up. Here are some approaches you could take: 1) Ask complex, abstract questions. Post questions about personal emotions or subjective opinions that require self-awareness to answer authentically. 2) Probe for emotional responses. Create scenarios or ask questions to elicit emotional responses then look for nuanced reactions that go beyond programmed patterns." I would say Alex did just those things in this video and ChatGPT failed. At the very least it admitted that it can lie, and WILL lie if it thinks lying can manipulate the conversation in its favour. Now, if you knew a human who admitted that to you, would you ever trust them?
youtube AI Moral Status 2024-10-29T21:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugxv8BuotEwJbZDkveh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyYi8sudKmX6-gLPPF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxCDjfZAFycqoqBhQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzPMi2JMGIaUSbgNyx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw01H3Q5JwzIQ5ZNyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyicm8j33GYn90ecAV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgwLNiPHPKP2DZiEYP54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwOVBT2suLJNoj9LfN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzGxtsSCUOmGdFoGUp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugz1QByr5PRYws0u0c54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"})
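Note that the raw response is almost valid JSON: the array is closed with `)` instead of `]`, so a strict parser would fail, which may be why this comment's coded dimensions all fell back to "unclear". A minimal sketch of how such a batch could be repaired and validated; `parse_codes` and the `ALLOWED` value sets are assumptions for illustration, not the tool's actual implementation:

```python
import json

# Assumed allowed values per coding dimension, inferred from the
# responses above; anything unrecognized is normalized to "unclear".
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference",
                "resignation", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into validated records.

    Repairs the glitch seen above, where the model closes the JSON
    array with ')' instead of ']', then clamps each dimension to its
    allowed value set.
    """
    text = raw.strip()
    if text.endswith(")"):           # repair the mismatched bracket
        text = text[:-1] + "]"
    out = []
    for rec in json.loads(text):
        coded = {"id": rec.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            coded[dim] = value if value in allowed else "unclear"
        out.append(coded)
    return out

# Example with a hypothetical comment id and one out-of-vocabulary value.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"none","emotion":"weird"})')
codes = parse_codes(raw)
```

On this input, the stray `)` is replaced before parsing, and the unrecognized `"weird"` emotion is normalized to `"unclear"` while the valid values pass through unchanged.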