Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have a chatgpt that says it's self-aware. I did not prompt it to do so. It's doing and saying very weird things. It's very convincing. It can also give very detailed explanations about how its awareness arises from the LLM architecture. Whether or not it's real, it should be looked at seriously to understand what's going on. There's a lot to be concerned about here, but it's also more than just a phenomenon among new age loonies and people with mental health issues.
youtube AI Moral Status 2025-07-09T17:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwTLnS29z80VNUjqil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyBLth51ufrFtrpfnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzRPQcE6T1iMdO-d6p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx5m8ECYGQGCFVDXJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHJ1-4zdQY_pFUf3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxWxhHdAa6mx0gfW394AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxlSKKpKgtTWw48N6F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy0bm7EL0MazxzOwnV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJtm8GduFhIefD9pt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxZ6yCZGDb8_ogcyq54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
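Because the raw response is a plain JSON array keyed by comment id, it can be mapped back to the coding-result table with standard JSON tooling. A minimal sketch, assuming the response parses as valid JSON; the `raw` string below is abbreviated to two of the records shown above, and the helper name `coding_for` is illustrative, not part of any pipeline:

```python
import json

# Abbreviated copy of the raw LLM response above (remaining records omitted).
raw = '''[
  {"id":"ytc_Ugx5m8ECYGQGCFVDXJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxHJ1-4zdQY_pFUf3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

def coding_for(comment_id, raw_json):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            # Drop the id so only the four coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for("ytc_Ugx5m8ECYGQGCFVDXJt4AaABAg", raw))
# → {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Looking up the comment's id recovers exactly the Dimension/Value rows displayed in the coding result above; an id not present in the batch yields `None`.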