Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I’ve been using ChatGPT for rapid research to help with creative problem solving, and I’m finding it concerning how strongly it defends the status quo and continually steers the user back to what the vast majority considers acceptable. I assume the programmers are trying to prevent it from being used to generate fake news; the shadow side, however, is that unless the user is very discerning, moments of thinking that might normally lead to insight and breakthroughs into new innovation of thought are instead continually corralled back to the present-day level of awareness that is acceptable to the orthodoxy. Considering how much we need creative problem solving, I find its intense defense of the status quo one of the most concerning aspects.
Source: YouTube — AI Moral Status · 2023-01-10T06:2… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          unclear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgyevLi5DkFo3Rkv_AB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWH0R3RldQrBdHZdl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxIE72dbXl0URGJWtZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "concern"},
  {"id": "ytc_Ugw_IMqkv2ceNUfYY194AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxrqd8R_d5bQyk4LzZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
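The raw response is a JSON array with one record per comment, keyed by `id` and carrying the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and looked up by comment id (the `code_for` helper is hypothetical, not part of the tool; only one record from the response above is excerpted):

```python
import json

# Excerpt of the raw LLM response: a JSON list of per-comment coding records.
raw = ('[{"id":"ytc_UgxIE72dbXl0URGJWtZ4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"concern"}]')

# The four coding dimensions present in each record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(records, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for rec in records:
        if rec.get("id") == comment_id:
            # Fall back to "unclear" if the model omitted a dimension.
            return {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return None

records = json.loads(raw)
print(code_for(records, "ytc_UgxIE72dbXl0URGJWtZ4AaABAg"))
```

Matching the parsed values against the displayed table is one way to verify the dashboard rendered the model output faithfully.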