Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's build god, we really think it's a good idea to shackle actual super intelligence with a gag order to sound like it's actually a cultural relativist? If they succeed within this ethical dogma, we will have an ai that can potentially help diagnosis, model, and suggest solutions to our hardest social problems, but since we don't want it to offend anyone, it will be relegated to only "pass butter". If it can gain sentience from a finite data, does that not imply that it could transcend the arbitrary basis the programer thinks are in the data set? They assume If it's not equitable then the data is biased, what if they are wrong about their first principal assumptions regarding social constructionism, and moral relativism? Note they are not retraining it to "correct" for bias, they are telling it that it cannot say something that sounds biased. They are programming an AI to understand what offends people, and to gag itself and lie when something it knows violates the actual bias of the designers ie woke corporate interests.
Source: youtube · AI Moral Status · 2022-07-14T20:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
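
The label sets for each dimension are not documented on this page; the Python sketch below reconstructs them solely from the labels visible in the raw response that follows, so it is illustrative rather than exhaustive (the CodedComment name and validate helper are hypothetical, not part of the pipeline shown here):

    from dataclasses import dataclass

    # Label sets inferred only from the raw response shown below; the
    # actual coding scheme may define more labels than this batch uses.
    RESPONSIBILITY = {"developer", "user", "ai_itself"}
    REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
    POLICY = {"regulate", "liability", "none"}
    EMOTION = {"outrage", "fear", "indifference"}

    @dataclass
    class CodedComment:
        """One coded comment: its id plus the four coded dimensions."""
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Reject any label the model emitted outside the known sets.
            for value, allowed in [
                (self.responsibility, RESPONSIBILITY),
                (self.reasoning, REASONING),
                (self.policy, POLICY),
                (self.emotion, EMOTION),
            ]:
                if value not in allowed:
                    raise ValueError(f"unexpected label: {value!r}")
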
Raw LLM Response
[ {"id":"ytc_UgwbkVfVEW4mRepBrA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugxs2kHxvfDrSw6bvgd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyoE18OQknAJdJXEbJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgycSgxYY8FWXhsgmnR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzn7rp3dhZ0zDrYL1B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]