Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
We need to ask what is consciousness? I don’t think consciousness can come from 1s and 0s. Let’s just look at 0101 and say that = Yes?, 0101 can’t equal anything else that is what I programmed. A computer could prove it thinks for itself by doing something else, but what does that mean if computers could chose to disobey? Also you define intelligence by computers combining all information on the internet and formulating answers which is a secular and limited world view. The computer delivering answers is still answers based on bound rules and doesn’t even see its actions beyond 0s and 1s doesn’t even recognize that you asked a question but through devices it heard your voice and the creator set the bounds to hear it in 0s and 1s. And so it heard maybe 01001011 and by the bounds that were set regulated a 0101 response which I declared was yes. Why should a computers sense of reality be anything more than 0s and 1s? Only those with a god complex think they can manufacture a spiritual concept through physical means, with the limited definitions they possess, pretending that random accidents can explain everything. Here is one thought? If artificial intelligence could be stumbled upon on accident? Why do we have a need to build faster computers? Wouldn’t evolution just eventually conclude there? And if evolution means there is no designer before it then computers don’t need our help so our help is merely the need to be Gods ourselves?
youtube · AI Governance · 2023-11-03T12:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
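
As a cross-check, a minimal sketch of how a record like this could be validated in Python. The allowed value sets are inferred from the values observed in the raw response below; they are an assumption, not the tool's documented codebook.

    # Minimal validation sketch. ALLOWED holds only the values observed
    # in this batch (an assumption, not a confirmed schema).
    from dataclasses import dataclass

    ALLOWED = {
        "responsibility": {"none", "company", "ai_itself", "distributed"},
        "reasoning": {"deontological", "consequentialist", "unclear"},
        "policy": {"regulate", "liability", "none", "unclear"},
        "emotion": {"mixed", "indifference", "resignation", "outrage", "fear"},
    }

    @dataclass
    class CodingResult:
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def invalid_dimensions(self) -> list[str]:
            # Names of dimensions whose value falls outside the vocabulary;
            # an empty list means the record passes the check.
            return [dim for dim, ok in ALLOWED.items()
                    if getattr(self, dim) not in ok]

    # The record shown in the table above:
    record = CodingResult("none", "deontological", "unclear", "mixed")
    assert record.invalid_dimensions() == []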
Raw LLM Response
[ {"id":"ytc_UgxMoZ2Sb4ZIMvgBVp14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgybKtC7JIBFEke3DCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugx0Xnge0kZPgrD8mol4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugwmn0RyVQXIoMRAUFl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzeZws1GTTwnWXw6O54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxUUCbv8CN3Ygpk3f54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxJB6tX5ebT6kGvRPp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyFWky5A1QL02tebI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx9e7q9vFw5JVRzDAd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzkkGZY1X1M2-w4FxR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"} ]