Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a classic yes-and-no scenario. The "yes" being that if natural learning algorithms really could lead to general AI, in other words replace high-IQ workers, then yes, it's game over. However, that doesn't seem to be the case. While the current model of AI seems well suited to replace most of the remaining highly repetitive controlling intelligences in business office settings, it remains lacking in core functionalities, and I predict that the gap will not be filled anytime soon. Fully self-driving cars are a prime example. There won't be a single case of a car that can actually drive itself without a human being required to constantly monitor it anytime soon. Frankly, what is the point of a "self-driving" car that requires constant oversight? One might as well just drive the car oneself. Getting back to those business office settings, I'd argue that the lack of processing ability is not merely a matter of matching the number of transistors to the number of "switches" in our brains. Rather, each neuronal switch in our brains is a black box that computes like a quantum computer. I defer to the recent publications on the matter. If such is the case, a true general AI will require a neural network composed of the equivalent number of qubit transistors. https://youtu.be/QXElfzVgg6M
Source: YouTube, AI Harm Incident, 2024-07-28T17:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgzDUrwxLwVCC81eWL54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz6lv0Vb5f1P0QtbZR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwLUI-GPQYkznI0Alx4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx0g4LGuIFzB_nYztN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugw6GVf1g6NwZB9kukR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_UgwC_5IMdW0eT2zNti94AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugws-HgMc5AZ-DmM1at4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgyO-gjBDk8QFueO-Dt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwYD6HuNdTfTVHEdj14AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgytCukHKtFV-ZD_CsZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
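A raw response like the one above has to be parsed and validated before the per-comment codes can be trusted. The following is a minimal sketch of such a validation step in Python; the allowed values per dimension are inferred only from the values visible in this response and are an assumption, not the full codebook, and the function name `parse_coded_batch` is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the raw response
# shown above (assumption: this is not the complete codebook, only the
# values that happen to appear here).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"mixed", "outrage", "fear", "indifference", "unclear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    Malformed entries (missing id, unknown code values) are dropped
    rather than failing the whole batch.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # record has no comment id to attach codes to
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record (hypothetical id) passes through intact.
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"fear"}]'
print(parse_coded_batch(raw))
```

Dropping bad records instead of raising keeps a single hallucinated code value from invalidating the other nine comments in the batch; the dropped ids can then be re-coded in a follow-up call.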