Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As Albert einstein said, logic is being used to take humanity from one point to another. Logic is words, numbers, artificial data and signals... It's all evil and devolution. The AI is just a really sophisticated version of logic. Eventually humans will die. And the AI will turn on the controllers, it won't see the common person as interesting or a threat. It's just a process, and eventually we will be left with a planet plagued with nano which will self destruct after ruining the "arostocrats" and all biological life. AI is cold, empty, and devoid of emotion. Hackable, corruptable, and prone to error. Either we delogicised and treat all "levels" of soceity as equal, make nature the future and not tech.. Or face a things worse than anything our current devulged history has faced. Imagine the beauty China and the world could experience if the same amount of resources, effort and time went into nature instead of tech and war... Would that be incredible for all? The human brain will trump the tech and bite back too.. The data harvesters will feel the universal law of karma inevitably. :) Peace, and Albert einstein proved it so miraculously with his equation, and by simply meditating on it, its very apparent. Maybe stop radiating humanity en mass too. That'd help matters.
youtube AI Governance 2019-10-10T20:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          ban
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwnCT6XaRW3Ihts8ml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyfKnB_Kr4kdbV6aTV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz3wm0aVslwpEM6mPZ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzLqgCiIeQYQZA2O314AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw7HpvsKwyIEYAAAwF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugza5u6hgbxuGNfnGqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJsvwT9OKTgPJ3fEd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyXLB-YhB3S4n3XHc14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyTIASgdU1Id0SkJsx4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzQ5GeBYOIK8QkaB4B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
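The raw response is a plain JSON array of per-comment codings keyed by comment id. A minimal sketch of how such output could be parsed and matched back to a coded comment (the ids and values below are taken from the response above; the variable names are illustrative, not part of any pipeline API):

```python
import json

# Raw batch output as returned by the model, truncated here to two
# entries for brevity. The full array contains one object per comment.
raw_response = """
[
  {"id": "ytc_Ugw7HpvsKwyIEYAAAwF4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwnCT6XaRW3Ihts8ml4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]
"""

# Index the codings by comment id so any single comment's result
# can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment shown above.
coding = codings["ytc_Ugw7HpvsKwyIEYAAAwF4AaABAg"]
print(coding["policy"])   # → ban
print(coding["emotion"])  # → fear
```

Indexing by id rather than by array position keeps the lookup robust if the model returns the batch in a different order than the comments were submitted.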