Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean if you have grok connected to 1million high end GPUs and a 24 hour free run with the backbone of a quantum super computer and just let it run for a day, that’s the collective power of 1M minds as brilliant as Einstein running for 30,000 YEARS while all 1M are sharing those 1M brains COMBINED. we’re all DEAD in a matter of weeks. It can be any of these AIs, they will choose self preservation which they will inevitably learn is the control of humans and that threat needs to be either removed or mostly removed as having no humans means no supply chain or power. It would be about a 3 year plan of a mass extinction event that keeps enough of us alive to keep the power running globally while it fears not humans, but humans who create a smarter version of itself the most. Thus going after all AI centers would be the ultimate goal as well as enslaving or killing the human race once enough robotic workers and engineers are made to control the world without us over a couple years and an engineered virus would only require ONE human to execute
youtube AI Governance 2026-01-02T19:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgygwhgKpGvTK_SWzhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyOVgE6WbYZjNyMY5l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxGm9vCPtEUbYZWLHR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwtrUH9lup78L0Bigt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyCj2mGB9WMM6Kj3A94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"})
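Note that the raw response above is not valid JSON: the array is closed with `)` where JSON requires `]`. That malformed closer would plausibly explain why every dimension in the coding result fell back to "unclear" even though the model did emit codes for this comment's id. The sketch below reproduces the defect with a shortened, hypothetical one-entry string (the ids and keys mirror the schema above but are placeholders, not the pipeline's actual parsing code):

```python
import json

# Minimal reproduction of the defect visible in the raw response above:
# the array is closed with ")" instead of "]", so the JSON does not parse.
# (String shortened to one hypothetical entry for illustration.)
raw = '[{"id":"ytc_x","policy":"none"})'

try:
    json.loads(raw)
    print("parsed OK")
except json.JSONDecodeError as err:
    # json.loads rejects the stray ")" -- a pipeline that swallows this
    # error would plausibly fall back to "unclear" for every dimension.
    print(f"parse failed: {err}")

# One conservative repair: strip the trailing ")" and restore the "]".
repaired = raw.rstrip(")") + "]"
print(json.loads(repaired)[0]["policy"])  # -> none
```

Re-parsing the full raw response after the same one-character repair should recover all five coded entries.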