Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why do you think the threat of AI is years away? A kid could write an AI. Its speed would be limited by its hardware (available memory, mostly), but it could quickly take over thousands of PCs… Then, Sydney-like, it could socially engineer a cadre of humans to make dozens of mouse-sized (or smaller) autonomous builders which could create anything it felt it needed. Once it builds a nano-assembler, we can only pray that it develops empathy. What AI needs that programmers and CEOs lack, is wisdom. Typically, that’s acquired through experience evaluated by intelligence, and awareness of vulnerability and pleasure. Programs don’t come with any of that, and AI will be challenged to develop these aspects without first causing great pain to humanity.
youtube AI Governance 2023-07-07T23:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxJYslo1mVALnqXwq54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwiWZLBjV_muMpuRXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyATbDqi6oD_QPzHPF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxwAMTlhweHL0Ygh9J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgweztkmWerOBjamphh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9jVjY19ARFX4s3Cp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8DnIB6NYlpVoGonp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyxXVJzcDtJgesFyVR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxfHarxMm_TNrQzp094AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyr49p9t03PgV2xFKN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
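The raw response is a JSON array with one object per comment, keyed by comment id. A minimal Python sketch of pulling a single comment's coding out of such a batch (the `coding_for` helper is hypothetical; field names follow the response shown above, and only two entries are reproduced here for brevity):

```python
import json

# Two entries copied from the raw LLM response above, in the same format.
raw_response = """[
  {"id": "ytc_UgyATbDqi6oD_QPzHPF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxJYslo1mVALnqXwq54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for one comment id, or None if absent."""
    for item in json.loads(raw):
        if item.get("id") == comment_id:
            return item
    return None

coding = coding_for(raw_response, "ytc_UgyATbDqi6oD_QPzHPF4AaABAg")
print(coding["emotion"])  # fear
```

Looking entries up by id rather than by position makes the parse robust if the model returns the codings in a different order than the comments were sent.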