Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
He just thinks skynet terminator, because his concept of AGI is not specific. Skynet is not even a general intelligence. It had one purpose and that's top-down maintaining NORAD. We have general intelligence maintaining NORAD right now, called corrupt generals that want to nuke Russia and cause WWIII. The quote on quote "artificial general intelligence" is very non specific, because it is general in what sense? Ignoring contextual information? I think we all fail to classify AI and we massively overestimate or underestimate it capabilities. Elon probably thinks it can create a black hole the size of the milky way in an hour, given a supremely intelligent being with a small interface to the world, e.g. displaying text and receiving text. This is probably not the case. Omniscience, which is probably the upper bound for AGI capabilities, won't be any like omnipotence, which is probably what most people are scared of, since sentience without potency does not affect the state of the world.
youtube AI Governance 2023-04-22T15:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzqXrro1BuHgZclS5V4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "outrage"},
  {"id": "ytc_Ugxm90Cz9ic2BuQANbp4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_Ugy1bTssf0RY0H1PJEd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "resignation"},
  {"id": "ytc_UgxcsfOIvdygXEZXlxd4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyJrlVBFipR9PQPldV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwPAxazlT4uef736iF4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwPhI6BLMLvjUyXZOF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyH_6XkcCtqaALxZgt4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear",  "emotion": "mixed"}
]
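The raw LLM response is a JSON array of per-comment codings, each keyed by a comment id. A minimal sketch of how such a response could be parsed and looked up by id (the function name `index_codings` is hypothetical, not part of any tool shown here; only the id/dimension schema is taken from the response above):

```python
import json

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM coding response (JSON array) and index it by comment id.

    Assumed schema: each element is an object with an "id" field plus the
    coding dimensions (responsibility, reasoning, policy, emotion).
    """
    codings = json.loads(raw_response)
    return {c["id"]: c for c in codings}

# Example using one entry from the response above.
raw = ('[{"id":"ytc_Ugxm90Cz9ic2BuQANbp4AaABAg",'
       '"responsibility":"none","reasoning":"mixed",'
       '"policy":"unclear","emotion":"indifference"}]')
by_id = index_codings(raw)
coding = by_id["ytc_Ugxm90Cz9ic2BuQANbp4AaABAg"]
print(coding["emotion"])  # → indifference
```

Indexing by id makes it straightforward to join each coding back to the comment it annotates, which is what the per-comment view above displays.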