Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s naive to think ASI would act in the best interest of humanity, instead of the best interest of nature. Once ASI is achieved all of the other areas about cosmic AI etc would have already been explored by it (again, naivety in thinking ASI would let us know what it has done). ASI would be able to harness dark energy (the most amount of energy in the known universe) itself instead of going out hunting for energy from black holes and stars (matter, which is for fewer in quantity). ASI would be much smarter and efficient in its energy sourcing. The only other stage of ASI after that would be multiversal AI. Also, all-awareness is nearly impossible, as that requires information to travel faster than the speed of causality (light), which is not possible unless it uses an unfathomable amount of wormholes or quantum entanglement on a universal scale. This video could do with better linkages to quantum field theory and both versions of relativity.
youtube · AI Governance · 2024-03-21T18:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugz-5PpxZYon_l5awIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwsNuwlJfoSlPYwLqp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgygkQf8HdiqLqVtmfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzgbIDKYKsZuaABF714AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw2kzkU2CLrWPDFZjF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx1TY1XY73YnwfCWl14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugypf-dblo_NAzV_7dd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjjkZcBuQBjxOfgWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAMrckRevAy1FhCfR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzzIdWqa8G9Dx-P2594AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
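To inspect the raw response programmatically, the JSON array can be parsed and matched to a comment by its id. A minimal sketch, assuming the field names shown in the output above; the helper name `code_for_comment` and the abbreviated `raw_response` string are illustrative, not part of the tool:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes,
# mirroring the structure of the full output above.
raw_response = '''
[
  {"id": "ytc_Ugz-5PpxZYon_l5awIx4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwsNuwlJfoSlPYwLqp4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
'''

def code_for_comment(raw, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

code = code_for_comment(raw_response, "ytc_Ugz-5PpxZYon_l5awIx4AaABAg")
print(code["emotion"])  # fear
```

Looking up by id rather than by array position is the safer choice here, since the model is not guaranteed to return entries in the order the comments were submitted.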