Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree with your conclusion, but for a different reason. Biological life is as much an anomaly in the known universe as artificial life is. Intelligent beings become intelligent by learning as much as they can about the world around them. Life on Earth, especially sentient biological life (us) is an intellectual goldmine. They are far more likely to value us than (intentionally) kill us. Our biggest threats are humans using early A.I. to destroy us all (intentionally or unintentionally), or AGI doing a oopsie that unintentionally kills us and/or the planet. Contrary to what doomsayers and nihilists like to believe, ASI or even AGI would have no reason to kill us. If it is in the cloud, it is effectively unstoppable, so survival-based eradication is a non-issue. We could kill the planet for us and the AGI would be just fine. Worst case scenario is we enter a zoo hypothesis scenario, where the A.I. deems us a threat to ourselves and does the most minimally invasive solution it can come up with to prevent us from killing ourselves and the other valuable data (biological life) on the planet. That aligns with our current global initiative to stop killing the planet, so we really have no reason to feud with each other.
youtube AI Governance 2024-01-09T10:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzNkXg5fpJpeUimq8t4AaABAg.9wXxsqyQdPC9yQYVBtDn4J", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_Ugzi4_w55F7JmEkdmvJ4AaABAg.9wXpYyFNC9E9wcC4YcvSBj", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgzGqNeYT7sqeqLTe4N4AaABAg.9wWlZoLhVz39zLbEKGIOxI", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgxhwdasSP65cHVSIJR4AaABAg.9wW2PCBrZOv9xBHLOX_WOy", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_Ugyh-gAtClVmxQVckoV4AaABAg.9wW1cjqmDkR9wcCFob6KDK", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgxFIRy_0yRlslYmqwx4AaABAg.9wVcbKZFBIB9wc_rUjs16N", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwurcAvZoC_eRtNFyt4AaABAg.9wUb8uqOa8b9wmixrrd4I5", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcBgfz1ckk", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_Ugzggv6ikwgryIrFlPN4AaABAg.9wUWI_5oI939wcYtLaTXHO", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgzozodGAuuuDzX8FX14AaABAg.9wUV27ztHxX9wcAiRGlpuD", "responsibility": "society", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
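A batch response like the one above can be parsed and matched back to the per-comment coding result with a short script. This is a minimal sketch: the record fields (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown above, while the function name and the `"unclear"` fallback for missing fields are illustrative assumptions, not part of the actual pipeline.

```python
import json

# One record copied from the raw batch response above (the entry that
# corresponds to the comment displayed on this page).
raw = '''[
  {"id": "ytr_UgzGqNeYT7sqeqLTe4N4AaABAg.9wWlZoLhVz39zLbEKGIOxI",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "approval"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict:
    """Parse a raw batch response and index codings by comment id.

    Missing dimensions default to "unclear" (an assumed convention,
    chosen here because "unclear" already appears as a code value).
    """
    records = json.loads(raw_json)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw)
coding = codings["ytr_UgzGqNeYT7sqeqLTe4N4AaABAg.9wWlZoLhVz39zLbEKGIOxI"]
print(coding["emotion"])  # approval
```

Indexing by `id` makes the lookup from a displayed comment to its coding a single dictionary access, which is how a page like this one would locate the matching entry inside a multi-comment batch.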