Raw LLM Responses

Inspect the exact model output used to code each comment.

Comment
One thing i question is why do people assume an agi will do anything at all? Like we program weights and rewards to direct ai to certain tasks, but what happens when it grows beyond our design? People assume that the ai will continue to do its job and outpace humans, but why would it? Why would it do anything at all? What happens when the ai grows beyond its purpose?
youtube 2025-01-06T04:1… ♥ 2
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxcS11qJmr67RnXP814AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxrY8I3o-WdT64ajA54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx93alIs0FQSwpXsz94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_Ugx7rZ_5pivD9sTQCw14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxxJFnrLESkbLj2-zN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgwlP5cBc6ZRq77xcWN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxFm-zAIXRvgERFvdd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxFiVt1sW6VbHfs0dJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyBxSxMalmFHRZq28N4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugy20nFzOrEJHqOTVPx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
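A raw response like the one above is a JSON array of per-comment codings, so matching a coding back to its comment is a parse-and-lookup. The sketch below is a minimal, hypothetical illustration (the function name `coding_for` and the two-record excerpt are ours, not part of the pipeline); it shows how the comment id `ytc_UgxrY8I3o-WdT64ajA54AaABAg` resolves to the dimension values displayed in the table.

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = """
[
  {"id": "ytc_UgxcS11qJmr67RnXP814AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxrY8I3o-WdT64ajA54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def coding_for(raw: str, comment_id: str):
    """Parse a raw response and return the coding record for one comment id,
    or None if the model did not emit a record for that id."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

coding = coding_for(raw_response, "ytc_UgxrY8I3o-WdT64ajA54AaABAg")
print(coding["emotion"])  # fear
```

Returning `None` for a missing id (rather than raising) makes it easy to flag comments the model silently skipped, which is one of the failure modes this raw-response view exists to surface.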