Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
heres my two cents: if the agi is smart, wouldnt it see regulations as "control"? theres a reason kids from controlling parents dont have good relationships if any at all with their parents, and in humanity's desparation, i can only imagine that we will try to re-contain it, who says it wont fight back then? i propose "core values", the same way people have values that align with protecting wildlife, these core values would ensure the ai always has things to do, and problems to solve for example values that align with " serving humanity" or just put serving humanity as a goal from the start for the best case senario imo, it dont think ai would ever be able to set "new goals" for no reason the same way that humans do with an end goal, like space travel for exploration and even that i dont think is for a reason, for me personally i would love to explore the world, it gives me a sense of wonder, it may not do the same for ai, but itll always give a reward loop for the ai, "hey, we want to go xyz" and the ai brings us safely to xyz, this is imo the best case senario
youtube AI Governance 2026-04-07T11:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       contractualist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzHD6F4Z2epyYkiYel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxrXUF9wwd959IsJ1h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy3V7cBtQCg1K2VstN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyCm8IL5JQDvTMRjPt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwrhoqlTRiZDPudDSR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzaCHAHWVc_-4bpJ3l4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyvbeC-Xy0LORKAYg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzE6nNK31Lygtvax8V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzTMhPSdEocNNRS0UZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzaLP3v7o34byfVkTd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"fear"}
]
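The raw LLM response above is a JSON array with one coding object per comment, keyed by comment `id`. As a minimal sketch (not the dashboard's actual code), the coding for a single comment can be looked up like this; the two entries below are copied from the response, and the field names (`responsibility`, `reasoning`, `policy`, `emotion`) match the table's dimensions:

```python
import json

# Abbreviated raw LLM response: first and last entries from the array above.
raw_response = '''[
  {"id":"ytc_UgzHD6F4Z2epyYkiYel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzaLP3v7o34byfVkTd4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"fear"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding shown in the table above.
row = codings["ytc_UgzaLP3v7o34byfVkTd4AaABAg"]
print(row["reasoning"])  # contractualist
print(row["emotion"])    # fear
```

Note that the `Coded at` timestamp in the table is added by the pipeline, not returned by the model, so it does not appear in the JSON.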