Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
HOLLYWOOD MOVIE... Here's the thing about assigning risk estimates, you can say 1% or 25% or 100% it does not matter at all because once we pass the point of being able to build AI that is capable of destroying humanity and building in the safety controls you have to add back in the EVIL element. That is to say that there are forces in humanity that would do evil just because they can so you would now need to build in and AI evil defense system where good AI fights evil AI. Now the problem becomes does that fight destroy humanity in order to win? A logic circle, it would be logical at some point to recognize that evil AI can only be destroyed by destroying everything if the only acceptable outcome is that evil AI must be destroyed. Now good AI has achieved the goal of evil AI...
youtube AI Governance 2025-06-24T12:4…
Coding Result
Dimension      | Value
Responsibility | distributed
Reasoning      | consequentialist
Policy         | regulate
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyolKgzen8ewYmRVg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy6alxdRnqQ1YvAk9F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz-2CcJGtGyGMNVgXZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwerruDXJiXyR6nTEF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzxjqv5GYYxWjLswZN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGjyK9dW5IcR3nRrt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzknxbakj5ngyG4oOx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyvqZi5XV3wEAE2CU94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy674Yux2-5xrsDW254AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyUyZ5dL--3vEmddFR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
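The raw response above is a JSON array of per-comment codings, one object per comment id across the four dimensions shown in the table. A minimal Python sketch of how such a response could be parsed and validated is below. The `ALLOWED` sets contain only the values observed on this page; the full codebooks may include categories not seen here, and the function name `parse_raw_response` is illustrative, not part of the tool.

```python
import json

# Codebook values observed in this raw response; the real codebooks
# may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting unknown codes."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        coding = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} code {value!r}")
        coded[cid] = coding
    return coded

# One record from the response above, used as a self-contained example:
raw = ('[{"id":"ytc_UgyGjyK9dW5IcR3nRrt4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
result = parse_raw_response(raw)
print(result["ytc_UgyGjyK9dW5IcR3nRrt4AaABAg"]["policy"])  # → regulate
```

Validating against an explicit codebook at parse time catches the common failure mode of the model inventing a label outside the schema, rather than letting it silently enter the coded dataset.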