Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know, really, but I think the way this usually works is that the highest-risk category prevails. So a chatbot that provides medical advice would be in the risk category for providing medical advice, if that is the riskiest thing it does. An AI that is a chatbot, but also knows everyone's face, tracks their activities, and manipulates their behavior would be prohibited, because its most dangerous capabilities are prohibited. It doesn't matter that it can also chat with you. This is how other dangerous things are regulated -- a bomb can have innocuous uses as a doorstop or paperweight, but that's not important -- what's important is that it can blow you up. Similarly, an AI system that has many capabilities should be regulated on the basis of the most dangerous thing it can do, with possible additional precautions if it has other capabilities that are also dangerous.
youtube AI Governance 2024-08-03T14:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           regulate
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgwOPOoKViMgXaGBLMd4AaABAg.A791GIkyX-SABJuR5EmNrb","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLA9wkFPU8kzI","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLA9wkUXYn5g0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwecTIVWGF1ZtQVAhh4AaABAg.A76RS4N1AOLAAsItQpuE7J","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgwUzvO0lMKFVTwvJrJ4AaABAg.A70RvGdf2hrA8Th9Vs1Tdf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyF7TlzBbLdWLhHxa94AaABAg.A6zuvGSZjE7A70R3TtN6mb","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxoW9FFG-sV9r8-YDl4AaABAg.A6zlAQGfKDPA8E5WhprqBf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgzyqHWkk1pH4lG2_8R4AaABAg.A2EtlyQhtu-A6fyUxKrFwi","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytr_UgzPWCu1w57TDg9GjM54AaABAg.9whplHrPqDbA3_DrjtfUL_","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzPWCu1w57TDg9GjM54AaABAg.9whplHrPqDbA3aV9J0UoiL","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
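The raw response is a JSON array of per-comment codings keyed by comment id, so recovering one comment's dimensions is a lookup by id. A minimal sketch of that parsing step in Python, using the standard `json` module; only two entries from the response above are reproduced here, and the variable names are illustrative, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of codings, one object per comment id.
# (Two entries copied from the response above; the full batch has ten.)
raw_response = """
[
  {"id": "ytr_UgzyqHWkk1pH4lG2_8R4AaABAg.A2EtlyQhtu-A6fyUxKrFwi",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "regulate", "emotion": "unclear"},
  {"id": "ytr_UgzPWCu1w57TDg9GjM54AaABAg.9whplHrPqDbA3_DrjtfUL_",
   "responsibility": "company", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Index the batch by comment id so a single comment's coding can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# The coding shown in the table above belongs to this id.
coding = codings["ytr_UgzyqHWkk1pH4lG2_8R4AaABAg.A2EtlyQhtu-A6fyUxKrFwi"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

Run against the entry for the comment shown above, this prints the same four values as the Coding Result table (responsibility: ai_itself, reasoning: deontological, policy: regulate, emotion: unclear).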