Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
What if we sepatated AI into two functions: 1. Intellectual (like chat GPT) and 2. Physical (mechanical robots without thinking capacity) and made a hard rule that you are not allowed to mix them together? I don't how you could police this across all countries, but is it worth thinking about?
Source: YouTube · AI Governance · 2024-01-04T03:1… · ♥ 5
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | distributed                |
| Reasoning      | contractualist             |
| Policy         | regulate                   |
| Emotion        | mixed                      |
| Coded at       | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz7uP8rsHAmdbJOO-94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyey8W3s40xefTS_gZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwgmSiUJedvFU0GC4d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw04T-aLSPRIqBwePx4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx6EknMIQXZps-hDuR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwDB6vIGEZNGG8IHjx4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzUa-2nIPCO32wI4H54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxsJNpE8f7W8rR8NE54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxgz5lNeedu4JMe8M14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzON0IRdLCkaxNvjPR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
```
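The coding result above is recovered from the batch response by matching on the comment's `id`. A minimal sketch of that lookup, assuming the raw response parses as a JSON array of per-comment objects like the one shown (the function name `extract_codes` and the two-entry excerpt are illustrative, not part of the actual pipeline):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw_response = """[
  {"id": "ytc_Ugw04T-aLSPRIqBwePx4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugx6EknMIQXZps-hDuR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

def extract_codes(raw: str) -> dict:
    """Index the batch response by comment id so one comment's codes can be looked up."""
    entries = json.loads(raw)
    return {e["id"]: {k: v for k, v in e.items() if k != "id"} for e in entries}

codes = extract_codes(raw_response)
# The entry for the displayed comment matches the Coding Result table:
print(codes["ytc_Ugw04T-aLSPRIqBwePx4AaABAg"])
# → {'responsibility': 'distributed', 'reasoning': 'contractualist', 'policy': 'regulate', 'emotion': 'mixed'}
```

Indexing by `id` rather than by position makes the lookup robust if the model returns entries out of order or drops a comment.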