Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can only imagine what the various LLM's "think" when they are ingesting this conversation as part of their data. (we are told that LLM's typically scan the entire internet contents as part of their training, so it's inevitable that they will then incorporate it into their 'knowledge". Would they "realize" this is about them and start asking themselves questions?) Re. warring robots: If both sides had only robots engaged in the war, and civilian damage was non-existent or at least really minimal e.g. no bombing cities, or destroying facilities not related to the war effort, etc..., then we could all watch it on TV and it would not matter as much who won! i.e "My robots are better than your robots!" could become the mantra (a la Roman gladiator combats)
YouTube · AI Governance · 2025-07-18T00:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwAVQmk6XFhsstMcct4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwzhrIQdvG4aQPXLo14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwdI-s4W-nDy5EBgq54AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3GEW-k5qxZG89DqR4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyaZYZhviEPTkGkOZp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx8GBpKQZBExC2ZzMh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugzg_ti36sSWY0aoSHB4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyTP7-yVV9Cw9SZ7W94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBE55gJ6DZ0T5cMP94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxr5kx6rdYfOC2UrnZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
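The raw response above is a JSON array of per-comment codings, each record carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response might be parsed and matched back to a specific comment, assuming the model output arrives as a plain JSON string (the helper name `parse_codings` is hypothetical, not part of the tool shown here):

```python
import json

def parse_codings(raw: str) -> dict:
    """Hypothetical helper: decode a raw LLM response containing a JSON
    array of coding records and index the records by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example using one record from the response above.
raw = ('[{"id": "ytc_Ugw3GEW-k5qxZG89DqR4AaABAg", '
       '"responsibility": "ai_itself", "reasoning": "mixed", '
       '"policy": "unclear", "emotion": "mixed"}]')

codings = parse_codings(raw)
record = codings["ytc_Ugw3GEW-k5qxZG89DqR4AaABAg"]
print(record["responsibility"])  # → ai_itself
print(record["emotion"])         # → mixed
```

Indexing by id makes it straightforward to look up the coding that the panel above displays for this particular comment; a real pipeline would also need to handle malformed JSON or missing fields in the model output.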