Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am a science fiction author. Maybe this makes my view on this more open? I don't know. I do not see the imminent danger of AI being able to kill all life or at least humanity. At least not by a direct attack with that intent. I rather see humanity dying off, making ground for our next evolutionary step: being artificial. As soon as we can create sentient artificial beings, we create a nearly immortal version of ourselves. There is no us versus AI but AI simply replacing our less capable version. We are building artificial humans and soon will see that being artificial has advantages and our view on AI will shift. From a possible danger to an attractive partner. And when humans will eventually select an artificial partner over a human one, evolution will take care of the rest. We slowly become extinct as meatbags but will live on as AI people. And I do not see anything bad with that and humanity will then become able to colonise the galaxy, spread through the cosmos, become unbound of inhabitable planets and live for eternities.
youtube AI Governance 2025-08-26T15:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxXhl6zwFWdzgjVC5t4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyHaFFXIDEqzZVhR9R4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyU3Lhv2obRVScJ5TR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnXVPazHGaT2x-94R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgygUKG5-ctjBhVKa694AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
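The raw response is a JSON array of coding records keyed by comment id, so recovering the table above from it is a simple lookup. A minimal sketch, assuming the field names and ids shown in the example output (the `coding_for` helper is hypothetical, not part of the tool):

```python
import json

# Raw LLM response, shortened to one record from the array above.
raw = '''
[{"id": "ytc_UgyU3Lhv2obRVScJ5TR4AaABAg",
  "responsibility": "none",
  "reasoning": "consequentialist",
  "policy": "none",
  "emotion": "approval"}]
'''

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding record matching comment_id, or an empty dict."""
    records = json.loads(raw_json)
    return next((r for r in records if r["id"] == comment_id), {})

record = coding_for(raw, "ytc_UgyU3Lhv2obRVScJ5TR4AaABAg")
print(record["emotion"])  # prints "approval", matching the table above
```

The same lookup applied against the full five-record array would reproduce the Coding Result table for any of the coded comments.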