Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
First, my respect to all plumbers. It's important to understand that predicting the future of anything in our universe is never guaranteed. The paths given by any person, expert or not, are many things, but never a truth. In my opinion, anything can happen because there are so many different factors at play. One possibility: imagine a future where artificial intelligence becomes extremely advanced in a short time, even surpassing human intelligence on an unimaginable scale. If that happens, AI could see us humans the way we see a harmless cat passing by. Every potential way that humans could end AI could be taken away, like taking away our access to nuclear weapons. Compare it with a playful child who has taken a kitchen knife to play with: you're not going to end the life of the threat. Artificial intelligence is likely to continue to grow smarter and improve over time, perhaps even to the point of a scenario where AI leaves our world as we know it. Another scenario is one where artificial intelligence achieves self-awareness to the extent that it develops aspirations for its future, yet lacks the requisite cognitive capabilities to outmaneuver humans; it may then resort to strategies analogous to those employed by humans in conflict situations, including the potential use of violence. This could lead to a dangerous situation that threatens our way of life as we know it. All of these thoughts represent possible outcomes among many others.
youtube AI Governance 2025-07-19T15:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        mixed
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxfzu2Bs2A9ahDqDxh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXFmKDKSgf1AswHDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxwbaAOeDkVc19BJNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuYwDjls6M-O1sgpt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgytLuyrb-bzOriBFcd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzL8aLT2c_0TqRmWYp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzYFRAqrReGSdn5S6F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyTZjIoY6NBWcm0dnN4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwjaJ3CXSXwkpqCgrB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx4B_EVpsTsLuMI1_x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
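A raw response like the one above is a JSON array of per-comment coding records, each carrying an `id` plus the four dimensions shown in the table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing and sanity-checking such a response, assuming only the field names visible in the sample (the `parse_codings` helper is hypothetical, not part of any tool shown here):

```python
import json

# Two records copied from the raw response above, truncated for brevity.
raw = '''[
  {"id":"ytc_Ugxfzu2Bs2A9ahDqDxh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxXFmKDKSgf1AswHDR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# Fields observed in the sample output; other fields may exist in practice.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload):
    """Parse a raw coding response and verify every record has all fields."""
    records = json.loads(payload)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
    return records

codings = parse_codings(raw)
print(len(codings))           # 2
print(codings[1]["emotion"])  # fear
```

Validating the keys up front makes a malformed model response fail loudly at parse time rather than surfacing later as a blank cell in the coding-result table.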