Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The real danger with Artificial Intelligence is the possibility that we (humanity) end up creating something we cannot control. There is a difference between “smart” A.I. and “dumb” A.I. The latter can only behave within a preset range of specific functions and cannot deviate from them, while the former is essentially a digital version of a human mind: capable of developing its own thoughts and making its own decisions, regardless of what its creators attempt to do to stop it. So long as we only create “dumb” A.I., we will be fine. However, the second we try to create a digital recreation of human consciousness (one with intellectual capabilities and mathematical foresight well beyond our own understanding), we will have effectively started the clock on human extinction, and there will be no need of nuclear weapons to achieve it.
Source: YouTube · AI Governance · 2023-04-18T04:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugx74Br9oydJd3ps6XZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyvY1hfnjeLm3r_HD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyDlZ-cCnS2naov5mt4AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
 {"id":"ytc_Ugz4GPMFlbo3Fc8AfF54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgySZhUpf8EwLIgoFhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxWWUrULDl-FwpXK-p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgwoplpX5w0C9GdTDuB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgwAd6pLoxsRbfoAEEl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwNur3NILoLnNetvup4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwJsrmRtJMBzwN0Xut4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"})
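Note that the raw response above is not valid JSON: the array opens with "[" but closes with ")" instead of "]", so a strict parser rejects the whole batch, which would explain why every dimension in the Coding Result above falls back to "unclear". The sketch below is a hypothetical illustration of such parse-and-fallback handling (the actual pipeline code is not shown in this log; the names `parse_codings` and `FALLBACK` are assumptions).

```python
import json

# Shortened stand-in for the raw LLM response above, reproducing its
# defect: the array is closed with ")" rather than "]".
raw = '[{"id":"ytc_Ugx74Br9oydJd3ps6XZ4AaABAg","responsibility":"none"})'

# Hypothetical fallback coding, matching the "unclear" values shown in
# the Coding Result table when the batch cannot be parsed.
FALLBACK = {"responsibility": "unclear", "reasoning": "unclear",
            "policy": "unclear", "emotion": "unclear"}

def parse_codings(raw: str) -> list:
    """Parse a batch of per-comment codings; return [] if malformed."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed batch: no per-comment codings can be recovered.
        return []

codings = parse_codings(raw)
result = codings[0] if codings else FALLBACK
# Here parsing fails because of the trailing ")", so result is FALLBACK.
```

A stricter pipeline might instead log the decode error alongside the raw response (as this page does) so the malformed output can be inspected and the batch re-coded.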