Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"We don't know how these things work, and therefor they are dangerous." Where have I heard that argument before? Here's the 'truth'. You can absolutely, right now, go post some misinformation created by ChatGPT onto Twitter, and I can, right now, task my 'Agent' with 1) Finding sources of credibility. 2) Cross-referencing new posts with those sources. And 3) Add to (or subtract from) the list of 'Credible Sources' for any given domain. The real solution is to leverage the new tool to combat the new tool. Within the decade, any regular human will be subject to such an amazing amount of disinformation and wealth disparity that they may very well be unable to survive, much like the Amish can't survive in the 'modern' world. We did, effectively, the same thing with cell-phones and internet, insofar that governments now subsidize access to both for their poorest members, recognizing that they are required for modern ways of life. It'll just be the same thing, again. AI isn't a new paradigm, except that it now allows multiplication of labor over the intellectual domains, which any reasonable person will learn to leverage to adapt to the new environment. It's just the same old story, and once you're old enough to have seen a few paradigm shifts, (and/or study history) you get a bit jaded with people stifling the proliferation of useful new tools, just because "This is mystery... Mystery is scary!" Can these things do tremendous amounts of harm? Yes. Can they do tremendous amounts of Good? Also Yes. We're forgetting that the tool is available once the tool is available, and can/*must* be used to adapt to the new world the new tool brings.
youtube · AI Governance · 2023-05-03T01:2…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgyGaNLKGKyDbqqSiVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgybMPlwirJDAcMezQB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx70EJUl1h339mE4X54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzZmnuLrU0WWSh6Fqt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgzipA-0RzB7UJVYp2B4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyllDXMMU4_nGTtcnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzuhx-UOKG9Pbz7NE94AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyYYdAD-jv0UkBWeJ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugz-rB4ShicZWQ0oe_F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw_7EfV_GZr4wsEJYV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"})