Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A.I. fakery is the biggest threat to SOCIAL security. Having already developed a dependence on remote information technology, can society function if a hard dialectic emerges with absolute blind trust at one pole and complete distrust in anything you don't see with your own eyes or hear with your own ears in person at the opposite pole? I have strong doubts, primarily due to pacing issues. We're pretty adaptive, but without time to consider, we tend to adopt some catastrophically maladaptive solutions and then teach our grandchildren how wrong we were. The threat deep fakes represent is that accurately parsing information becomes impossible to achieve within any useful timeframe, rendering remote information exchange practically useless. As the tech smooths and advances, even simple remote communication can't be trusted. Was that really your mom you just fought with on Skype, or is some bad actor using deep fake tech to mess with you? See the problem? The social upheaval not only possible, but probable as the result of this technology proliferating is on par with a supervolcano eruption, civilization terminating. The scarcely concealed corruption of our political class means legislation is pretty mouthnoise which doesn't work as advertised, ever, so don't expect that to build trust.
youtube 2023-06-21T18:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgzVAveI07Rohsz5fCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyZ7-TpHHutLJ17CT94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugyiw-m542KdOcTGal14AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzeWBt--4Ex1eQ3Ab94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugwk2YOk69IQME1brBJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw8atH1JkkELxc7kCF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzNp9H1TskPGMXt3yZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwA1fQobUxl2uEwxtt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugz4Rgk_1eeitlnhPgF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugw4dAn8khnKEpUV8Ml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]
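A raw response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal illustration, not the pipeline's actual code; the allowed category sets are inferred only from the values visible in this record (the full codebook may define more), and `validate_codings` is a hypothetical helper name.

```python
import json

# Category sets inferred from the codings shown above (assumption:
# the real codebook may contain additional allowed values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "user", "government"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"indifference", "fear", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose four
    coding dimensions all carry a recognized value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_UgyZ7-TpHHutLJ17CT94AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
print(len(validate_codings(raw)))  # 1
```

Dropping (rather than repairing) out-of-vocabulary records keeps the stored dataset clean; a real pipeline might instead flag them for manual re-coding.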