Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Had just finished the Netflix Series 'Extant' and I feel like that show is, as they call in the this podcast, 1 of many timelines, that could happen with the overtaking of Ai. Where does the line of separation stand? Where can this line be moved? Where does the use of AI in wars, control, start to overstep our rights? Whos controlling them? The people jn charge of controlling them, what are their agendas? What if they switch agendas? What if the people in charge of the future tech do not have the world of people's best interest in mind? What will it look like for people who will choose to live off grid? What will it look like for people who do not consent and will they be granted their right to choose to integrate or not? If there is a way of maintaining the line of separation between humanity, humane and morals from the integration of advanced technology, who will be in charge of this, & how can they maintain humanity best interest in mind? If it is used for world dominance and some 1984 type shit, how will people still have a right to be in their own homes, having the right to chose, yet having AI forcing rules that may become? From complete history to every movie made, there has always been the person or power whom wants more power, has evil agendas, uses the power at hand for simply their own advantages, why wouldn't it repeat itself in these future timelines? If a small group of people will be in charge of AI machinery and bots, who keeps them in check? How does the world decide whats best for the world? These are all deep questions and just answering or... finding out the answer to simply ONE of these questions, changes the outcome of every single next questions and so fourth.
youtube AI Governance 2025-09-19T18:4…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_Ugwmi4XKCFQ7zUuHKt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUt8F0sc8wkBog5_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzY7K9aBsAXMhrb0Vt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmSpN1PxdJosw9qIp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmLg0_F3HIyoXfXE14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzryg3nA2UklPysaTZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz9WPSudM0e3tXpRBR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTjR6vfs5l9w8TiJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZauHZbk9Cu05p7JB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
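A minimal sketch of how a batch response in this shape can be parsed back into per-comment codes. It assumes the JSON array format shown above (a list of objects with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys); the variable names are illustrative, and the snippet is truncated to two of the ten records for brevity:

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Truncated here to two records; the real response holds all ten.
raw = """[
  {"id": "ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwmi4XKCFQ7zUuHKt54AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

# Parse the array and index the records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record for the comment displayed above.
coded = by_id["ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg"]
print(coded["responsibility"], coded["policy"])  # distributed regulate
```

The record with id `ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg` is the one whose values (distributed / consequentialist / regulate / fear) appear in the Coding Result table above.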