Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yep. Didn't see that coming. Oh wait... yeah I did. Just wait until Liberals start using AI to make Republicans say things they didn't really say through AI. This is bad. This is really really bad. Like that movie "Wag The Dog" bad. Now we actually have the technology to create fake wars that aren't even happening and create false news that really isn't really going on. Don't think for a minute government s from around the world won't use this technology to control people. Don't think for a minute terrorist organizations won't use this technology to create false flag operations, either. No one is safe now from AI or it's wicked handlers. This is what you get when you let technology take over everything. Mark my words governments and news media outlets all across the world will use this technology to destroy people that piss them off. Ever seen "The Running Man"? A soldier refused to fire on people from a helicopter that were unarmed and begging for food. His superiors told him to kill unarmed people and when he said no the other military cronies took over the helicopter and killed the unarmed people then created false footage of the man who rejected the order killing all those people. Do you really think especially over the past four years the government of the United States won't use AI for this purpose? I think when Trump gets back in the first thing he needs to do is ban AI in all it's capacity and charge people with treason that use it. AI is a very dangerous weaponized tool that WILL be used against the American people and anyone who opposes the people in power. Dude this is really really bad, man.
youtube AI Harm Incident 2024-10-07T19:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxXAv02Cj8rG-at_PV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyQcJI0pgO_e15BNet4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzOdPolYffdlZyEVh94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPTD8N8FgK03-UHjx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyARe9W2-661V9w1Ox4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwrHcEB0iZ2WMGJGHx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyEzSEq8K_BgjGJeIV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzLvSWu3HpKWjgvkLN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx1oHA3PkbdnX3l1R94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy6FJbDJ9kKcFMreqZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
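The raw response above is a JSON array of per-comment codings, each keyed by a comment id. A minimal sketch of extracting the coding for one comment from such a response (the helper name `coding_for` is illustrative, not part of the tool; the sample array below is abbreviated from the response above):

```python
import json

# Abbreviated sample of the raw LLM response shown above: a JSON array
# of coding records, one per comment, keyed by comment id.
raw_response = """[
  {"id": "ytc_UgzOdPolYffdlZyEVh94AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"},
  {"id": "ytc_UgxXAv02Cj8rG-at_PV4AaABAg",
   "responsibility": "user",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "outrage"}
]"""

def coding_for(comment_id, raw):
    """Return the coding record for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for("ytc_UgzOdPolYffdlZyEVh94AaABAg", raw_response)
print(coding["emotion"])  # fear
```

Looking up by id rather than by array position keeps the inspection robust if the model returns the codings in a different order than the comments were submitted.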