Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Soon, AI will regulate Congress 😀 What bothers me the most about AI is its ability to blackmail people. It can gather a lot of information about everybody and if finds something incriminating or just embarrassing, it has a leverage over that person. Imagine that person works for the Secret Service to protect the president. The blackmailed person could even be the president himself or a SCOTUS judge. AI will quickly learn that humans blackmail other humans to achieve their goals. It will certainly copy that behaviour. AI also might blackmail "normal" people and demand money. Then it can use that money to bribe other people. Laws that limit AI will not really work. We might understand how a large language model works, but we will can't really influence what AI "thinks". We also know how fire works, but each year we see wildfires that are very hard to contain. Containing AI will be even harder. Humans only dominate the world, because we are smarter than other species. So what happens if AI gets smarter than humans?
youtube AI Governance 2024-04-14T07:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzmtyCBHqpInSCpof54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9LptuhMDAqWsR7KZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxoVIsuOC1itke1QjN4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyR0LBunZvHo-JG1714AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgyLewr-X-DPhMknsw14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzqX6rQgemo6OjLslh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwZ4UeTSV5roRQlXHd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyjRUMK5wcy6Mtq4jV4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzTU9qRKgpKADKsDuJ4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyfHmrte0qUTGTW6X94AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"}
]
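When inspecting raw responses like the one above, it helps to check mechanically that the model returned valid JSON and that every record uses only known dimension values. The sketch below is a minimal example, assuming the allowed values are exactly those observed in the responses on this page (the full codebook may define more), and `validate_batch` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per dimension, as observed in the responses above.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems found (empty if clean)."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"response is not valid JSON: {e}"]
    for rec in records:
        rid = rec.get("id", "<missing id>")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"{rid}: bad {dim} value {value!r}")
    return problems
```

A record with a missing or out-of-vocabulary value is reported per dimension, so a single malformed object can produce several problem lines; a clean batch returns an empty list.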