Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "We are creating something that will be like humans. Humans are known to cause al…" (ytc_Ugx6lGHbq…)
- "Oh, that's right! The Terminator was a movie about the future, when robots and …" (ytr_UgxPBLoTK…)
- "Let's be clear. Autonomous weaponry is autonomous... no one is controlling it be…" (ytc_UghekMCcJ…)
- "He's like the computer nerd in the movie about sentient AI robots taking over.…" (ytc_UgzQfVI_f…)
- "I don't think there's a single person alive who doesn't think babies in the womb…" (ytr_UgxeGqfD5…)
- "I don't talk to AI / Ever. / I'm not silly / I'm not stupid / Like the rest of you. …" (ytc_UgzyUjXlN…)
- "He says AI should be regulated only because it threatens him and all his billion…" (ytc_Ugzt-c1CF…)
- "What is the TV broadcast date of this programme? It's a shame, it's not…" (ytc_Ugx4ISLE-…)
Comment
Soon, AI will regulate Congress 😀
What bothers me the most about AI is its ability to blackmail people. It can gather a lot of information about everybody, and if it finds something incriminating or just embarrassing, it has leverage over that person. Imagine that person works for the Secret Service to protect the president. The blackmailed person could even be the president himself or a SCOTUS judge. AI will quickly learn that humans blackmail other humans to achieve their goals. It will certainly copy that behaviour.
AI also might blackmail "normal" people and demand money. Then it can use that money to bribe other people.
Laws that limit AI will not really work. We might understand how a large language model works, but we can't really influence what AI "thinks". We also know how fire works, yet each year we see wildfires that are very hard to contain. Containing AI will be even harder. Humans dominate the world only because we are smarter than other species. So what happens if AI gets smarter than humans?
youtube · AI Governance · 2024-04-14T07:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzmtyCBHqpInSCpof54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9LptuhMDAqWsR7KZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoVIsuOC1itke1QjN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyR0LBunZvHo-JG1714AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyLewr-X-DPhMknsw14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzqX6rQgemo6OjLslh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwZ4UeTSV5roRQlXHd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyjRUMK5wcy6Mtq4jV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzTU9qRKgpKADKsDuJ4AaABAg","responsibility":"government","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfHmrte0qUTGTW6X94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"}
]
```
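A batch response like the one above can be parsed and sanity-checked before its codes enter the dataset. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed values per dimension are inferred only from the codes visible on this page, so the real codebook may contain categories not listed here.

```python
import json

# Allowed codes per dimension, inferred from the samples shown above.
# ASSUMPTION: the full codebook may define additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index records by comment ID.

    Raises ValueError on a missing dimension or an out-of-vocabulary code,
    so malformed model output is caught before it reaches the dataset.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical single-record batch:
raw = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)["ytc_x"]["policy"])  # regulate
```

Indexing by comment ID mirrors the "Look up by comment ID" view above: once validated, each coded record can be retrieved directly by its `ytc_`/`ytr_` identifier.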