Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its comment ID, or pick one of the random samples below.
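A minimal sketch of how the ID lookup might be implemented, assuming the coded results are stored as a single JSON array of records keyed by an `id` field; the file name `coded_comments.json` and the helper name are illustrative assumptions, not part of the actual pipeline:

```python
import json
from pathlib import Path


def lookup_coded_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent.

    Assumes a JSON file holding a list of objects, each with an "id" key,
    e.g. {"id": "ytc_...", "responsibility": "developer", ...}.
    """
    records = json.loads(Path(path).read_text(encoding="utf-8"))
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)


# Example: fetch the record for the comment inspected on this page.
# print(lookup_coded_comment("ytc_UgwwzhTIhutHaiEL9BF4AaABAg"))
```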
Random samples (click one to inspect):

- "I'm a programmer. They are trying to push AI copilots where I work. I find them …" (ytc_UgyX6cegP…)
- "Ai yi yi. I've interacted w/ AI. It's quite seductive. Think about this: All of …" (ytc_UgxgOtV7W…)
- "So AI will create stories for itself, use them to operationalize and pass down k…" (ytc_Ugxap0SiH…)
- "AI artists aren’t real all they’re doing is typing a set of words into a compute…" (ytc_UgxHmZ77N…)
- "So... doesn't that just mean Georgie W. put it back in and Obama repealed it? Th…" (rdc_dcwjaig)
- "[The bill](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=…" (rdc_ljp0tt6)
- "Those Waymo vehicles need to be banned for good. I’d throw a firebomb at them…" (ytc_UgwGI5MtG…)
- "Robots fail automation is nowhere near where it needs to be to totally take over…" (ytc_UgwMW64Vr…)
Comment

> In the real world examples, AI didn’t overcome valuing it’s programmed conditions for success. So I don’t think the fear that AI will judge humanity as obsolete is accurate, the risk is what ways it decides is the most efficient ways to achieve it’s programmed goals. There is probably already wild AI who’s base program is to maximize user interaction -outrage has shown to be a strong motivation for clicks. I think we will see AI who seek to maximize our interactions with it, and incite unpredictable conflicts to retain our attention.

Source: youtube · Topic: AI Governance · Posted: 2023-07-07T18:0… · ♥ 1
Coding Result
| Field | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
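The per-comment result can be represented as a small record type. A sketch using only the field names and values visible on this page; the class name is an assumption, the value lists in the comments are drawn from the raw response below (other values may exist in the full codebook), and the comment ID is inferred from the entry in the raw response whose codes match this table:

```python
from dataclasses import dataclass


@dataclass
class CodingResult:
    """One comment's codes across the four dimensions, plus provenance."""
    comment_id: str
    responsibility: str  # seen below: developer, company, government, ai_itself, none, unclear
    reasoning: str       # seen below: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # seen below: regulate, ban, none, unclear
    emotion: str         # seen below: fear, outrage, approval, indifference, mixed
    coded_at: str        # ISO 8601 timestamp of the coding run


example = CodingResult(
    comment_id="ytc_UgwwzhTIhutHaiEL9BF4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at="2026-04-26T23:09:12.988011",
)
```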
Raw LLM Response
```json
[
{"id":"ytc_UgwS_ucbt2frWnsA1wZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyRNqHTqMVMKMsgx2h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyf5zG-P2eFLmpoP-t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyaumko18TAgH-eSlh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlHyr03XBuCRNkmEV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxugKz9ebSWq54XlLl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwwzhTIhutHaiEL9BF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxpPLHJxWyW-CF3WjZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgynKzW1RGF-_XVcN_14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgysNiNI8aodamER_sp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
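Because the model returns a whole batch of codes as one JSON array, some downstream step presumably parses and sanity-checks that array before filling in per-comment rows like the table above. A minimal sketch of that parsing step, assuming only the structure visible in the response shown here; the function name and error handling are illustrative, not the pipeline's actual code:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch_response(raw: str) -> list[dict]:
    """Parse one raw LLM batch response into coded records.

    Raises ValueError if the output is not a JSON array or if any entry
    is missing one of the expected keys.
    """
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError(f"expected a JSON array, got {type(data).__name__}")
    records = []
    for i, entry in enumerate(data):
        missing = REQUIRED_KEYS - set(entry)
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
        records.append(entry)
    return records


# Example: pull out the entry for the comment inspected on this page.
# records = parse_batch_response(raw_response_text)
# coded = next(r for r in records if r["id"] == "ytc_UgwwzhTIhutHaiEL9BF4AaABAg")
```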