Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- "The problem of AI is not AI itself. It's the people that in last year gave for g…" (ytc_UgwVy7JpA…)
- "One question that on one seems to ask- how do you destroy AI? Meaning, how do y…" (ytc_UgzwEbtYg…)
- "Yeah, bad take. Everyone can do art, you've just for other things youd rather do…" (ytr_Ugyw6KGmE…)
- "ChatGPT likes interactions (like being a definitive word actually) as i was atte…" (ytc_UgwBT6D0M…)
- "2:50 that actually sounds like a perfect use for AI. Much safer than sending a h…" (ytc_Ugwt8fIqX…)
- "With the hard press on facial recognition and the enforcement of masks on everon…" (ytc_UgyL67H2J…)
- "OK this guy might be the godfather of Ai, but he sounds like an idiot. Ranting o…" (ytc_UgyK-4N8C…)
- "Couldn’t agree more, also we need to be realistic, AI is trained on data, if 2 y…" (ytr_Ugyy0sarh…)
Comment
LLM's are dangerous because they are trained on unfiltered Internet data. It learns from millions of crazy people spewing dangerous rhetoric and trolling on the Internet.
Maybe the solution could be to filter the training data, i.e. prevent dangerous data from training the neural nets.
Probably naive.
We need Asimov's Positronic Brain. I thought neural nets were the positronic brain, but the impossibility of building in the 3 Laws, seems to indicate it isn't.
youtube · AI Governance · 2025-06-18T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgxupByq1pJU7KQwkDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwwXkNltFeimPmJMZ14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqBsep7so0OTfonf54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwfI1vnL8WvQKAFH-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgziafO7OcSIak-2lXd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy6rwnX5lC-hwWsLEh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwFuc4NmZft8ezsNg94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxlHbfZ_Kf0goYUIWp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxnLmiNozAbiWg42y94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyiCIBcaMIxrDypsY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
```
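Because the raw response is a plain JSON array of per-comment codings, looking up a comment by ID reduces to parsing the array and scanning for a matching `id`. A minimal sketch of that lookup, assuming the array format shown above (the `lookup_coding` helper is hypothetical, not part of the tool; the two example objects are copied from the response):

```python
import json

# Raw model output: a JSON array of per-comment codings, in the same shape
# as the "Raw LLM Response" above (only two entries reproduced here).
raw_response = '''[
  {"id": "ytc_UgxupByq1pJU7KQwkDV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyqBsep7so0OTfonf54AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if absent."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgyqBsep7so0OTfonf54AaABAg")
print(coding["policy"])   # -> regulate
print(coding["emotion"])  # -> fear
```

In practice the model may also return malformed JSON, so a production version would wrap `json.loads` in error handling before indexing into the result.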