Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Yeah, it's really great for searching, google is right to be afraid of it. LLMs … (ytr_UgzY7y8Uu…)
- Writers fighting against AI is basically a repeat of horseshoe makers fighting a… (ytc_UgxZPcrWR…)
- As a scientist I use AI to generate concept images for Dungeons and Dragons game… (ytc_UgzLktVZG…)
- It's definitely unethical, yes. But the shady "terms of conditions" on many soci… (ytc_UgwtBNoV1…)
- It wouldn’t have to be handwritten. Most colleges already have testing centers … (rdc_nu1no9c)
- There was a guy I went to school with who had to transfer because this one girl … (ytc_UgzbKjsZm…)
- I appreciate the tutorial on how to draw in your style! Now I, a person who dr… (ytc_UgzcmXaVw…)
- I hade never use photoshop 😅but were will ai take us now it creat fake models or… (ytc_UgzntmpPI…)
Comment

> The only way to regulate AI effectively is to create large governmental departments with deep human capacity to both monitor AI activity and judge them ethically. On the scale of the SEC in the US. Along with the power to speedily punish those using AI for unethical purposes.
>
> Write that on your wall. I'm not going to even bother saying "I told you so" when it's too late. Actually... No I will. AI has potential for good, but it has as much potential for horrifying uses. Ignore this, even at the potential cost of good and useful use cases, at your own peril.

youtube · AI Governance · 2023-06-14T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzuPDhA9V79EP9txKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9SzMldMdYG5AsvWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySYxFv5JnR4IYvE7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQDggEvaqwJX9mF9B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz0MvEK7j9cr1wIDUZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgylaSj0e1jv5fHJBy94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxt_hQ9CljHFLkb2OJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuDSygly46D3zQ9J14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgydMorUNyyhxtIopml4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyTm6tQTqUEk-xNyJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
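The raw response above is a JSON array with one coding per comment, keyed by the comment ID. A minimal sketch of how such a batch response could be parsed and indexed for the "look up by comment ID" view (the `index_codings` helper and the `DIMENSIONS` tuple are illustrative assumptions, not the tool's actual code; the dimension names are taken from the table above):

```python
import json

# Excerpt of a raw LLM batch response in the format shown above
# (two of the ten entries, copied verbatim from the response).
raw = '''[
  {"id": "ytc_Ugz0MvEK7j9cr1wIDUZ4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugz9SzMldMdYG5AsvWx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# The four coding dimensions used in the results table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse a batch response and key each coding by its comment ID."""
    entries = json.loads(raw_response)
    indexed = {}
    for entry in entries:
        # Skip malformed entries rather than failing the whole batch.
        if not all(dim in entry for dim in DIMENSIONS):
            continue
        indexed[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return indexed

by_id = index_codings(raw)
print(by_id["ytc_Ugz0MvEK7j9cr1wIDUZ4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what lets the inspector match a coded row (like the table above) back to the exact entry in the model's raw output, even when the batch returns codings in a different order than the comments were sent.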