Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Classrooms won't be needed, by then ai brainchips where knowledge is not forcefu…
ytr_UgyypPDVy…
AI in fiction: *Goes rogue and enslaves/kills humanity*
AI in reality: "Despite …
ytc_Ugx7nRJlg…
There is no diversity at all 😮 so problem one to humanity just creating another …
ytc_Ugx8diHLO…
I love it! Embrace technology! My company has gone into contract for projects we…
ytc_UgwXmcmvj…
Seethe. I'm a programmer. If AI takes my job then I wasn't a good enough program…
ytc_UgzR9TuUi…
How Will We Know When AI is Conscious?
~~~ "We"?
I know that there is not a si…
ytc_UgzLLs5wt…
I hate the general uses of AI it's all terrible and I especially loathe using it…
ytc_UgxpV-lji…
Need an art website that will not show users content until the make an account, …
ytc_UgzlUfcOO…
Comment
"AI is neither good nor bad. It is about how it is used" doesn't sound like a good argument against regulation. It sounds more like a case for it. Lead is neither good nor bad; it is also about how you use it. If you put it in gasoline it is a cheap and effective way to help engines run, but it turns out that it is very bad for human health. That was addressed with regulation. The thing to note here is that companies are motivated by their profits, which don't always align with human motivations like health. Given this, if we can identify that the technology can be both good and bad, then we can be pretty confident that perverse incentives will lead to it being used for profit in bad ways.
youtube · AI Governance · 2023-06-14T16:4… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzuPDhA9V79EP9txKx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9SzMldMdYG5AsvWx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgySYxFv5JnR4IYvE7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwQDggEvaqwJX9mF9B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz0MvEK7j9cr1wIDUZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgylaSj0e1jv5fHJBy94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxt_hQ9CljHFLkb2OJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzuDSygly46D3zQ9J14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgydMorUNyyhxtIopml4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxyTm6tQTqUEk-xNyJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
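The raw response is a JSON array of per-comment codes, and the dashboard supports looking a comment up by its ID. A minimal sketch of how that parse-validate-index step could work in Python, assuming the category vocabularies are those seen in this batch (the full codebook may contain values not shown here):

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "indifference", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codes by comment ID,
    rejecting any row with an out-of-vocabulary value."""
    rows = json.loads(raw)
    indexed = {}
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        indexed[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return indexed

# Look up one comment's codes by ID, e.g.:
# codes = index_codes(raw_response)["ytc_UgwQDggEvaqwJX9mF9B4AaABAg"]
```

Validating against a closed vocabulary at parse time is what makes a malformed or hallucinated model response fail loudly instead of silently polluting the coded dataset.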