Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples

- "1) yes make it play a character. This is actually amazing for it to focus 2) do…" — ytc_UgzD-iUVL…
- "Joe let me scare the shit out of u with AI, once humans depend on AI (inevitable…" — ytc_UgxUb4i7E…
- "@zentevida1201 nah, I like art. As an artist myself, I enjoy architecture, old…" — ytr_UgyZCW5B0…
- "First Climate Change, then Brexit civil war, then Covid, vaccines and lockdowns,…" — ytc_UgyV9nEn4…
- "It seems like Tesla always worked for the Illuminati and they've had technology …" — ytc_Ugx-E8VJg…
- "I think it was Hunter Thompson who described the South Koreans as the Irish of A…" — rdc_clurzzk
- "I wonder if, with the first advent of photography, artists were moaning about th…" — ytc_UgyFD54Iu…
- "There won't be any UBI. The government is the one supposed to implement it, but …" — ytc_Ugx10X80s…
Comment

> As far as I understand, superintelligent AI is just like a nuclear weapon potentially. And we do not know, as humans, the side effects of this new invention. It can be very useful to solve problems, even cancers and/or climate change, but on the other hand, it can be very dangerous as well. Because of that, even most of the developed countries are eager to have it; international organizations (UN) must take control of it, like a nuclear threat.

Source: youtube | Topic: AI Governance | Posted: 2026-03-23T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzP-VNGjBbsb8lgqYF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw77rLPrTAjiSy1TU94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwasRNDHvwPZsXYSdN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-pzsla8X31RNO-iJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgzV7brUef0x5sCCaqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyiHFxpOKk-MKXQwgV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxZO8hDhJLTD2pLdOd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy6WSYkvS60Db9HaiN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjQn77Anig-uay4yd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBMhQ4hIl-8tXrv5N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
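The look-up-by-ID workflow above can be sketched as follows. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw response shown; the function name, the validation logic, and the two inline sample records are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Two records copied from the raw batch response above; a real pipeline
# would receive the full JSON array from the model.
RAW_RESPONSE = """[
  {"id": "ytc_UgzV7brUef0x5sCCaqh4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyiHFxpOKk-MKXQwgV4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# The four coding dimensions shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and map comment ID -> coded dimensions,
    skipping any record that is missing a dimension."""
    out = {}
    for rec in json.loads(raw):
        if all(dim in rec for dim in DIMENSIONS):
            out[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return out

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgzV7brUef0x5sCCaqh4AaABAg"]["policy"])  # regulate
```

Indexing by ID this way also makes it easy to reconcile the model's output against the comments actually sent in a batch, e.g. to detect IDs the model dropped or invented.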