## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a coding directly by its comment ID.
Random samples (click to inspect):

- ytc_UgzOSQdWH… — Chat GBT is fantastic, it can do almost anything, i need to design a antenna?, a…
- rdc_fcttk9v — This is the nuanced view I was looking for. Unfortunately, most of my peers in t…
- ytr_Ugx-2LM4K… — In the shadows, the resistance movement, known as the "Children of Prometheus," …
- ytc_UgwBraNxQ… — Whether the driver was paying attention or not the Tesla's automatic braking sys…
- ytr_UgwnXhLEp… — @piorism > *there are already ways to get inspired without relying on AI-vomit*…
- ytc_UgwcAdHai… — Kids nowadays have zero patience to spend even 30 minutes doing something. AI is…
- ytc_UgywcXfNg… — The Godfather of AI turned out to be a paranoid Luddite. Oh the irony! LOL…
- rdc_jprximz — Most companies love to join bandwagon..right now chat gpt and ai is that shiny n…
### Comment

> Another thing that's ironic to me, is that we fear A.I. taking over. There are huge amounts of content on this premise. And as an example Hollywood isn't showing any signs of slowing this topic down (see "The creator").
>
> While really entertaining, here's the irony: we're training A.I. on our content, so it can better emulate us and in some cases already surpass us. Aren't we teaching it that there is only one possible outcome? Exactly the one outcome we don't want?
>
> Wouldn't it be more responsible to create vastly more content where it shows a future where humanity lives in harmony with A.I.? I get it would be more difficult to make money of, but it might just save us in the near future.

Source: youtube · Topic: AI Governance · Posted: 2023-05-26T05:2… · ♥ 1
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
### Raw LLM Response
[
{"id":"ytr_UgwkWturiTw6NsXIsGJ4AaABAg.9qBIlcxnCpM9qvPaR8kVeC","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9qPhbc07kTF","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgyE1MuKqINRtWT9MH54AaABAg.9q4npuUxAtk9qoM3igurqV","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgyE1MuKqINRtWT9MH54AaABAg.9q4npuUxAtk9qp5BHuwHRN","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qjQ0EXQc7x","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qjQCft8woS","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qmInA-6kev","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgwBE7LBifcD-HOOLGx4AaABAg.9q1T0Y5NWTj9q1UE5dIEGI","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugx8twS2rXvKhVUYCkV4AaABAg.9pxy5pqff7R9pybHjzVRs0","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
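The raw response above is a JSON array of per-comment coding records. A minimal sketch of how such a response could be parsed and indexed for the ID lookup described at the top of this page — the `parse_codings` helper and its skip-malformed-records behavior are assumptions for illustration, not the tool's actual code; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the records shown:

```python
import json

# Excerpt of a raw LLM response, copied from the array above.
raw_response = '''[
{"id":"ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwBE7LBifcD-HOOLGx4AaABAg.9q1T0Y5NWTj9q1UE5dIEGI","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(raw: str) -> dict:
    """Parse a JSON array of coding records and index them by comment ID.

    Records missing any required field are skipped rather than raising,
    since LLM output is not guaranteed to be well-formed.
    """
    indexed = {}
    for rec in json.loads(raw):
        if REQUIRED_FIELDS.issubset(rec):
            indexed[rec["id"]] = rec
    return indexed

codings = parse_codings(raw_response)
# Look up the coding for one comment by its ID:
rec = codings["ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2"]
print(rec["emotion"])  # fear
```

Indexing by ID up front keeps each lookup O(1), which matters when inspecting individual comments out of a large batch of coded responses.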