Raw LLM Responses
Inspect the exact model output for any coded comment. Records can be looked up directly by comment ID, or drawn from the random samples below; a scripted lookup is sketched after the list.
Random samples:

- `ytr_UgwVoTHVP…`: "I appreciate your input! It sounds like you might have some reservations about t…"
- `ytc_Ugxuu1ybI…`: "Yeah, Neil. Once AGI takes over and capitalists and governments replace their wh…"
- `ytc_UgzRDwunL…`: "As a severly disabled Artist I totally embrace the new tools, that ai systems pr…"
- `ytr_Ugwbe0eG9…`: "The only real danger of AI is that humans will use it to harm and exploit each o…"
- `ytc_UgyGBnfRd…`: "But can AI get bored? in the end Boredom is what make us dream, be creative ...c…"
- `ytr_Ugw1EFk7G…`: "Exactly, but good luck trying to convince the majority of the GOP though because…"
- `ytc_UgyTtKQyd…`: "Chatgpt Version 1.2025.210 answers If i ask what is the different bitween u and …"
- `rdc_m282djf`: "I don’t see how any company can rely on AI agents for anything of importance cur…"
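For scripted lookups outside the viewer, a minimal sketch, assuming the coded records are persisted as a JSON Lines log with one record per line; the file name `raw_responses.jsonl` and the helper `lookup_raw_response` are hypothetical, not part of this tool:

```python
import json

def lookup_raw_response(comment_id: str, log_path: str = "raw_responses.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes each line of the log is one JSON object carrying an "id" field,
    shaped like the records in the raw response further down this page.
    """
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record for the first ID in the raw response below.
print(lookup_raw_response("ytc_Ugx1d6a8OoiGdwJd0ll4AaABAg"))
```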
Selected comment (youtube, AI Governance, 2025-06-16T22:3…):

> This guy thinks there won't be anything for humans to do when AI will become super-intelligent. Why the hell wouldn't humans still need other humans when super-intelligence comes to be? In the future if we live off UBI, we'll have no choice but to focus on others, our circles, our communities, etc. He's assuming the super-intelligence will be so efficient it will replace our desire to live our lives.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
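One way to hold a coding result in code is a small typed record, sketched below. The label sets are only the values visible on this page; the full codebook may define more, so treat them as observed values, not the schema:

```python
from dataclasses import dataclass

# Labels observed on this page; the underlying codebook may be larger.
RESPONSIBILITY = {"none", "government", "ai_itself", "user", "developer"}
REASONING = {"virtue", "deontological", "consequentialist", "unclear"}
POLICY = {"none", "unclear", "regulate"}
EMOTION = {"approval", "outrage", "fear", "mixed", "resignation"}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_known(self) -> bool:
        """True if every dimension uses a label observed on this page."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```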
Raw LLM Response
[
{"id":"ytc_Ugx1d6a8OoiGdwJd0ll4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwI4cehGJLxyd074pN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBjaXQmI5aD0PbdQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwvsa9KsCHIFtY5pyR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjA4e-HIPU3QTqzy14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwo8sjRVb-SQ7x3bD14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZQFBwTlRLZhC8PfB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQRQpj2GnPjoOCogJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwEG5tHTahd13RkGA14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwpgl9whGKz17a2a954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
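A parsing sketch for a raw response like the one above, assuming the model returned a well-formed JSON array; in practice the `json.loads` call should sit behind error handling, since model output can be malformed. `parse_batch` and `EXPECTED_KEYS` are names introduced here for illustration:

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response: a JSON array of coded records."""
    records = json.loads(raw)
    for i, rec in enumerate(records):
        # Flag records missing or adding keys instead of silently passing them through.
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"record {i} has keys {sorted(rec)}")
    return records

# Usage, on the first record of the array shown above:
sample = ('[{"id":"ytc_Ugx1d6a8OoiGdwJd0ll4AaABAg","responsibility":"government",'
          '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
print(parse_batch(sample)[0]["emotion"])  # -> outrage
```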