Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI agents are like co-workers - no they’re not. They are like a first day intern…" (ytc_UgwOqMHwM…)
- "You fundamentally don't get it. The intelligence gap between you and ASI (Artifi…" (ytr_Ugw8qULQL…)
- "believe it or not, it _is_ an actual skill to get an AI to generate exactly what…" (ytr_UgymEcaL3…)
- "AI told me at my age it's just not worth it to be a full time employee [ softwar…" (ytc_UgzQl7qUF…)
- "Sony pictures needs an AI executive to tell them NO. Maybe we’ll get a better m…" (ytc_UgyfTpj0c…)
- "This episode was basicly made for me, love thinking about AI and the potential p…" (ytc_UggVxA8Xy…)
- "I agree with the person that said that treating the bots with decency was the wa…" (ytc_UgzLYlAVw…)
- "The end of this interview was a metaphor for what is to come. After 49 minutes o…" (ytc_UgyW1wdqc…)
Comment
Let‘s say an superintelligent AI will Control everything, have vast Knowledge about everything. And lets say nothing more to do in terms of finding purpose. (I suppose an super ai will try to find purpose). Won‘t it Face the same consequence as we humans have and the only Option left is finding purpose in death? Because everything Else will lose its meaning once achieved: Ruling over the Universe? Cool, and now there is nothing more to conquer. Unlocking Crazy amounts of Knowledge? Cool, but for what is it worth when you basically have unlimited time as an Individuum. Etc. Can be killing it Self be an Option? So in this thought Experiment we Are very much alike :).
I am very interested in your opinion and sry for the spelling its Not written on an English Keyboard:).
youtube
AI Governance
2025-07-26T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
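The four coding dimensions in the table above can be sketched as a simple record type. This is a minimal illustration, not the tool's actual schema; the example category labels in the comments are only the values observed in this output, not necessarily the full codebook:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: the four dimensions shown in the result table."""
    responsibility: str  # observed values include "ai_itself", "developer", "none"
    reasoning: str       # observed values include "consequentialist", "deontological", "virtue"
    policy: str          # observed values include "regulate", "liability", "industry_self", "unclear"
    emotion: str         # observed values include "mixed", "fear", "outrage", "approval"

# The coding of the comment shown above.
result = CodingResult(
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="mixed",
)
print(result.emotion)  # mixed
```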
Raw LLM Response
```json
[
{"id":"ytc_UgwjZEsmtGPnsr43_VF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyoI2wTvB-_hv4y2X14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEnf4tx0UvnXDW7at4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw6uEEgLhPdjlNjgV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZ_SAxeQpDAXyfhXx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxRBPHGl6iNLrfpS794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwvQ46fgH3Df_wtMyp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2h8SpngJicvo-RFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzRd90yGE0toWQesad4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugzk7G5nVdecB78OciR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
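A raw response like the one above is a JSON array of per-comment codings, so "look up by comment ID" amounts to parsing the array and indexing it by `id`. The sketch below shows one way to do that with two entries taken from the response above; the index structure is an assumption for illustration, not this tool's actual implementation:

```python
import json

# Two rows excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id":"ytc_Ugw2h8SpngJicvo-RFN4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzk7G5nVdecB78OciR4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]"""

# Index the batch by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coding for one comment, as the lookup box above does by ID.
match = codings["ytc_Ugzk7G5nVdecB78OciR4AaABAg"]
print(match["policy"], match["emotion"])  # unclear mixed
```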