Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific response by comment ID.
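For example, a stored response like the JSON array shown under "Raw LLM Response" below can be indexed by comment ID for direct lookup. A minimal sketch in Python, assuming responses are saved as JSON arrays; the file name `raw_responses.json` and the `load_codings` helper are hypothetical:

```python
import json

def load_codings(path="raw_responses.json"):
    """Index a stored raw LLM response (a JSON array of coded
    comments) by comment ID for direct lookup."""
    with open(path) as f:
        rows = json.load(f)
    return {row["id"]: row for row in rows}

# Hypothetical usage: fetch the coding for one comment by its ID.
codings = load_codings()
print(codings.get("ytc_UgwQayrU8X93wqp6ut14AaABAg"))
```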
Random samples (click any to inspect):

- "The short answer is that AGI isn't dangerous -- that's just marketing speak to g…" (ytr_Ugw4MUa6X…)
- "how many years did the robot spend in jail? any idea? plz tell if anybody knows…" (ytc_UgwyUfXwL…)
- "It has nothing to do with AI. Do better research. CNBC is struggling to explain …" (ytc_UgydCKBZm…)
- "The robot in the thumbnail is an animatronic made by Disney's Imagineers that wa…" (ytc_Ugwg3u9lU…)
- "Once AI does everything, wouldn’t all humans simply have different roles as engi…" (ytc_UgzkTMPNz…)
- "Take a course in psychology and a course where you read Shakespeare's works. An…" (ytr_UgyF5_r0n…)
- "Tracking due dates on spreadsheets was driving me insane. Omnely handles the con…" (ytc_Ugxjlerv-…)
- "This robot is definitely programmed to interact with humans and be non-threateni…" (ytc_UgzmApcWO…)
Comment

> My question is this. If you truly believe that we are living in a simulation right now, what does it matter to focus on A.I. safety? If this simulation ends for us in 5, 10, or 20 years because an AGI or ASI causes the end of humanity due to our lack of safety rails, it doesn't matter as there are still billions more simulations still running the game. If you truly believed in simulation theory and that we are living in a simulation at this moment, you wouldn't care if humanity ends soon as we can just start a new simulation again.

youtube · AI Governance · 2025-09-04T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
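Each dimension takes one of a small set of categorical labels. The sketch below validates a coded row against that schema; the allowed values are inferred only from the labels visible on this page, so the real codebook may be larger:

```python
# Allowed values inferred only from the labels visible on this page;
# the project's actual codebook may define more categories.
SCHEMA = {
    "responsibility": {"none", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"none"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate(row):
    """Return a dict of dimension -> offending value for any field
    outside the known schema; an empty dict means the row passed."""
    return {dim: row.get(dim) for dim, allowed in SCHEMA.items()
            if row.get(dim) not in allowed}
```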
Raw LLM Response
```json
[
  {"id":"ytc_UgwgF3q6yOVf5BJJjGJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxoaJNGNduwdK4uisx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgxpdwYW2cFitpJ2Wa94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgyPMhFb04rHugyM9Op4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxTO6teDXdFqv5oPLx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQayrU8X93wqp6ut14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyDt7bVdLJwoI2JAs54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgzkYJlwSx0qIvtIulx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyow1TmC7YPGnXyz2J4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwtPN9UbvQnG4E0VAJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
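The model codes a whole batch of comments in one JSON array; the row for `ytc_UgwQayrU8X93wqp6ut14AaABAg` matches the Coding Result shown above. Because batched coding can silently drop, duplicate, or invent IDs, a simple completeness check is worth running on every response. A sketch, assuming the list of requested IDs is kept alongside each call (`check_batch` is a hypothetical helper):

```python
import json
from collections import Counter

def check_batch(raw_json, requested_ids):
    """Compare a batch response against the IDs that were sent,
    flagging comments the model dropped, duplicated, or invented."""
    counts = Counter(row["id"] for row in json.loads(raw_json))
    requested = set(requested_ids)
    missing = sorted(requested - counts.keys())
    duplicated = sorted(i for i, n in counts.items() if n > 1)
    unexpected = sorted(counts.keys() - requested)
    return missing, duplicated, unexpected
```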