Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "How can we trust AI in the future with kids like him weaker in his thoughts ?…" (ytc_UgyDpnwUy…)
- "And why not create Senior versions for those of us left behind to navigate our a…" (ytc_UgzeWhmDa…)
- "Im sorry but I just do not believe that they don't use it intentionally knowing …" (rdc_jv6vtkz)
- "As if Terminator and Ian Malcolm weren't enough, what about the Faro Plague from…" (ytc_UgzrPOnXO…)
- "This would be the sensible option, an AI managed world where people have nothing…" (rdc_dt9jvr3)
- "Today on the Ezra Klein show, Ezra misunderstands that AI might want to kill him…" (ytc_UgwgVNJgS…)
- "I've been using a.i. as a singer, since I am unable to sing in the style I wante…" (ytc_UgzIbcaux…)
- "One idea to keep in mind is that you can use a cheap AI model to augment GPT-4/5…" (rdc_jhuh106)
Comment
If you have a button that would stop AI would you push it?
Not Yet!
And I think it summarizes why we are here. Curiosity of human kind and its willingness to explore in the cost of his life/others is the driver even for those who know how fast things are going and how it can go wrong. Will we stop at some point? probably not, until a Hiroshima, Nagasaki happens.
My personal feeling is that it is like atomic bomb, the nations around the world are racing to be the first AGI owner, and conquerors of the new world. If they believe that AGI is the new atomic bomb for next decades, why shouldn't they race to become the USA of the new world?
youtube · AI Governance · 2025-12-08T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
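The table above shows the five coded dimensions for a single comment. As a minimal sketch of what one coded record looks like in code (the `CodedComment` class name is an assumption; field names and values are taken from the table and the raw response below):

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment record, mirroring the five dimensions in the result table."""
    id: str
    responsibility: str  # e.g. "user", "company", "government", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "unclear"
    policy: str          # e.g. "regulate", "liability", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "mixed", "resignation", "approval"
    coded_at: str = ""   # ISO 8601 timestamp of when the coding was produced

# The record corresponding to the result table above (ID from the raw response).
row = CodedComment(
    id="ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg",
    responsibility="user",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
    coded_at="2026-04-26T23:09:12.988011",
)
```

The dimension values here are closed vocabularies; a real schema would likely use `Enum` types rather than bare strings to reject unexpected labels.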
Raw LLM Response
[
{"id":"ytc_Ugynh_fQr8Py9TTeH_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDEe_FsbS5LYRus_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxw9RkCBlAfPx-0DhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyCO915wRaliQBdML54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz7owOCZUptE7OEUF14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyqvQY3kETR62x-QS94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxCeN4oTysVN2eGBvd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNt7KZBbun9NX_byF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTbu1t6ZNyFih06MF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
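The raw response is a JSON array with one object per coded comment. A minimal sketch of how such a batch response could be parsed and validated before use (the helper name `parse_codes` is an assumption; the sample entries are copied verbatim from the response above):

```python
import json
from collections import Counter

# Three entries copied from the raw LLM response shown above.
raw = '''[
 {"id":"ytc_Ugynh_fQr8Py9TTeH_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgwDEe_FsbS5LYRus_94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx7Zl0wD-yUAgZ-kDN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a batch coding response, checking every record has all five keys."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing keys: {missing}")
    return records

codes = parse_codes(raw)
print(len(codes))                               # 3
print(Counter(r["reasoning"] for r in codes))   # tally of reasoning labels
```

Validating keys up front matters here because LLM output is not guaranteed to follow the requested schema; a malformed record should fail loudly rather than silently produce an incomplete coding row.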