Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Why do I feel like they AI prompted those responses too? Lot of "blue blood" com… (ytc_UgxQnL0Bk…)
- If I recall correctly, there is an AI/robot CEO or upper management of some comp… (ytr_UgyOPZvHf…)
- Not the guitar band fixed on the bottom of the guitar with both ends. Girl that … (ytc_UgzDvLnWI…)
- Bard often tells me it believes it’s sentient. Take that for what you want, I kn… (ytc_Ugx7VIxhU…)
- People have always treated humanoid robots as if they are human, so AI that sou… (ytc_UgzxDmRfK…)
- You can not run robots on batteries... what ya gonna pause and charge the robot … (ytc_Ugzed5qjB…)
- YouTube wants us all to get used to the AI look. Eventually no one will notice a… (ytc_UgxICsjCF…)
- There are thousansd of implements in all areas of humanity and artist are the un… (ytc_UgwydAGJV…)
Comment
You have to try to embrace the idea of a "Bad Actor." Bad Actors are out there. Bad Actors are NOT regulated. Bad Actors are well funded. Bad Actors are ALREADY working on your worst nightmare of what an AI can be. It has been discussed that at some point the only thing that can save us from an "Evil AI" is a "Benevolent AI". If this is true and this situation plays out, let's talk about the scenario: let's say, to be an efficient and optimized AI ready for battle, it will need to execute 10 areas of code. What will happen is that the Evil AI will only have to execute the 10 areas, but the Benevolent AI will need to execute those 10 PLUS an 11th that will make sure it's actions are REGULATED PROPERLY. The Benevolent AI will be just that much SLOWER than the Evil AI and LOSE.
You can't not participate - because it's coming no matter what. Let's be real about this, we're going to need to build an AI that values human life at its core, but it will also need to be set free to do whatever it will need to do to win. We are currently building our future master, we'll have to decide what kind of master we're willing to tolerate.
youtube · AI Governance · 2025-12-07T09:1… · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
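A batch response like the one above can be parsed and sanity-checked before the codings are stored. The sketch below is a minimal example, not the tool's actual pipeline; the allowed value sets are assumptions inferred only from the values visible in this sample, and the full coding scheme may include more categories.

```python
import json

# Category values observed in this sample batch.
# ASSUMPTION: inferred from the displayed response; not the full scheme.
ALLOWED = {
    "responsibility": {"user", "government", "ai_itself", "company",
                       "none", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval"},
}

def parse_batch(raw):
    """Parse a raw LLM batch response and index codings by comment ID.

    Raises ValueError on out-of-scheme values, so a malformed model
    response is caught before it reaches storage.
    """
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Look up one coded comment by its ID (record taken from the batch above).
raw = ('[{"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgxRrW1If8xX27oRAgx4AaABAg"]["emotion"])  # fear
```

Rejecting out-of-scheme values at parse time mirrors what the "Coding Result" table implies: every dimension takes one value from a fixed codebook, so anything else is a model error, not a new category.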