Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (one truncated comment per row, with its comment ID):
- According to this they don't jail people using the algorithm. They're just being… (ytc_Ugwo6IB9o…)
- No sh*t. The guy discovered the moon. This could have been "news" 10 years ago, … (ytc_UgxCY6OJp…)
- 3:02 i feel like at a surface level yeah, neither are very deep, but i much rath… (ytc_UgzPZTNmm…)
- AI can be trained too. It doesnt know anything. It only searches and analyses th… (ytc_UgxbNkweU…)
- What happens when the model is presented with far more data than it is able to m… (ytr_Ugwq1MXS9…)
- This is horses**t. The number of the beast is actually 616. AI would know this… (ytc_UgzM-TTwC…)
- Absolute facts. Demotivate and depress the populace, you'll have a constituency … (rdc_hm89y10)
- One person even tag the artist they took their artworks from and even boldly tol… (ytc_UgxOGZ_fC…)
Comment
Balancing the triad:
Answer from Google AI
The concept of balancing security, availability, and user-friendliness—often referred to as a "security triangle" or "trilemma"—highlights the inherent, interdependent trade-offs in system design. Increasing one, such as security, often requires sacrificing another, like user-friendliness, to achieve a stable, functional system.
Superintelligence safety needs compromise: as the security triad above highlights, you can't have a perfect security system.
Source: youtube | AI Governance | 2026-03-05T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwcjbrmMhitfiy5miF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhQ0_-CaT17ic6Y7F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgycJAPOq0ogNg6RUqJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxStoOsIccAzi5frkR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBpD01b2diMkU7bOx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCyp7UPyQtPcQp5K94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxGne3anG5tIVAxryF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7OWeFm8eTXjppRQ94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy9lTiYRi-uHDvkGnx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyBG3nOX1fOvdCuFdV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
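A raw batch response like this can be validated before it is written to the coding table. Below is a minimal Python sketch, assuming only the dimension names and category values visible in this dump (the real codebook may define additional categories, and the function name `parse_coded_batch` is hypothetical):

```python
import json

# Allowed categories per coding dimension, inferred from the values that
# appear in this dump. Assumption: the actual codebook may list more.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing IDs or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id"):
            raise ValueError("row missing comment id")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows

raw = (
    '[{"id":"ytc_UgwcjbrmMhitfiy5miF4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
)
coded = parse_coded_batch(raw)
print(coded[0]["emotion"])  # indifference
```

Validating at ingestion time catches the common failure mode where the model invents a category outside the codebook, so bad rows never reach the results table.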