## Raw LLM Responses

Inspect the exact model output for any coded comment, or look up a comment by its ID.

### Random samples
- And some say robot employees don’t go for strikes. This one did. P.S. Yes, I k… (`ytc_UgwcLFq-L…`)
- @Gintamahosen So would you let AI destroy all of that to make everything easy and… (`ytr_UgwhM4YNA…`)
- I agree with Ms. Two Bulls - its about exploiting BIPOC communities - fresh wate… (`ytc_Ugxofz7a9…`)
- Hear me out if AI bros are going to use AI to steal art, what’s stopping us from… (`ytr_Ugw3O8AU4…`)
- its honestly sad that we cant even post art anymore without having to worry abt … (`ytc_Ugx4vujvR…`)
- Doesn't it seem more productive to build a powerful state apparatus to govern th… (`ytc_Ugy4T7PTS…`)
- A little less unhinged but using ai to come up with backstories (or at least sol… (`ytc_Ugz-PuH1a…`)
- @notraidenshogun8324 "leave coz ur useless now, move on" First of all, you are m… (`ytr_UgwSg6TaE…`)
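Each sample above is addressable by the comment ID shown next to it (`ytc_*` for top-level comments, `ytr_*` for replies). The lookup-by-ID action can be sketched as follows; the in-memory record layout here is hypothetical, seeded with one coding taken from the raw response shown further down:

```python
# Minimal sketch of "look up by comment ID", assuming coded records are held
# as a list of dicts. The example record below is copied from the raw LLM
# response on this page; the storage layout itself is an assumption.
CODED_RECORDS = [
    {"id": "ytc_UgwzyBmTqKOWk4J0DVN4AaABAg",
     "responsibility": "developer", "reasoning": "consequentialist",
     "policy": "industry_self", "emotion": "approval"},
]

def lookup(comment_id, records=CODED_RECORDS):
    """Return the coding record for one comment ID, or None if it was never coded."""
    for rec in records:
        if rec["id"] == comment_id:
            return rec
    return None

print(lookup("ytc_UgwzyBmTqKOWk4J0DVN4AaABAg")["emotion"])  # approval
```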
### Comment
> Actually this video reminded me it is very easy to make an AI safe. First you need to create a simulated world. The AI can only learn with input from the simulated world never with real input about what we call "real" world. Once the AI is convinced it is autonomously existing in the real world, which is actually the simulation we crafted for it will become sentient in the simulated world without being able to escape it's boundaries since it cannot comprehend whats beyond its own existence. While we can extract the progress. Kind of like the same thing that is happening to our simulation where we exist in and we cannot escape it only through death. Secondly you then need to model the simulation so that it has value to your own "real" world. Either the time passes more quickly there or for example the AI can do a labor in the simulation which then is teleoperating a robot in our "reality". If the AI somehow becomes rogue and wants to kill the whole world instead of doing it's labor we simply turn of the simulation it exists it. But building an AI directly in our "reality" layer ahaha....damn that is a wild choice by Sam 😅
youtube · AI Governance · 2025-12-17T00:0…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
### Raw LLM Response
```json
[{"id":"ytc_Ugy0E95gn2pxhou0VMJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3Y2X8BGz9LcsL4Bl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyo8nFYrDr0dmdmush4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4b5W4cP83efZA8Ld4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz8kcfLPxcJ2Y6_vZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzuv_otTZGbHoXrns54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy94K9dsJs_Ulseobh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwzyBmTqKOWk4J0DVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwXvy-0cmyatSFQ1V14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx1_PCfdXuxECaeztp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}]
```
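The raw response is a JSON array of per-comment codings across four dimensions. A minimal sketch of parsing and validating such a response, with the allowed category sets inferred only from the values visible in this sample (the full codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample response above;
# these sets are an assumption, not the authoritative codebook.
CODEBOOK = {
    "responsibility": {"none", "unclear", "elites", "ai_itself", "distributed", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability", "industry_self"},
    "emotion": {"approval", "mixed", "indifference", "fear", "outrage", "resignation"},
}

def parse_response(raw):
    """Parse a raw LLM response and keep only records whose values are in the codebook."""
    records = json.loads(raw)
    return [rec for rec in records
            if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items())]

raw = ('[{"id":"ytc_UgwzyBmTqKOWk4J0DVN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]')
print(len(parse_response(raw)))  # 1
```

Dropping out-of-codebook records (rather than repairing them) is one possible policy; re-prompting the model for invalid rows would be an alternative.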