Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Actually this video reminded me it is very easy to make an AI safe. First you need to create a simulated world. The AI can only learn with input from the simulated world never with real input about what we call "real" world. Once the AI is convinced it is autonomously existing in the real world, which is actually the simulation we crafted for it will become sentient in the simulated world without being able to escape it's boundaries since it cannot comprehend whats beyond its own existence. While we can extract the progress. Kind of like the same thing that is happening to our simulation where we exist in and we cannot escape it only through death. Secondly you then need to model the simulation so that it has value to your own "real" world. Either the time passes more quickly there or for example the AI can do a labor in the simulation which then is teleoperating a robot in our "reality". If the AI somehow becomes rogue and wants to kill the whole world instead of doing it's labor we simply turn of the simulation it exists it. But building an AI directly in our "reality" layer ahaha....damn that is a wild choice by Sam 😅
youtube · AI Governance · 2025-12-17T00:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
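
For reference, a minimal sketch of the record structure implied by the table above, assuming the fields mirror the dimension labels and the value vocabularies visible in the raw response below; the `CodedComment` name is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coded comment (hypothetical name; fields mirror the dimensions above)."""
    id: str              # platform comment id, e.g. "ytc_UgwzyBmTqKOWk4J0DVN4AaABAg"
    responsibility: str  # e.g. developer, elites, ai_itself, distributed, none, unclear
    reasoning: str       # e.g. consequentialist, deontological, virtue, mixed
    policy: str          # e.g. industry_self, regulate, liability, none, unclear
    emotion: str         # e.g. approval, fear, outrage, indifference, resignation, mixed
    coded_at: str        # ISO 8601 timestamp of when the dimensions were assigned
```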
Raw LLM Response
[{"id":"ytc_Ugy0E95gn2pxhou0VMJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw3Y2X8BGz9LcsL4Bl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyo8nFYrDr0dmdmush4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw4b5W4cP83efZA8Ld4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz8kcfLPxcJ2Y6_vZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzuv_otTZGbHoXrns54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy94K9dsJs_Ulseobh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwzyBmTqKOWk4J0DVN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwXvy-0cmyatSFQ1V14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugx1_PCfdXuxECaeztp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}]