Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

"You can't argue whether AI is conscious or not until you have a definitive defin…" (ytc_UgxJ_15kQ…)
"One time I was in an gc call with my friends and I forgot my screen was still sh…" (ytc_Ugy_zG0c8…)
"If an AI Tesla can't tell a motorcycle from a car then how does MUSK plan for th…" (ytc_UgyyZK8EM…)
"Regardless of my views on Ai, (It needs regulating,) you sound like a petulant 1…" (ytc_Ugwn4ll1G…)
"i just want to know when's the first automated iron man coming out equipped with…" (ytc_UgyUQ4QIb…)
"@pezvonpez Presented without arguement. Because it's a machine? Because it's not…" (ytr_Ugz7m4gvz…)
"Thank you for your comment! The robot in the video is Sophia, a creation of the …" (ytr_UgwDtDu2p…)
"There is always hope even on the darkest thing. We needs to have few people to g…" (ytc_UgxnwPHXq…)
Comment
On how to solve the super intelligence safety problem....
1) Create the ruleset that "S1" super intelligence must follow to remain powered / fueled.
2) Create a secondary "S2" super intelligence that governs the main super intelligence. If S1 decisions are deemed dangerous or out of line, S2 shuts the power off to S1.
3) Create a third "S3" super intelligence that is not linked to S1 or S2 in anyway. No communication possible (inbound/outbound), a closed network. It reads the code visually from a monitor. Have this S3 watch over S1 and S2 giving humans red flags for activity where humans shut down S1 and S2. I would make this one the most advanced of the 3.
4) Create a backup intelligence that is like a back up generator. It takes over the simple tasks that are being relied on but it doesn't make any decisions. (this would never have been on a network to be altered by S1 or S2)
I think you can't rely on just 1 super intelligence because it can change it's code and/or do something so obscure that in the long-term it would result in the end of humanity. You would need to fight fire with fire. So a subset of AI that would analyze what the main one was doing with 0 incentive to allow betrayal. AI is going to happen so creating a defense or counter measure is the most important thing we can come up with. It might be rudimentary like setting explosives on all of our power grids before allowing SAI to go online. There are counter measures that even us dumb humans can come up with to prevent annihilation.
youtube
AI Governance
2025-09-04T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxgxK4DRaNGz_-hv8F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugza6TE03LwxCtsA06x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx9Rm4DCULnPPbhBIF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw_i8kGaeZcxY6cxAt4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzQAbY3befFHoBu9Dt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxDPrmX9dlwvTi3Q3x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugzav249mCmTPXaWV7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCE2Jr2IgfiaCpQ0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzjEUqcQxdh8Io9lPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyk4vN-CXkX7hatgZt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
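The raw response above is a JSON array in which each record carries a comment ID plus the four coded dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and sanity-checked before storage is below. The `ALLOWED` value sets are assumptions inferred only from the values that appear in this log, not a documented schema, and `parse_codings` is a hypothetical helper, not part of any pipeline described here.

```python
import json

# Assumed value sets, inferred from the codes visible in this log
# (not an official schema; extend as the real codebook dictates).
ALLOWED = {
    "responsibility": {"none", "unclear", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into {comment_id: codes},
    rejecting any value outside the assumed allowed sets."""
    codings = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        codings[comment_id] = codes
    return codings

# Sample record reproducing the coding shown in the result table above.
raw = """[
  {"id": "ytc_UgxDPrmX9dlwvTi3Q3x4AaABAg",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "indifference"}
]"""
codings = parse_codings(raw)
```

After parsing, `codings["ytc_UgxDPrmX9dlwvTi3Q3x4AaABAg"]` holds the same distributed/contractualist/regulate/indifference codes displayed in the Coding Result table for this comment.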