Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "As much as i enjoy playing with AI, i do prefer man-made art The soul put into …" (ytc_UgzzN87xS…)
- "some will say AI is good reference material for certain things or a great way to…" (ytc_UgyatMShI…)
- "I disagree. For several reasons. 1. Let's imagine we've all got AI built into o…" (ytc_Ugwij_STT…)
- "*From YouTube AI summary:* Hinton warns about the Volkswagen Effect, where AI m…" (ytc_UgxVwG0Co…)
- "I love how he accidentally reveals the game and says he's for people making mone…" (ytc_UgzPQWehJ…)
- "Two reasons: 1. Unlike chess, IT/Tech soaks up a lot of employment in many coun…" (rdc_kyzu6lp)
- "Concerns are real. AGI by 2027? no. LLM is not a stepping stone to AGI.…" (ytc_UgxUpO3fV…)
- "Never thought about machine learning and human bias. Always thought it will not …" (ytc_UgwdRhIPv…)
Comment
As it goes for simulation theory and AI there is a better viewpoint.
If we have the ability for unlimited incredibly cost effective world simulations using a quantum computer. And we populate this simulated world with autonomous beings. What would be the purpose and what would be the problems.
It seems highly logical that the problem would be that autonomous beings making egocentric decisions without care for others ALWAYS culminates in a TERMINAL world-simulation scenario.
Does that sound eerily like what this podcast is discussing..?
So from this PROBLEM we can deduce the purpose.
All of us within this world/simulation are here to LEARN about the impact of our choices upon others and ultimately upon ourselves, as these things are always circular ultimately.
Which inescapably is moral or has religious underpinnings.
Ergo.. it is our choices. We are here in this sandpit we call life to learn and grow from our choices.
Seems like this point has been debated and resolved a million times over by every sophisticated culture only for everyone to collectively shrug their shoulders and resume the work of engaging in this exceedingly persistent life we live.
My suggestion is to resolve that game theory and unconditional love for everyone is in everyone's best interest. Yes, we are likely in constant struggle to achieve it reliably... But there is no better course of action.
It is our best choice for the here and now and for the after...
youtube · AI Governance · 2025-12-08T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwkx4_FlQyJMnyjE4h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyX70Jd4GGYdOJCjhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwPv9hy2fIqIaMrIOh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBeJllSRorxCzq4td4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx7yy7TdFEJeYIRaOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXevjeaXBRxOUZ6cB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzTB_VlNwsKp-JtAKl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGb6cNlksMK1P8zQF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxMK5OXLCvu4MlZ2Kp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx6ZESh67yX0rHT4YB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
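A batch response in this shape can be parsed and validated before storage. The sketch below is a minimal, hypothetical example: the allowed values per dimension are inferred only from the labels visible above (the real codebook may define more categories), and `parse_coded_batch` is not part of any actual tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the labels visible
# in the table and raw response above. This is an assumption; the full
# codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "government"},
    "reasoning": {"virtue", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"approval", "fear", "indifference", "outrage", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response and index its records by comment ID.

    Raises ValueError if a record is missing a dimension or uses a value
    outside the allowed set, so malformed responses are caught before
    they are stored.
    """
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# One record from the response above, used as a self-contained example.
raw = (
    '[{"id":"ytc_UgyX70Jd4GGYdOJCjhp4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"}]'
)
coded = parse_coded_batch(raw)
print(coded["ytc_UgyX70Jd4GGYdOJCjhp4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set also catches the common failure mode where the model invents an unlisted label; such batches can then be re-prompted rather than silently stored.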