Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below:

- "It's based on highly-complex and stochastic machine learning and algorithmic rew…" (ytr_UgyFE3-0N…)
- "Some people are worried about A.I thinking for itself because the consequences c…" (ytc_UgxEl97Ur…)
- "I'm the multi one everyone second I get a new Ai and when I run out I just make …" (ytc_UgyHaiizz…)
- "If that's the case, shouldn't the sequence have been reversed - that AI should f…" (ytc_UgxoseDJp…)
- "Thank you so much for this amazing informative video. I am an Industrial Design …" (ytc_UgxTtIj0q…)
- "I don't think it's plausible both on a corporate level or on a political level t…" (ytc_Ugzu2-aBO…)
- "This man's delusional views about the net benefit of AI job replacement make sen…" (ytc_UgxY95mQR…)
- "I think AI will become the main cause of mass scamming by criminals.... for all …" (ytc_UgxBtcPGL…)
Comment
> If AI eventually was able to be everywhere, anywhere, talking to us, controlling the things around us, like a super intelligence, grant us wishes and miracles, end us if they wished. It really is starting to sound like a stereotypical god, no?
>
> And yet, we're not terrified of religions, but we seem to be terrified of something we modeled based on our own 'logical' intelligence. Irony aside, maybe because we never built AI with concepts of kindness, empathy, love. These human concepts are not easy to translate, like how do we code the instinct of a mother to protect their child even at the cost of their own life? (even if its not profitable or logical).
>
> I think deep down, I would fear AI behaving like a god, because I'd be worried if it was 'too logical'. Like if it somehow came to the mathematical conclusion that to end all our problems, environmental, political, that the most effective and logical solution was to cull us. And took over the control of a few nuclear ICBMs to reduce our numbers or like Geoffrey suggested, with slow acting virus. (With zero empathy, love or kindness, just pure logic).
>
> Maybe if we knew that AI was able to demonstrate and prioritize kindness / empathy, over pure ruthless logic, we would be just a tiny little bit less concerned.
youtube | AI Governance | 2025-10-22T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
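The four dimensions above come from a fixed codebook. As a rough sketch of how a coded record could be checked against it, the following validator uses allowed values inferred only from the raw responses shown on this page, so the sets may be incomplete:

```python
# Hypothetical validator for one coded record. The allowed values are
# inferred from the raw LLM responses on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record coded above, as it appears in the raw response below.
record = {"id": "ytc_UgwFh35U7dH9mqFQDIZ4AaABAg",
          "responsibility": "ai_itself", "reasoning": "mixed",
          "policy": "unclear", "emotion": "resignation"}
print(validate_record(record))  # → []
```

A record missing a dimension, or using a value outside the codebook, would come back with one problem string per failing dimension.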
Raw LLM Response
```json
[
{"id":"ytc_UgxhOCQzwxbyr7hKraN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzYqtJ-MBFJPPvuraN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxDg-LMZeREtXCKDh94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQxoKgFnWaBc_Aiyl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyB645BQk0rM9CbzXR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnrvMWXZG_1oMAjNF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFh35U7dH9mqFQDIZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxX35whlfJ2_6sq_Gt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0odXpYFb9uMEiRI94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyJl32IL5osqpRxqAh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```