Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
crab-cat 😅
that A.I. part with the car driving looked more like a terminators fi…
ytc_UgwFL6jvp…
Its the three options conundrum. You can make it fast, you can make it good, or …
ytc_UgwAs0SEC…
Bro the movie I robot coming to life. I pray Will Smith will slap the artificial…
ytc_UgzmNz7r2…
Yep AI can code, but can it maintain its own code and bug fix on its own ??…
ytc_UgzFe7AlK…
I love how when you talk to AI by typing in something it reads it all at once so…
ytc_UgwuSqk0b…
AI is only still a tool but it's not like any other technology man has created, …
ytc_UgzUOpbfb…
eventually, ai will destroy itself
if ai images keep getting created, then those…
ytc_Ugxyrek5-…
Why didn’t you say Sundar Pichai, now that’s the most dangerous man on Earth.
I…
ytc_Ugzh2yCiE…
Comment
I think one of things that tends to be overlooked in these conversations is the attribution of autonomy to AI. If we properly recreated how the human brain works, and if you are a naturalist it would be a sound conclusion that AI would be able to function autonomously like humans do. However, if you believe in a soul or if the the way AI neural network is not an accurate representation of human brain processing then such autonomy would be impossible. So I don’t put much validity in the idea that AI will ever be capable of “deciding” anything which seems a big part of the fear. However, all of the short term dangers are quite real and interesting. Good interview!
youtube · AI Governance · 2025-07-17T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
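A batch response like the one above can be checked before its codes are trusted. The sketch below is a minimal validator, assuming the allowed values per dimension are the ones visible in these responses; the actual codebook may define additional categories, and the `SCHEMA` names here are inferred, not authoritative.

```python
import json
from collections import Counter

# Allowed codes per dimension — inferred from the sample responses shown
# above (assumption: the real codebook may include more categories).
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
    return records

raw = (
    '[{"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"unclear","emotion":"indifference"}]'
)
batch = validate_batch(raw)
print(Counter(r["emotion"] for r in batch))  # tallies emotions across the batch
```

Running the validator on each batch before writing codes to the database catches both malformed JSON and off-schema labels, which LLMs occasionally emit.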