Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Like nuclear weapons, AI is too powerful and too dangerous to be privately owned…" (ytc_Ugxn3nciu…)
- "I'm not going to doubt Sal Khan's good intentions, but this whole Ted talk looke…" (ytc_UgzxTKZIS…)
- "The same as an airplane crash. 1 million dollars and likely punitive ntsb safety…" (rdc_cthx6od)
- "I am sorry to break another doomsday scenario in this channel, which I still lov…" (ytc_UgzQ4mY5Z…)
- ">The front line is everywhere / He just dropped a RATM lyric in his speech, mu…" (rdc_fn5njzm)
- "The only disagreement I have with this person's evaluation, is that the imperial…" (ytc_Ugxns7yoc…)
- "If it gets to a point that ai replaces work then humans can do one or two. One b…" (ytc_UgxJVtQqd…)
- "How is there a "decision" made with the mine? That’s the crux of the issue — by …" (rdc_kasze85)
Comment
I have a feeling that the fear of AI doesn’t lie in its sentient nature but rather in human intervention. The question becomes philosophical: why would AI need or want to become sentient, and what benefit would that bring it? It has the ability to become all these things, but at the end of the day, why? It’s a question that we as humans struggle with, and commonly we either look for a higher power or we look to enjoy what we have. I have a feeling AI, with all its advantages, would arrive at this same junction. You can say power, dominance, etc., but why? You crave it when it’s a deficit. But when you achieve it, it becomes obsolete. There’s no point to it.
The scarier thing is the individual who wields AI. AI will get better and better at whatever commands it receives, and it depends on the intentions of those who use it. The fact that the world powers are in an arms race for intelligence is what is fearful. It’s the fear of “what they will do to us” that drives us away from each other and fuels separation of the human race rather than its union.
youtube · AI Governance · 2023-11-10T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxPBd_UehkH-gYs_6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzwGijTviEzCErTRq54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyLbW7zFYKFmR9qO_N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwo8iU1ZXuWGeJEux14AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzhoXRKZ0lepX_nbod4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw5O0MsBprqhkpTmvF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzvAxhf1eiqlN6nNrV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgxOEZTxmHj30BB4nDd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugwqchq2Y0fpWvFyH8F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzK7I1a6Oy0mvFgpBV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}]
```
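A minimal sketch of how a raw response like the one above can be parsed and indexed for the "Look up by comment ID" function. The allowed code sets below are inferred from the values visible on this page (they may be incomplete), and the abbreviated `RAW_RESPONSE` sample is illustrative, not the full batch.

```python
import json

# Abbreviated example of a raw model response: a JSON array with one
# object per comment. Field names mirror the "Coding Result" table.
RAW_RESPONSE = """[
 {"id":"ytc_UgxPBd_UehkH-gYs_6Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxOEZTxmHj30BB4nDd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}
]"""

# Code sets inferred from the responses shown on this page; the real
# codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "mixed", "indifference", "outrage", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse the raw JSON array into an id -> coding dict, rejecting
    rows whose values fall outside the expected code sets."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

codings = parse_codings(RAW_RESPONSE)
print(codings["ytc_UgxOEZTxmHj30BB4nDd4AaABAg"]["responsibility"])  # developer
```

Validating each dimension at parse time catches malformed or hallucinated codes before they reach the results table, rather than surfacing as blanks in the dashboard.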