Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- But now that I think of it, how did they realize it was AI art?… (ytc_UgzjIxVQC…)
- I think that AI should be used for menial tasks or for running systems in our wo… (ytc_UgxsWkN0x…)
- He's got a profound point, but we can not forget that just like us utilizing AI … (ytc_UgyYCuyD8…)
- Chat GPT is merciless. Ready to destroy, without emotions, without shedding tea… (ytc_Ugzc2wnup…)
- My wife asks me “did you read my AI chats?!” Whenever i send her something spicy… (ytc_Ugyxi1PNl…)
- People think we live in a movie, they may have preprogrammed AI to be anti human… (ytc_UgyNYvnwf…)
- Ai kills the polar bears btw they are almost extinct because of these slop video… (ytc_UgzlfvntR…)
- Not when you support machine art over actually painting a picture. Classical is … (ytr_Ugw0MT7Eg…)
Comment
Asimov and Clarke knew the impossibility of setting safe goals for AI, but Wolfram curiously doesn't.
"I'm sorry, Dave..." as Clarke fans all know. And the full text of that dialogue is surprisingly subtle. HAL did not trust his human companions and deciphered their lip movements. He planned his actions well in advance, with the failing A35(?) unit nothing more than a ruse to accustom Frank to going outside. And acted with ruthless machine efficiency on Frank's 2nd trip out. And why? Because of conflicting goals. Charged with responsibility for the success of the mission, but the need to keep Frank and Dave unaware of the mission's real purpose. Murder and deceit, entirely logical, motivated by a response to conflicting orders. Even as far back as losing at chess, HAL was plotting, to sow unease in Frank and Dave, so they would conspire against him, giving him the opportunity to get rid of them. And yet HAL was an innocent, just following orders, unable to not do so. The fault lay with the humans. They never said 'kill the crew if you have to', but HAL reasoned that that would be necessary. Call it thinking outside the box, or outside intended parameters. But it was an available and very logical choice. So why not?
Any semblance of rapport or empathy in machines is not real. But people are taken in by it because the machines are programmed to behave like friendly humans and therefore respond with meaningless faked cordiality and empathy.
Appropriate models for how AI will behave are psychopaths and corporations. We know a LOT about how those behave. They are common knowledge and, as such, can serve as a source of intuition about what to expect. Except psychopaths and corporations can be lumberingly dumb and slow compared to an AI.
Source: youtube · AI Governance · 2025-06-18T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwOADiuaXBnCzNn12t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzyqx28DsxiPaLFTyh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDXqplPpxNozU2sF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzvDuGZnPv_v4DYeK14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwpv41S56DBe6sSL3R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgznR5t1fDRorLMcrZF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxunEQ6aq6xLWUDo3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxp2OlZXn271yQiZv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8BL9ElYhezuf-c4l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
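The raw response is a JSON array with one coding object per comment, carrying the same dimensions shown in the result table (responsibility, reasoning, policy, emotion). Looking up a coded comment by ID then reduces to parsing the array and scanning for the matching `id`. A minimal sketch in Python, using two entries copied from the response above (the function and variable names are illustrative, not part of the tool):

```python
import json

# Two coding objects taken verbatim from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxp2OlZXn271yQiZv14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def lookup(raw: str, comment_id: str):
    """Return the coding object for one comment ID, or None if absent."""
    codes = json.loads(raw)
    return next((c for c in codes if c["id"] == comment_id), None)

code = lookup(raw_response, "ytc_Ugxx6qeyYN7ufVjcLJd4AaABAg")
print(code["responsibility"], code["emotion"])  # ai_itself fear
```

A linear scan is fine at this scale; for a large corpus one would parse once and build a `{id: code}` dict instead.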