Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwKhHxmM…: DeepFake: moon landing and Mars landing. 😊 I’m more concerned about Privacy, res…
- ytr_Ugxyr16uX…: Dude he's giving an example of how AI as a tool that cuts down on man hours. He'…
- ytc_UgyZcT7Nk…: Lensa is legal because you aren't paying for stable diffusion. You're paying for…
- ytc_UggEr-OmI…: Me: so what do you want to do? / Robot: I WaNt To DeStRoY AlL HuMaNs! / Me: *GRABS…
- ytc_UgySSgsj-…: I rather generate a art with ai and try tracing it soi could learn a little bit…
- ytc_UgxTpAhur…: if you still write your code by hand, good luck keeping up! all the devs use ai.…
- ytc_Ugz9Qnih7…: So I said to my AI agent, you're my wife - I'm staying over in Vegas with the bo…
- ytc_UgzZ-vCuQ…: What a load! He clearly doesn't understand the art world or AI. He is basically …
Comment
There is zero evidence that consciousness resides in the human head or brain. Nothing, none, zero. To suggest that it does is a presumption is to reveal how one's own biases influence their "logic" and render it nothing but personal opinion. Wolfram who seems to have difficulty focusing on one subject at a time also seems inclined to believe that human replacement by AI isn't necessarily a bad thing. After this point in the conversation. So not only is this deeply offensive, in other words he supports the other team the nature of which is still a big question mark at the cost of the human species. Personally I have zero patience for people who purport to surrender their instinct to survive. I don't believe them, I don't believe him. I think it is performative. Besides, what place does that have in a discussion on AI risk, why not call it Human risk to AI. The whole thing was profoundly ridiculous despite the horsepower of these two thinkers. Nevertheless, it was still uncomfortably and annoyingly entertaining.
youtube
AI Governance
2025-10-29T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugzytbm32BmPyZWeuft4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxBO-wKvI2gMWMQXm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqKQ4Q2zAr2Pf3XpN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxd5GPgz0mc1vmWDml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz7du4ZZIu4g61tYPd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyFaHsdftdvaS601Lp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz5tlyVDY64cxGs0WB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyK2doOVquGPsMQeW14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwimqcLDJLMkOtSFeR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwJSuxSyyJkYu5zO7B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"})