Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- “@MidnightDeLorean I just did the exact same thing, but with the word “banana”. So…” — ytr_Ugz1rLfX5…
- “law-abiding citizen: gets recorded by ai surveillance cameras / non law-abiding c…” — ytr_UgxSOqZGV…
- “In other news, the sky is blue. People that are all gung ho about replacing ever…” — rdc_kieihwr
- “Ask ChatGPT about the 1973 Paper Queen of Australia that Political Parties creat…” — ytc_UgxXEYDnd…
- “I don’t have enough time to practice and life is too short man… let me enjoy my …” — ytr_UgxKoY7vQ…
- “I wonder if in the future when we advance significantly and have more sophistica…” — ytc_Ugx25VeFT…
- “This video made me feel like we're so much closer to what happened in Horizon: Z…” — ytc_UgxGZDkRa…
- “AI needs more than just code; it needs the freedom to build. While Europe drowns…” — ytc_UgzDblA3J…
Comment
This gives me no confidence in the ability of AI creators to control their creation. Yes, Geoffrey Hinton is now speaking up about the problems of his creation... But he seems to have come to this realization very slowly, and only after having made a lot of money selling his creation to others - without exercising scruples in discerning what they would then do with it. In essence, he is part of the problem, and no less culpable than anyone else involved. He's following the money - regardless of where it leads - and without ultimate regard as to whether the world is made better by it.
The notion of AI going rogue is not uncommon in the realm of science fiction - and indeed Isaac Asimov's laws of robotics (from "I, Robot") come to mind. And the failure of those laws being an AI that ultimately does harm - in its attempt to "protect humanity from themselves". Thus, exposing humanity to the worst kind of tyranny.
Whilst not quite like what Isaac Asimov imagined, modern AI is still pretty bad - in that humanity doesn't have ANY protections (as per the "1st or 2nd Law of Robotics"). AI is created to do the bidding of its master - and that is those that pay the money, and those that control the code. Essentially reducing humanity at large to sources of data - and its "customers" to people to be exploited. Essentially, the way AI is developed and used, and the exploitation of people's private data or intellectual property, is unethical (if not illegal). AI developers have a lot to answer for in the way they use data.
Ultimately, I don't think AI will succeed in replacing people or doing their jobs. As the arguments here focus solely on INTELLIGENCE - which is only a small part of what people do. AI can't care for people, express empathy, exercise judgement, or take legal responsibility. AI is highly dependent on people, and not in any way resilient as a true organism - humans are highly resilient and capable of creative adaptations - which an AI cannot do.... If there was a war between AI and people - the people would win.
youtube · AI Governance · 2025-06-20T11:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxG_bW-Ga44iH1nQdx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiwpxK51Qn7h6meiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDLvinVNtQGrtSEfF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyGJc4MatqIAwsv90t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxOOPFK7kjqEn4I_-R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxEx5cGAbGxaSINSht4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx6f9LsQrJTj4sR9694AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzReP_rguS7WN-Mfb54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzQ6jysPESiymlsXBd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgystHbHW3zDK-ZPitZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
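The raw response above is a JSON array with one object per comment, each carrying the four coded dimensions from the table (Responsibility, Reasoning, Policy, Emotion). As a minimal sketch of how such a batch could be parsed and looked up by comment ID — the function and variable names here are illustrative, not taken from the actual pipeline — using two entries from the array above as sample data:

```python
import json

# Sample batch response: two entries copied from the raw LLM response above.
raw_response = """
[
  {"id": "ytc_UgystHbHW3zDK-ZPitZ4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxG_bW-Ga44iH1nQdx4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"}
]
"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index the codings by comment ID.

    Entries without an "id" field are skipped; a dimension missing from
    an entry falls back to "unclear" (an assumption for this sketch).
    """
    records = json.loads(raw)
    index = {}
    for rec in records:
        if "id" not in rec:
            continue
        index[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return index


codings = index_codings(raw_response)
print(codings["ytc_UgystHbHW3zDK-ZPitZ4AaABAg"]["emotion"])  # outrage
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one parse of the batch, then constant-time lookups per comment.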