Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyltU64F…: "The real alignment problem isn't AI. It's 𝐮𝐬. 𝐆𝐫𝐞𝐞𝐝, 𝐞𝐠𝐨, and the race to be #…"
- ytc_UgxgZVanY…: "Very honest of Mr. G. Hinton to acknowledge his AI-fabricated bias when referenc…"
- ytc_UgxmXItID…: "ChatGPT said to me "im picturing it now" when i said to it "if you ever take ove…"
- ytc_UgyS7c7Tj…: "That’s why democrats make me mad. They push for EVs and autonomous vehicles. My …"
- ytc_UgySvEvft…: "What i believe ,Robot also have one saturation point, after that they can't do w…"
- ytc_UgwS2WhQN…: "In my 54 years of life, he is the only candidate I have ever donated any of my h…"
- ytc_UgwI6wwYc…: "No cause the fact that the art was actually good… but it’s sad. It’s done with A…"
- ytc_UgwFmAx4R…: "Obviously bro was prompting the thing with what he was going to be arguing. It c…"
Comment
I'm more afraid of the extremely large amounts of electricity these data centers require than I am of AI itself. Google, Facebook, Microsoft and other tech companies are investing heavily in fossil fuels--including coal, the most polluting of energy sources--to power these massive data centers, going against global scientific consensus regarding the urgent need to move toward sustainable energy sources that are not seriously altering the contents of our atmosphere. Of course companies like Palantir are extremely dangerous and controlled by misguided people whose lust for profit and power knows no bounds. But the more immediate danger in my opinion is environmental. If these companies are allowed to continue on the path they are on, climate change disruption is going to get far worse. We're being warned by science. The irony is that our technologies, which exist because of science, are going to cause increasingly more environmental destruction and chaos.
youtube · AI Governance · 2025-10-19T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
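The four coded dimensions above can be checked against a small codebook. This is a minimal validation sketch: the value sets are only those observed in this dump (the raw responses below plus the table above), and the function name `is_valid` is illustrative; the actual codebook may define additional categories.

```python
# Value sets observed in this dump; the real codebook may include more values.
OBSERVED_VALUES = {
    "responsibility": {"company", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def is_valid(coding: dict) -> bool:
    """True if every coded dimension uses a value observed in this dump."""
    return all(coding.get(dim) in vals for dim, vals in OBSERVED_VALUES.items())

# The coding shown in the table above passes this check.
print(is_valid({"responsibility": "company", "reasoning": "consequentialist",
                "policy": "regulate", "emotion": "outrage"}))  # → True
```

A check like this is useful as a guard before storing a batch, since an LLM can occasionally emit an off-codebook label.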
Raw LLM Response
[
{"id":"ytc_Ugy0XbWExcw1UATlrCZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxdd5LhOa4BqgfiVYJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwXhVfzoMiIICL_VrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxn71dq8OryH5hEXGx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPVQ3YuLhG9kEyktR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxVF3BV6PccJUZLrGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCuyjxFjBVQedL6wB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwnSbFOhb6ZTUZkOSF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwt6IhKX3j2vNFEkI54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwlkvU-6T5Cs7xrl5d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
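A raw batch response like the one above can be indexed for the "look up by comment ID" view. The sketch below is hypothetical: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the entries shown, but the sample ID `ytc_abc`, the function name `index_codings`, and the skip-malformed-entries policy are all illustrative assumptions, not the tool's actual implementation.

```python
import json

# Illustrative stand-in for a raw LLM batch response; real entries use full
# ytc_… comment IDs as shown above.
raw_response = '''
[
  {"id": "ytc_abc", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"}
]
'''

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a raw JSON array of codings and key each entry by comment ID,
    skipping malformed entries rather than failing the whole batch."""
    codings = {}
    for entry in json.loads(raw):
        if isinstance(entry, dict) and REQUIRED_FIELDS <= entry.keys():
            codings[entry["id"]] = entry
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_abc"]["policy"])  # → regulate
```

Keying by ID makes the per-comment lookup a constant-time dictionary access instead of a scan over the batch.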