Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgzNj7Loa…: The classic example is: what happens if you create a sufficiently smart coffee r…
- ytc_Ugyjkjf1u…: I’ve had a *very* similar conversation with chatGPT lmao, LLM gonna LLM / I belie…
- ytc_Ugw-oDd9I…: If AI fails, it will be economically devastating. If AI succeeds, it will be ev…
- ytc_UgyHFydLg…: So far all I’ve seen it being used to make very shit media, that looks dead fro…
- ytc_Ugz-M4qXZ…: It looks like the AI is not reliable enough yet, so those professionals made a p…
- ytc_UgzvXW0W1…: They need to figure this out because the only thing thats gonna happen is you're…
- ytc_Ugxam7-Bh…: I hate both extreme ends of the AI, people who have the emotional response for r…
- ytc_UgxP2fGyF…: 1980: "heyo 2022 how is the future? Do you got flying car yet?" / 2022: "nah, but…
Comment
I don't think we could create an AGI, that would only work in humanity's interests. While the interests of humanity are not equal, depending on inequality in our chances to go along in a world where AI replaces the need for the middle and low income classes work, thus strips away their value and with it any power to regulate. Those in the ivory towers have lost the touch to reality, are mad to consider that step and once they unleashed the spirit from the bottle, they will loose control and will not be able to avoid extinction. They cannot control the spirits they've raised. Greed is a sin, it will lead to the abyss. Turn, before it's too late!
Platform: youtube
Topic: AI Governance
Posted: 2025-12-04T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzkaHi4i4WbTfYN07J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxnJy42h39uZwlo5jB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXqIvlEBfeXrykYBR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxwdT4i8HUSmgV_svd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzm0F6DA5rfGzctsDp4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxlGobmdj7hFgQcBDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzC_ctZBKXWJZIxhW54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzh8pqiexJcFvZ72FN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXDZ0Mb03n7h9LcJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5sO8zIEoVkp8POOx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
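The raw response above is a JSON array of per-comment code objects, and the "Coding Result" table for a single comment is just the array entry whose `id` matches. A minimal sketch of that lookup (the parsing below assumes only that the response is valid JSON in the shape shown; the array here is trimmed to two of the records above):

```python
import json

# Raw batch response, trimmed to two records from the payload shown above.
raw_response = """
[
  {"id": "ytc_Ugzm0F6DA5rfGzctsDp4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkaHi4i4WbTfYN07J4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index the batch by comment ID so any coded comment can be inspected directly.
codes_by_id = {record["id"]: record for record in json.loads(raw_response)}

# Look up the comment shown in the detail view; its codes match the
# Coding Result table (distributed / contractualist / regulate / fear).
record = codes_by_id["ytc_Ugzm0F6DA5rfGzctsDp4AaABAg"]
print(record["policy"], record["emotion"])  # regulate fear
```

Indexing by `id` once, rather than scanning the array per lookup, keeps inspection O(1) even when a batch codes hundreds of comments.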