Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- Would a true distributed intelligence without corporeal needs require killing or… (ytc_UgwP1QVXb…)
- Asking Ai how it feels about being turned off and so on.... Just when you think … (ytc_UgxzxW16R…)
- One of the main ideas presented is universal capital allocation. So basically as… (ytr_UgzLxvKED…)
- For me, anything that doesn't have to be precise, is art because is up to subjec… (ytc_UgxpJb25L…)
- @wardm4 are you saying we can’t put guardrails at the point between the user sub… (ytr_UgxBIFZaY…)
- The teaching you is cool but I don't think that's going to be a benefit because … (ytc_UgwbPxTWJ…)
- One thing I didn’t hear mentioned, is that electricity is needed to run AI. Turn… (ytc_UgzW-eLBB…)
- AI doesn’t shape anything. AI is just a sludge of all the things humans have don… (ytc_UgyJKHDe6…)
Comment
This is a fascinating conversation, that has really got me thinking. If we could determine and agree on moral boundaries of AI use, that would be a start, but that only works with honest people. The disaster happens with greed and power-obsession. There are plenty of powerful people who would clearly want to take advantage of the worst scenario. I fear for my heirs’ future more than my own.
youtube · AI Governance · 2025-09-05T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugx6gGG7FzPhOAlXoK54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyBRXPJ8LUMzuym8MJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyiJoxDWDUT03Yfuo14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwvwvXPzBCNK2No5uZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYVYrd6IzUbrdsaDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy-ZkoADoJRVCBxf9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz3gmEyCQ6_dGsxHJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgygpZ1ETacGeO0Q75N4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgzOYM-l3ccsmedjnh54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgybUXgbCiC3ZssaPKh4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"fear"}
]
```
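Since the raw model output is a JSON array of per-comment records, it is worth checking each record against the coding scheme before storing it. The sketch below is a minimal validator; the category vocabularies are inferred only from the values visible in this sample and the table above, so the real codebook may define additional categories, and `validate_records` is a hypothetical helper name, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above;
# the actual codebook may contain more categories than appear in this page.
VOCAB = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological",
                  "virtue", "contractualist"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "mixed", "fear", "outrage",
                "sadness", "resignation"},
}

def validate_records(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors.

    An empty list means every record carries all four dimensions and
    each value is in the (assumed) codebook vocabulary.
    """
    errors = []
    for rec in json.loads(raw):
        rid = rec.get("id", "<missing id>")
        for dim, allowed in VOCAB.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{rid}: {dim}={value!r} not in codebook")
    return errors

# One record from the response above — should validate cleanly.
raw = ('[{"id":"ytc_UgybUXgbCiC3ZssaPKh4AaABAg","responsibility":"user",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
print(validate_records(raw))
```

A check like this catches the common failure modes of LLM coding runs: a record missing a dimension, or a model inventing an off-vocabulary label that would silently corrupt downstream counts.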