Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "dont care, ai is cheaper and a lot faster even tho is a little erratical, with g…" (ytc_Ugwnm91X9…)
- "I never even really stopped to consider how AI controlled anti-missile defenses …" (ytc_UgwXKJtHa…)
- "This is basically the equivalent of people using photography when it first was i…" (ytc_UgxaHXORd…)
- "AI will either advance us to unimaginable levels or destroy us. Advanced civiliz…" (ytc_UgzIkBv7u…)
- "everyone thinks realistic breathing makes ai voices more trustworthy but i've ac…" (ytc_Ugx4gqv7s…)
- "Won't happen. Mostly because a.i abilities are overhyped but how the fuck is you…" (ytc_UgzwbLeSm…)
- "By 2027, AI is expected to consume between 4.2 billion and 6.6 billion cubic met…" (ytc_UgzxKEQ5L…)
- "@illmicrophone These robots are NOT actual AI. These robots have preprogrammed r…" (ytr_UgxHqSLcI…)
Comment
If the goal is to benefit everyone, then as a society we first need to address many fundamental issues. We should elect knowledgeable and qualified leaders, ensure people are secure, healthy, educated, and respectful, and move beyond outdated conflicts—whether religious, resource-driven, or rooted in competition. Until we learn to behave responsibly, value what we have, and manage our impact—like properly recycling and reducing waste—the promise of AI should not come first.
youtube · AI Governance · 2026-03-25T14:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzCWtIk6tt-KUIVgoh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy1mJft5KZo2N7vU7R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzSViSr65liaEYkxHl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwYP4toxS1hS23zeUZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQF0p98JwuTLpQaA54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxnNI1M6f27-ss5yMF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxIyxkS0OCTzS_-aUZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyE76D3zx3X1kULp2h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz4jQBOIBsXS44-kC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxrAICvJz2Ct5He0uF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
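A response in this shape can be parsed and sanity-checked with a short sketch. This is an illustration only, not the pipeline's actual ingestion code; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response above, and the two abbreviated records in `raw` are drawn from it for demonstration.

```python
import json
from collections import Counter

# Two sample records in the same shape as the raw LLM response above.
raw = '''[
  {"id": "ytc_UgzCWtIk6tt-KUIVgoh4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyE76D3zx3X1kULp2h4AaABAg", "responsibility": "distributed",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]'''

# The four coding dimensions plus the comment ID, per the table above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(payload: str) -> list[dict]:
    """Parse a coding response and verify every record has the expected fields."""
    records = json.loads(payload)
    for rec in records:
        missing = REQUIRED - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing fields {missing}")
    return records

codes = parse_codes(raw)
# Tally one dimension across records, e.g. the policy codes.
print(Counter(r["policy"] for r in codes))  # Counter({'regulate': 2})
```

Validating field presence before tallying catches the common failure mode of LLM coders emitting malformed or partial records, rather than silently miscounting.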