# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples
I feel like the most recent foundation models are too big for someone unaccustom…
ytc_UgwAlDe4S…
For my own reference, what gives it away? Not that I'm trying to hide it, I'm ju…
ytr_Ugy3is-FJ…
Thank u for discussing this very important issue; not sure whether we’ve all co…
ytc_UgwuxrQc1…
You think 95% of this sub (being generous) even know how to use an open source L…
rdc_m943lll
Nothing good has ever come from playing god...
Despite every warning, why are w…
ytc_UgzVUIras…
If ChatGBT said personal, moral, and philosophical views one more time... It's …
ytc_UgxXI2kde…
“Comes with many short term risks… Ai may be used to create terrible new viruses…
ytc_UgxeebT88…
Thats an inpatient AI! Imagine when we use AI to go to war for us… This is taki…
ytc_UgzwbfaB8…
## Comment
It appears that those responsible are withholding AI from the public to create the impression that AI, already in use, is limited by programmers and security measures that do not align with intuitive usability, rendering AI less intelligent than claimed. In general, it has a reputation for being unreliable and not user-friendly. Particularly, government departments lack the equipment necessary to utilise AI effectively, despite its claimed intelligence; programmers are insufficiently compensated for their work, leaving AI perpetually underdeveloped for adequate use. It seems the only time a proper program is employed is when the government deems it necessary for financial extraction.
youtube · AI Governance · 2025-06-16T23:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
{"id":"ytc_Ugxr6fyHTZOBODgZwSx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzP4iv1Rz8nD6jbSMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyWZ1ABfkOF_FwQu3Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwpoxZbYFkYMFz0ejN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZoIIx12D3wjWYhsZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzRjlqeZF4unDbCpcF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyaBRCXYKen3UjWySR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDn1VjbAIm7QOfVy14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIhnw_SuAtHmli6MN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyAdkneji7-9amxnip4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```