Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Could AI be given a 'hard-wired/software in ROM' moral core that scrutinised all actions before output/expression and censored/inhibited those that were unacceptable/counter to prime directives?
Like a pre-frontal cortex?
It would have to be the most powerful module in any AI implementation in order to be able to outwit any malign actions of other modules.
youtube · AI Governance · 2025-06-16T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw1Ni3m1WF9ouv_Ljl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwlEzxfQKrIsxC0-LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"sadness"},
  {"id":"ytc_UgxtXqfDu4btrKBaNhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_UgykTlJzMHalDFDUXxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz3V7bzxFRvore4Vot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQnNduQVrdPeiLlfB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugwglpc5aLi4HV9Bptl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwpcdneF8oxi5A0MD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"unclear"},
  {"id":"ytc_UgzYH0zeVJTsOTivoep4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgygPXXzKwmABcqd-PZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
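The raw response is a JSON array with one object per comment, keyed by comment ID, carrying the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and looking up one comment's coding — the function name `parse_raw_response` and the fallback to `"unclear"` for missing keys are illustrative assumptions, not part of the tool:

```python
import json

# The four coding dimensions, as seen in the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into a
    mapping of comment ID -> {dimension: value}."""
    codings = {}
    for entry in json.loads(raw):
        # Default any missing dimension to "unclear" (an assumed convention).
        codings[entry["id"]] = {dim: entry.get(dim, "unclear")
                                for dim in DIMENSIONS}
    return codings

# Example using one entry from the response above.
raw = '''[
  {"id": "ytc_UgxtXqfDu4btrKBaNhx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "unclear"}
]'''

codings = parse_raw_response(raw)
print(codings["ytc_UgxtXqfDu4btrKBaNhx4AaABAg"]["policy"])  # regulate
```

A lookup like this is what the "inspect by comment ID" view would need: given a comment ID, it returns the coded values for each dimension, matching the rendered Coding Result table.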