Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I have sort of a counterargument to what he is saying specifically about AI molding public opinion subvertly through social media. Let's say it could 100% optimize language to manipulate people. Wouldn't people be far less susceptible to manipulation once they realize an AI is active on platforms influencing everyone? And wouldn't AI be forced to operate within the confines of languages, which are human-made and inherently imperfect? Aren't there certain limits when it comes to how effective an optimized combination of words can be in influencing people? I get there are charismatic dictators who can induce people to do terrible things, but if there's not a human face behind the words, and actual, forceful control of societal institutions, how effective would said AI actually be?
Not saying we should take AI for granted, but if it's forced to operate within the parameters, systems, and conceptualizations set up by humans, I think there might be inherent limitations to how powerful it could actually become.
Source: youtube · Topic: AI Governance · Posted: 2023-04-18T21:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwXl6sKkIDjWYEuYlB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzu9PdxNvFli6B-n5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyT4JT0rtYRvGGhGbB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyycrD11XiLG0nYgDR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxmjyTsnNif23na9GB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNQootZ128WvylJ-t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxwoqCA4nUO6zaw_D54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwrxTmiGTj8wE-gNpN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugyn9R98fnhmdiA5Z394AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz1ADfmLXRWWRkjStx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
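The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a batch could be parsed and sanity-checked is below; the allowed values per dimension are inferred only from the responses shown here, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the sample responses
# above (assumption: the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "distributed", "government", "company",
                       "developer", "user", "ai_itself"},
    "reasoning": {"none", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"none", "indifference", "outrage", "approval", "fear",
                "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting any
    entry whose value for a dimension is not in the allowed set."""
    coded = {}
    for entry in json.loads(raw):
        cid = entry["id"]
        codes = {dim: entry[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# First entry from the raw response shown above.
raw = ('[{"id":"ytc_UgwXl6sKkIDjWYEuYlB4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(parse_codes(raw)["ytc_UgwXl6sKkIDjWYEuYlB4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents a label outside the codebook, before bad codes reach the results table.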