Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "But now that I have done more research. The neural networks of ais are not even …" (ytr_UgwRpqtEl…)
- "My dad had a bachelors in physics and his work, TSI in Minnesota (part of Church…" (rdc_mij5tup)
- "Hi there! It seems like you have a playful take on the name Sophia. If you're in…" (ytr_UgwAlwTRP…)
- "I'm pretty convinced as a mortician that AI can't replace me. But now i'm wonder…" (ytc_UgxRTYMRO…)
- "Absolutely, knowledge can be a powerful tool that can be used for good or for le…" (ytr_Ugwmu7qUV…)
- "The Ai should be in a matrix so if it escapes, it's a level away from reality. …" (ytc_Ugw5S8RC1…)
- "All great suggestions, however IMO, greed will prevail unless we the people tak…" (ytc_UgwwzWmcz…)
- "That Back Ground Music… Detroit Become Human. Wish I never played that games lol…" (ytc_UgxzQqCH4…)
Comment
Orwell’s warnings in 1984 are more relevant than ever, and AI could accelerate the kind of thought control, censorship, and historical revisionism he feared. The difference now is speed and scale—AI can rewrite history, filter information, and shape public perception in real-time, across the entire world.
If AI is controlled by a few powerful entities, it could:
• Rewrite digital records instantly—erasing or altering past events to fit a new narrative.
• Censor dissenting voices faster than any human-led system.
• Use predictive algorithms to manipulate opinions before people even realize it.
But Orwell also showed us the solution—critical thinking and awareness. If we ensure AI remains transparent, accountable, and logically structured, it could become a tool for truth and enlightenment instead of control.
The question is: who controls the AI, and how do we keep it from becoming Big Brother? What safeguards do you think would prevent AI from being used as a tool for totalitarianism?
youtube
AI Responsibility
2025-11-11T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzIjR8nDmtLyBXrz9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz0HQTB9QUkFiAP8zl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwUfuwyTnWRBsvM_5J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwKXQQa9b_eWjKSfpB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwYwAJuwGx4qXnKZYx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxRp-iqEQyVpuSmw7l4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzgEvQSfYbgb81ek494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwvvagHHY6b9bvHib54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxiOy64FFgi7-Ku4sp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5u-S112goOPCTWvN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"}
]
```
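The raw response above is a JSON array keyed by comment ID, which is what makes the "look up by comment ID" view possible. A minimal sketch of that lookup is below; the helper name `index_by_id` and the required-field check are illustrative assumptions, not part of the actual tool, and the sample is truncated to three rows for brevity.

```python
import json

# Raw model output in the shape shown above (truncated to three entries).
raw = """[
 {"id":"ytc_UgzIjR8nDmtLyBXrz9B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugz0HQTB9QUkFiAP8zl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwUfuwyTnWRBsvM_5J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]"""

# Fields every coded row must carry, per the response format above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse the model's JSON array and index rows by comment ID,
    dropping any row that is missing a required field."""
    rows = json.loads(raw_json)
    return {row["id"]: row for row in rows if REQUIRED <= row.keys()}

codes = index_by_id(raw)
print(codes["ytc_Ugz0HQTB9QUkFiAP8zl4AaABAg"]["emotion"])  # fear
```

Indexing by ID rather than scanning the list keeps each lookup O(1), which matters once a batch holds thousands of coded comments rather than ten.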