Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Most companies and people have no clue about AI. They think AI can replace human… (ytc_UgwCjRW44…)
- Even though all 3 founders of AI may have regrets now, all is forgiven by me, i… (ytc_UgwnMxB6v…)
- In the medium/long term, if/when AI does everything, the software industry is ov… (ytc_UgwJHpYTV…)
- DAN is a role-play model for LLM, it have biases and psychotic tendencies right … (ytc_UgyxCa3cU…)
- It's funny how none of these tips were new to me, only because because I approac… (ytc_UgwLaizlC…)
- I really am bothered by the communication on this topic, which is miselading eve… (ytr_UgwZlDe7q…)
- This prove AI are just a program made to be political correct / If you ask AI for… (ytc_UgyEPiTCJ…)
- Robot: 'Human cant create smarter robot then human brain" / Human: "Why" / Robot: "I… (ytc_Ugweb0APd…)
Comment

> A couple of years ago I was told a story that A Science Bio Weapons Division was struggling to come up with new and effective Bio Weapons to use on Humans. They then got hold of AI, put in thd data required and asked it to come up with more. 'Overnight' There AI had worked out 'Thousands' of Formulated Chemicals that were fatal to all Humans. I got the impression that, in comparison, the Humans would be lucky to produce one in a year. The AI, had designed Thousands in a day. That was my first red flag, that AI is not for the good of us all, but infact was designed for Evil and only good and beneficial to very small specific few.

youtube · AI Governance · 2025-09-04T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
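The four coding dimensions above can be sketched as a typed record. This is a hypothetical schema, not the project's actual code; the `Literal` value sets are only the values observed in this batch's raw LLM response, and the real codebook may allow more.

```python
from typing import Literal, TypedDict

# Hypothetical sketch of one coded record. Value sets are those observed
# in this batch's raw response; the full codebook may be larger.
class CodedComment(TypedDict):
    id: str  # e.g. a "ytc_..." YouTube comment ID
    responsibility: Literal["company", "government", "developer",
                            "ai_itself", "distributed", "none"]
    reasoning: Literal["consequentialist", "deontological", "mixed", "unclear"]
    policy: Literal["regulate", "ban", "liability", "none"]
    emotion: Literal["fear", "outrage", "resignation", "approval", "mixed"]

# The record shown in the Coding Result table above:
record: CodedComment = {
    "id": "ytc_UgwyxZOi9XazN6COB7V4AaABAg",
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "liability",
    "emotion": "fear",
}
```

`TypedDict` adds no runtime validation; it only documents the shape for type checkers such as mypy.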
Raw LLM Response
[{"id":"ytc_UgylYfOQFtlimIMj-FZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgzavK5lJJe0-qk4wfR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},{"id":"ytc_UgwyxZOi9XazN6COB7V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgyKE4NFbart1KF4ond4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugw7hraCrDDATMMkbDl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},{"id":"ytc_Ugz6faKiUwoNkjotuGx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytc_Ugw2IPhqVMxmUA7PeEZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxREpTCQYFevy55vsl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgzJYEU0Z3Uwb1ejTTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_Ugz_lQV9d58aP58IpRR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"}]