Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.

Random samples
- "This argument is nonsense. Ordering the AI around is not a skill set. Typing key…" (ytr_UgwmDub0V…)
- "The problem is the 6000 that die by AI are not part of the 40000 that die by man…" (ytc_UgwdASYEo…)
- "You can get the materials in description of my channel or you DM me at my instag…" (ytr_UgwX3w8m-…)
- "AI can constrict freedom of thought and creativity through three primary mechani…" (ytc_Ugxe-br6p…)
- "Take a look at the last voice model from OpenAI. The way it reads emotion and mi…" (ytr_UgzjcU3IC…)
- "The AI said he would be 99% more likely to be involved in a shooting. He was inv…" (ytc_UgzSQHZDS…)
- "Companies start implementing AI. Some workers get laid off. Everyone starts gett…" (ytc_Ugwh9rzFy…)
- "AI, AGI And SuperIntelligence is yet just another catalyst to human evolution to…" (ytc_Ugx0IPOJi…)
Comment
> Saying we shouldn't develop ai because it is a better weapon and because it can be used to assassinate people is like saying we shouldn't develop nuclear power because you can make a nuke, or we shouldn't develop rockets because you can make ICBMs, or we shouldn't of made swords, bows, guns, etc because it makes killing someone easier. Using this logic is heavily flawed and basically classifies all science as a force of evil. Science (and AI) is not evil. There are problems with how everyone sees AI and instantly assumes they are going to destroy everything. Just think about it, first of all if an AI does go "rogue" how would it get control or weapons? Maybe just maybe militarys have people, defenses against hacking, and AI working for them if one is going rogue. My point is just because an AI goes "rogue" doesn't mean it is suddenly all knowing and all powerful .
Platform: youtube · Posted: 2018-04-03T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
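Each coding result is a flat record over four categorical dimensions plus the comment ID. For downstream tooling, here is a minimal Python sketch of that record type, using only the field names seen on this page; the value lists in the comments are those observed in the raw response below and may not be exhaustive:

```python
from typing import TypedDict

class Coding(TypedDict):
    """One coded comment, as emitted by the model (see the raw
    response below). 'id' is the platform comment ID."""
    id: str
    responsibility: str  # observed: "none", "developer", "ai_itself"
    reasoning: str       # observed: "consequentialist", "deontological", "unclear"
    policy: str          # observed: "none", "regulate", "liability"
    emotion: str         # observed: "indifference", "fear", "approval",
                         #           "outrage", "resignation"
```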
Raw LLM Response
```json
[
  {"id":"ytc_UgwOHWOKk4mrzdvx6r14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwD5zjsOKm381BRwkp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwKd151bVJvhA7QixJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6QvnlNOFGFQu3fph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhO-M52B0Uwg-KnsF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyMkcekidWccs1Q-Pt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyoglIh54yKKwxPFFh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWqUR3rjToEfHbbWB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugytlda4Gn_CBnVOaCt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwP0Zt_2THvcx3nmvV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
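To reproduce the look-up-by-comment-ID view above, a raw batch response like this one can be parsed and indexed by ID. A minimal sketch, assuming each batch is stored as a JSON array on disk; the file name raw_llm_response.json is hypothetical:

```python
import json

def index_codings(path: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of coding
    records) and index the records by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# Look up a single comment's coding by its ID.
codings = index_codings("raw_llm_response.json")  # hypothetical path
rec = codings.get("ytc_UgyoglIh54yKKwxPFFh4AaABAg")
if rec:
    print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
```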