Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Just like you can Google how to build explosives or dangerous devices, people can now use AI to create those same things. But at the same time, we can use AI to develop cures, prevent threats, or even build AI controlled systems that monitor air quality 24/7. These systems could detect viruses or hazardous conditions and alert people long before they reach densely populated areas.
Fearing progress solely because of the potential for misuse is completely irrational. By that logic, we shouldn’t build airplanes because someone might fly them into buildings or drop bombs. We shouldn’t create the internet because people might spy on you or steal your money. We shouldn’t make TVs because someone could broadcast lies or dangerous content. There will always be risks but halting innovation because of them is not the answer.
That said, I do believe precautions and proper safety procedures should always be implemented. Growth should be guided, not stifled. Yes the dangers are real but we will be okay.
youtube
AI Governance
2025-09-04T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy63kpjuhhgUMJY7jh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZEMQeXEHwivjfCvV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyKumYzVw5xELudLKB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxKhz-1fpjD635mdmR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMrG4KDqj0Zz9qLVd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws8RSwTVfbiJ4EIfN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgydXxD5KENHlXg7r-d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxs1VIgK9XlsErCNxR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgycMF7xVAmYqRtxfF14AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKBuDMhUknUqCUHNZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
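Since the raw response is a JSON array of per-comment codes, looking up one comment's dimensions is a simple parse-and-filter. Below is a minimal sketch of that lookup; the `lookup` helper is hypothetical (not part of any tool shown here), and the data is abbreviated to two entries from the response above.

```python
import json

# Abbreviated raw LLM response (two entries copied from the full array above).
raw_response = """
[
  {"id": "ytc_Ugy63kpjuhhgUMJY7jh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugws8RSwTVfbiJ4EIfN4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def lookup(comment_id, response_text):
    """Return the coded dimensions for one comment ID, or None if absent.

    Hypothetical helper: parses the model's JSON array and scans for the
    matching "id" field.
    """
    rows = json.loads(response_text)
    return next((row for row in rows if row["id"] == comment_id), None)

coded = lookup("ytc_Ugy63kpjuhhgUMJY7jh4AaABAg", raw_response)
print(coded["reasoning"])  # consequentialist
print(coded["emotion"])    # approval
```

A real pipeline would also validate that each dimension value falls in the codebook's allowed set (e.g. `emotion` in `{"approval", "fear", "outrage", ...}`) before writing it to the results table.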