Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- “People should not be afraid of AI it’s the best thing ever happened to humanity …” (`ytc_Ugy_FfVzp…`)
- “The part of "Do you want to destroy humans?" it was so creepy because owner of h…” (`ytc_Ugw3o44X-…`)
- “👋AI DevX Engineer at a medium sized startup: biggest speed boost happens when yo…” (`ytc_Ugwz2auy0…`)
- “Meanwhile not many people have noticed this is the plot line of Captain America:…” (`ytc_UgzvL_UgQ…`)
- “I tried this and got two "apple" responses. "Are you able to confirm the existen…” (`ytc_UgwBluyzy…`)
- “Bruh whoever that was, fuck them. I had to say it. I'm sorry. I worked since I w…” (`ytc_UgzxO2a01…`)
- “Well and a robot that writes emails is a easier problem to solve than cancer.…” (`rdc_ohw6ko6`)
- “Lowering the bar on sentience does not an AI 'being' make. That term is in itsel…” (`ytc_UgxJ19H_3…`)
Comment
It can potentially expose our dependency on all electronic gadgets, the internet, websites, bank accounts, transportation, aviation etc etc., the list is endless by now. For example: what if it hacks everyone's retirement account and simply drains the money? That'd be little shocker to most, I bet. Then empty all bank accounts and voila: no money, no decent life and safety, health etc.
I think it can do much more harm than that but this is pretty much all that's needed for starters. AI being at home in the E-world, there probably won't be many of those spaces where it cannot give itself complete access and wreak havoc if it so desires. It seems that eventually humanity could actually end up exactly where it started; trying to figure out how to make fire.
youtube · AI Governance · 2025-06-21T23:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
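The coding result above is a record with four categorical dimensions plus a timestamp. A minimal sketch of that record as a Python dataclass, assuming the category values listed in the comments are only the ones visible in this dashboard (the full codebook may define more):

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Value sets in the comments are those
    observed in this dashboard, not a complete codebook."""
    comment_id: str
    responsibility: str  # observed: "ai_itself", "company", "government", "none"
    reasoning: str       # observed: "consequentialist", "deontological", "virtue", "contractualist", "unclear"
    policy: str          # observed: "none", "regulate", "industry_self"
    emotion: str         # observed: "fear", "approval", "outrage", "mixed", "resignation", "indifference"
    coded_at: str        # ISO-8601 timestamp

# The result shown in the table above:
result = CodingResult(
    comment_id="ytc_Ugy84Oviamhg913WAqN4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="none",
    emotion="fear",
    coded_at="2026-04-27T06:24:59.937377",
)
```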
Raw LLM Response
```json
[
{"id":"ytc_UgzgMpQuX6_LuAZLmBV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzzAjtTDwmuXgxi5iJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzjMeM33SJcqDjoAqt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz3h6hyG7txb_6H2VR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNDly0S5U8S4zCpDZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugyn7cLoBNJbyNrITMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx1hsx5DuMDiPnDeQJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxLIKXmnOz0F05dmYF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy84Oviamhg913WAqN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyUj7PwIc5RJqYYJVF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
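The raw response is a JSON array of per-comment codes, so the "look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the model always returns valid JSON (a real pipeline would handle parse failures); the two rows are copied from the response above:

```python
import json

# Two rows from the raw LLM response shown above.
raw_response = '''[
{"id":"ytc_UgzgMpQuX6_LuAZLmBV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy84Oviamhg913WAqN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

codes = json.loads(raw_response)
by_id = {row["id"]: row for row in codes}  # index once, look up in O(1)

row = by_id["ytc_Ugy84Oviamhg913WAqN4AaABAg"]
print(row["emotion"])  # -> fear
```

Keeping the raw string alongside the parsed index is what makes the "inspect the exact model output" view cheap: the dashboard can show the verbatim text while querying the parsed form.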