Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I truly hate AI. I have many, many illnesses and rare diseases. I am disabled. T…" (ytc_Ugx093ULF…)
- "10:22 So he's getting mad that this new technology is killing the AI art communi…" (ytc_UgzwGS-9-…)
- "One thing I've realized after this began to come up, is that such a tool can be …" (ytc_UgxvMhY0b…)
- "People who use AI to make "their" "art" or "music" or "photographs" for them are…" (ytc_UgyeHdQim…)
- "It seems wise to pause AI now, to see its reaction, before it's too late.…" (ytc_Ugzj0GfpI…)
- "sooo... why is no one asking... why do these companies get to mess with AI? its …" (ytr_UgzVF8a86…)
- "In the 50's Ebbe warn human kind about the super A.I intellegent. They know sinc…" (ytc_UgxUW_eBa…)
- "You should have added saying, but then youre also conscious that you lied right?…" (ytc_UgxwHR91v…)
Comment
11 ways to stop AI from harming humankind...
1. Dont allow AI access to nuclear warheads, Bio weapons or chemical weapons
2. Dont allow AI access to banking systems
3. Don't allow AI access to medicine and drug distribution
4. Don't allow AI access to news channels
5. Don't allow AI access to emails or social media
6. Don't allow AI access to satellites / telecom towers
7. Don't allow AI access to electrical grids or gas pipeline networks
8. Don't allow AI access to smart cars / smart vehicles, smart drones / air travel
9. Dont allow AI access to the stock market.
10. Don't allow AI access to flight paths and air control centres
11. Don't allow AI access to international shipping lane systems, motorways and railway networks.
In short, only allow AI to play (a heavily monitored) advisory role to humanity.
Source: youtube · AI Governance · 2023-05-14T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
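The table above records one code per dimension. A minimal validation sketch for such records; the allowed value sets below are inferred from the codes visible on this page and are illustrative, not the project's actual codebook:

```python
# Allowed values per dimension, inferred from codes observed on this page.
# Illustrative only: the real codebook may define more categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside the codebook."""
    return [
        (dim, record.get(dim))
        for dim, allowed in CODEBOOK.items()
        if record.get(dim) not in allowed
    ]

coded = {
    "responsibility": "distributed",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear",
}
print(validate(coded))  # [] — every dimension is a known code
```

A non-empty result flags records where the model drifted outside the expected label set.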
Raw LLM Response
[
{"id":"ytc_UgxwoKpN4YLM6klbhol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzk_JQFjgS5DB7bHA94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyT3Ro8V2dsK-PEE1t4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxmrb0xePnxAdumwvF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx4HRCFHX2yPjfJvwB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzcPdeNXIRwWbgwFL94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgylsrrhzPC5-B_oPhF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxa0sBQxWZ7skCUtM14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxicOjDL_YNhvexcHF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9hEL38ThmYrXKmSN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
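The raw response is a JSON array of per-comment codes, so "look up by comment ID" reduces to parsing the array and keying it by `id`. A minimal sketch, assuming the response text is available as a string (the variable and function names here are illustrative):

```python
import json

# Two records copied from the raw batch response above.
raw_response = '''[
  {"id":"ytc_UgxwoKpN4YLM6klbhol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9hEL38ThmYrXKmSN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugw9hEL38ThmYrXKmSN4AaABAg"]["policy"])  # regulate
```

Because IDs are unique per comment, the resulting dict gives constant-time lookup of the exact model output for any coded comment.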