Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugy0zt-Ad…: "The amount of conspiracy this cop just created just to align himself to AI are a…"
- ytc_UgzkcqfBb…: "Open AI isn't going to do advertising,... isn't stopping others from using Open …"
- ytc_UgxvzOc-E…: "Eventually ai will design more efficient designs and hopefully all these or many…"
- rdc_mwu68yi: "Wouldn’t they just use a VPN and ChatGPT? If I wanted to cheat. I would find a …"
- ytc_UgwvPIK4D…: "A remake of Christine. Self driving car developes a thirst for blood after the o…"
- ytc_Ugw4XaZCS…: "I started doing commissions just recently. I was finally happy to earn my own sh…"
- ytc_Ugw2E0y_3…: "Politicians, that's the job that they will never let Ai take over. They are too …"
- ytr_UgwrqI8_3…: "Add to this how generative AI has no proven material returns. Companies that hav…"
Comment
The most dangerous aspect of AI is when you creat a dependence on it.
For example you realize the danger and the AI suddenly tell you: Mr Smith, you CAN stop me but can you afford do? I make all your shopping, I pay all your taxes on time, I keep track of your family’s birthdays and send them texts, I keep your house and car payments up to date, I tell you what the most important news you want to know, I watch over your house when you are away, I protect your phone from all the malware and viruses, and I keep track of all your passwords and accounts.
Without me you would need 10 other apps to follow and keep track of and lose in instant. Would you rather have all that or one Smart AI?
Platform: youtube
Topic: AI Governance
Posted: 2023-04-18T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwNjvOBFA-sDv5Yail4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2A7X0AMNGOT1loqJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw_xhDUC0cpnFspc0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy9p4gQRhX6HKfQxFt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw5Wgx9q0IyVWn2q6N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOOjlhxFiQ9v7dyBF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx6fNM4KQjGqladqj94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy_6_Nm9nZynOG0_FJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxjnB2mmG_BL7HWpWp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxBZYs99LtOK4PxEHN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
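The raw response above is a JSON array of coding rows keyed by comment `id`, so looking up the record for one comment is a parse-and-scan. A minimal sketch of that lookup, assuming the array shape shown above (the `lookup_by_id` function name is illustrative, and the response is abbreviated to two of the ten rows):

```python
import json
from typing import Optional

# Abbreviated raw LLM response, in the same shape as the full array above.
raw_response = '''
[
  {"id": "ytc_UgwNjvOBFA-sDv5Yail4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxjnB2mmG_BL7HWpWp4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

def lookup_by_id(raw: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response and return the coding row for one comment ID."""
    rows = json.loads(raw)
    return next((row for row in rows if row["id"] == comment_id), None)

row = lookup_by_id(raw_response, "ytc_UgxjnB2mmG_BL7HWpWp4AaABAg")
print(row["emotion"])  # prints: fear
```

In practice the raw string may fail `json.loads` when the model emits malformed output, so a production version would wrap the parse in a try/except and surface the failure rather than assume valid JSON.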