Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- “This is exactly why all future AI programs must adhere to the three rules of rob…” (ytc_UgzEXOs3i…)
- “CNN pretending to be concerned about AI and children's safety but championed sex…” (ytc_UgzLnFGY8…)
- “Copyrighting AI art is more like copyrighting someone else's wet dream they told…” (ytc_Ugy0sjA9m…)
- “well theres the rationale that the product of ai generation isnt ripped from a p…” (ytr_UgxzFloQ9…)
- “In the human slave encampment the AI picks a human they want to roleplay with xD…” (ytc_UgxyV80Fs…)
- “‘Lose control to misaligned ai’ - good luck ‘controlling’ something infinitely s…” (ytr_UgyYsaDNo…)
- “I swapped my major from computer science to finance a semester after chatgpt sta…” (ytc_UgwRnjbD-…)
- “Might fine police work, trusting ai 100% of the time instead of going to find Mi…” (ytc_Ugxc3ZURH…)
Comment
Ai/AGI will not be a major threat until it’s able to run efficiently on general device. Currently “dangerous” Ai can only run in data centers due to the amount of compute it needs.
If the Ai/AGI runs in a monitored environment, it’s easy to manage safety. Unless there are bad human actors that build the infrastructure to run Ai/AGI for destructive purposes.
We already have viruses that run on devices and we can mitigate those. Ai/AGI would be a “super” virus that could evolve over time compared to our current day dumb viruses. However, like I mentioned, our everyday devices such as smartphones and laptops are currently not fast enough to handle a “super” virus involving AGI on device neutralizing the real threat of a AGI massive takeover.
Source: youtube · Posted: 2024-06-09T14:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxj2bYDcZz5H-KZP2d4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzfqrGn4DK3mWfzZ-x4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugxqg-TF_5RKEQh9ZcR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzziEQx93I_L1s1RDV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwkD-qDKzHZxsAXlBx4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz8ICTkLRkQL4hlReB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugwd2eilb2aRg2nQuGV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwPCpbbGged-kVApXp4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzKMaL0KLHA-l2RWKB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgybjRWn_iH71pclQzR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
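The "look up by comment ID" step above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codings) and index it by `id`. This is a minimal sketch, not the tool's actual implementation; the single record shown comes from the response above (it matches the Coding Result table), while the variable names and dict index are assumptions.

```python
import json

# A raw LLM response is a JSON array of coded comments. This one-record
# sample is copied from the response above; real responses hold a batch.
raw_response = """
[
  {"id": "ytc_UgzziEQx93I_L1s1RDV4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]
"""

# Index the array by comment ID so any coding can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgzziEQx93I_L1s1RDV4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user fear
```

Indexing once into a dict makes repeated ID lookups O(1), which matters when inspecting many comments against a large batch of responses.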