Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

| Comment excerpt | ID |
|---|---|
| Received an email from en entity called "eko" asking for a donation to their cau… | ytc_UgyiJ4Z9j… |
| 2/3rd of our ministers should be AI unlike the dumb ones we have now😅😅😂😂 If all … | ytc_Ugzl-bpSa… |
| I think I'd have to argue that you overestimate our ability to pull the plug on … | ytc_Ugzq2lzR2… |
| Thank you for your question! In the dialogue, Sophia shares her views on wisdom … | ytr_Ugx5XIsr1… |
| What are they gonna do? Ai generate a doodle of you and throw darts at it?… | ytc_Ugz2WDw-i… |
| I don't know if it's hard to tell. Both of them looks AI generated to me.… | ytc_UgwvOu2Z9… |
| Nobody was waiting for these news. However, this was inevitable - it only was a … | ytc_UgyLHl6Qn… |
| The real problem is if we pass laws to hinder Ai and China and Russia don't we b… | ytc_Ugxm0IxsT… |
Comment
Even a super intelligent benevolent AI is not exactly comforting, when it evolves continuously not in hundreds of years but at an exponential rate. The rate and development of society's digital interconnectivity is escalating the process unfortunately giving them less time to regulate this phenomena. Nobody seems to have a clear overview of how to prevent AI Armageddon. Transparency, live monitoring and accountability sounds great, but who will be in practical control over "it". An ethical algorithm, lol?
youtube
AI Governance
2023-06-11T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugx6oSdLc44qgWFyuEt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgytxUHdEAs7IRGN7354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyVA66pZX_wYQSRGU14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhFkZ0xeeC-Teu7Eh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzNWzWw0L31H2E2oqh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPhslw8jtd0lY_MNB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzpr2KxXqg_cRpWlUN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyNCskXEcKHiNuHUFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzD6twY71PK6_QIgQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzETJ8XmCEyUB2-w654AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
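The raw response is a JSON array of records keyed by comment ID, so the "look up by comment ID" step above amounts to parsing the array and indexing it. A minimal sketch in Python: the four dimension names come from the response itself, while the sets of allowed values are an assumption inferred only from the values visible on this page, not a documented schema.

```python
import json

# Allowed values per dimension — ASSUMED from the values observed in this
# page's output; the real coding scheme may permit more.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def index_by_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response and index the records by comment ID,
    rejecting any record whose dimension value falls outside the schema."""
    indexed = {}
    for rec in json.loads(raw_response):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        indexed[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return indexed

# Usage with a hypothetical one-record response (the ID is illustrative):
raw = ('[{"id":"ytc_abc","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = index_by_id(raw)
print(coded["ytc_abc"]["emotion"])  # fear
```

Validating against the allowed-value sets at parse time catches malformed or hallucinated codes before they reach the results table, rather than when aggregating later.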