Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples

- `ytc_UgyYL5nGo…`: "That reminds me of South Park episode in 27 season were Randy tried to save his …"
- `ytc_Ugy6xb-yq…`: "If they end up mass producing these things, it's gonna be the movie I robot but …"
- `ytr_UgzWWkk3L…`: "True 😂 & I’m not gonna lie I feel like if AI becomes “evil” it’ll just end up be…"
- `rdc_jkfoxq0`: "OP's title is a misrepresentation not only of the thrust of the article, but of …"
- `ytc_Ugy3bpkeD…`: "I'm not really worried about AI beings smarter, I'm worried about who makes it h…"
- `ytc_UgwOsyCx0…`: "After having sunken many, many hours into learning about AI and the alignment pr…"
- `ytc_UgxuYn2WK…`: "AI in its current form is not sustainable though. It is running at an economic l…"
- `ytc_UgwCqT8dh…`: "Streisand effect in action: her reaction and atrioc's apology just brought more …"
Comment

> I was having argument with my Brother in law. He was saying that AI in the very near future will approve medical prescriptions. I argued that even if that happens, the final say/permission/signature will be done by human doctor. He persisted that this is stupid and that it will be completely given by "Dr. AI". In any case, this is a very good example how AI could be dangerous.

youtube · AI Governance · 2023-04-19T13:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
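The Coding Result table is one coded record rendered with one row per dimension. A minimal sketch of that rendering (the function name and the record literal are illustrative, not the tool's actual code):

```python
def render_coding_table(record: dict) -> str:
    """Render one coded record as a two-column markdown table."""
    labels = {
        "responsibility": "Responsibility",
        "reasoning": "Reasoning",
        "policy": "Policy",
        "emotion": "Emotion",
    }
    lines = ["| Dimension | Value |", "|---|---|"]
    for key, label in labels.items():
        lines.append(f"| {label} | {record[key]} |")
    return "\n".join(lines)

# Example record, matching the values shown in the table above
record = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "fear"}
print(render_coding_table(record))
```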
Raw LLM Response
```json
[
{"id":"ytc_UgybCRLuslUr-O7PqK94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwK7ESV0IIbRfMoHJp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyO2TCNw1VVn8By_794AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0u9JUwkduphZaD-94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwWt-Lz7hC1PtFwQPh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx1tynQvgHuB6gTHep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6ZfCPmFVNT5MZO_h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw6MvxvLZgsT9GjZzh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzMl3ldqg3Zxme2Xwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyhr_5736wVeRWAJCl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
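The raw response is a JSON array of records, one per comment, each carrying the four coding dimensions. A minimal sketch of parsing and validating such a batch and indexing it by comment ID; the allowed values below are only those observed in this sample (the full codebook may define more), and the function name is illustrative:

```python
import json

# Allowed values per dimension, as observed in this sample batch
# (assumption: the real codebook may allow additional values).
SCHEMA = {
    "responsibility": {"ai_itself", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response; return records indexed by comment ID."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

# One record from the sample batch above
raw = ('[{"id":"ytc_Ugy0u9JUwkduphZaD-94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugy0u9JUwkduphZaD-94AaABAg"]["policy"])  # liability
```

Validating eagerly like this surfaces malformed or off-schema model output before it silently enters the coded dataset.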