Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Wasn’t one of the terms OpenAI agreed to, was that humans would be held accounta…" (ytc_UgzZEfxbU…)
- "ChatGPT refuses religions if you use this rule to it. It’s says none of them are…" (ytr_UgzeVhpAc…)
- "Incompetent questions to a learning algorithm. You can see right here what the r…" (ytc_UgyYdp6AO…)
- "Michao kaku is very intelligent but he's making one crucial mistake here. It mig…" (ytc_UgxQcp7rn…)
- "I think you misunderstood the \"Anyone can come up with a premise\"-quote. It was…" (ytc_Ugx7vNc-u…)
- "Ask any top leader how they plan to control agi , they don’t have an answer. So …" (ytc_UgznDDQcB…)
- "It's not true because AI can't do physical jobs and for that you would need soph…" (ytc_UgwomdOYC…)
- "@grreeeeee what it says is that regardless of AI, he would have lost his job an…" (ytr_UgyGXffqf…)
Comment
Please rewrite this " I mean if you consider psychological weapons the internet should be regulated in the sense that no place should jeopardize someone's psychological safety i.e. no person or entity should be allowed to emotionally and mentally harass any other entity or persons. For example, League of Legends has dealt with trolls and player-to-player abuse for years, and over the course of more than a decade, they have implemented stronger and stronger penalties against those illegal behaviors if you consider any form of abuse illegal. Games and websites that don't accurately moderate this should be regulated, for example, in Warcraft: Reforge you can abuse players emotionally and mentally without any penalty. If you plan to regulate AI then you might want to truly address morality. " as it is addressing Congress and clarifying the main point while recommending call to action and methods for solving the overall issue of regulating human technology in a moral sense.
Source: youtube · Topic: AI Governance · Posted: 2023-05-23T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyFAdbHbs0MVQAjds94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTsRy-bB49HxDpvdh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyHVRYtvZZpdK0b2Dt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBrVAa4Spdg9oq07B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw3HU4tdR1fggThEMt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwWwEOoUr8kOszriIV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxMSI8vrFnbtMCXuDJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1lszndqdFBQ4w8rF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsoSmggmLH5wH1j9B4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyE9Q2PxIeJqtevMk54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
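A raw response like the one above should be validated before the codes are stored, since an LLM can emit malformed JSON or values outside the codebook. The sketch below is a minimal example of that check; the allowed values in `CODEBOOK` are inferred from the dimensions and codes visible on this page and are an assumption, not the full codebook.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# "Coding Result" table and the raw responses shown above; the real
# codebook may define additional codes.
CODEBOOK = {
    "responsibility": {"government", "company", "distributed", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed", "indifference"},
}


def validate_records(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its id looks like a YouTube comment (ytc_*)
    or reply (ytr_*) id and every coded dimension uses a known value.
    """
    records = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for rec in records:
        if not isinstance(rec, dict):
            continue
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

In practice the invalid records would be logged and re-queued for recoding rather than silently dropped, but the filtering logic is the same.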