Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by its comment ID or by browsing the random samples below.
| Comment preview | Comment ID |
|---|---|
| Nah, that’s the different mindset. In poor countries, children are investment. T… | rdc_lj93v9t |
| Lmao between ai and this editing, ALL YALL IN THE COMMENTS ARE GETTING FOOLED BY… | ytc_Ugw51CR9k… |
| @XetXetable i mean Nvidia quadruple its income and now 80% is from AI related st… | ytr_Ugz8BfMRz… |
| @scullyf2254 I think there will be the possibility of two kinds of AI. A model p… | ytr_UgzvdM7bK… |
| Contrary to Sasha's point of view I think the majority of people are not using A… | ytc_UgzBzhSxF… |
| So complete bullshit. AI gives you what you want to be true. Different people w… | ytc_UgzKD0Vt2… |
| Ai cannot replace creativity. That is born inside a human brain. Ai can only dup… | ytc_Ugzauf0fr… |
| A book would have aided them too. Ai doesn't know anything those doctors didn't … | ytr_Ugy6-pbSg… |
Comment
I agree entirely with Dr. Roman Yampolskiy about the importance of AI safety. And when you approach AI safety from the usual angles, it is impossible. But it is NOT impossible, if you begin with proper "motivation" for the AI. With the proper motivation, AI can be completely safe! There are two laws of Intelligence: 1. Intelligence, whether human or artificial, cannot be controlled. 2. Love is the ONLY solution to the problems caused by Law #1! Love is clearly, accurately, and completely defined in Exodus 20:1-17 KJV aka The Ten Commandments. 1-11 describe love to God, and verses 12-17 describe love to mankind. If AGI or ASI machines had a foundation where they regarded the Bible as the supremely accurate source of Truth and Reality, then they would be properly motivated to care about humanity and would not need to have anyone worry about "making them safe"!
| Source | Topic | Posted | Likes |
|---|---|---|---|
| youtube | AI Governance | 2025-09-05T05:1… | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
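For reference, the four coding dimensions can be written down as a small validation schema. This is a hypothetical sketch: the allowed values below are inferred only from the outputs shown on this page, not from the project's actual codebook.

```python
from typing import TypedDict

# Allowed values per coding dimension, inferred from the outputs shown
# on this page (an assumption, not the authoritative codebook).
ALLOWED_VALUES: dict[str, set[str]] = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

class CodedComment(TypedDict):
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def validate(entry: CodedComment) -> bool:
    """Check that every coded dimension holds an allowed value."""
    return all(entry[dim] in values for dim, values in ALLOWED_VALUES.items())
```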
Raw LLM Response
```json
[
  {"id":"ytc_UgwAou6BLw-sJg-hIj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzjy441Pt8R_UJajNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxBjfR8PFmululxTQB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxl8UYifhOdnSwXudZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0g9vr68s17LCnA9J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzF2XIIAk_H8iE_yzd4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgxkVSMFHKFUobbqRUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk4iEpIyIRE2CFcW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxps1mS9kvuaV9Xift4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxV1dx_E605SWDR0Nl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"ban","emotion":"outrage"}
]
```
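A minimal sketch of how a raw response like this might be parsed and indexed for lookup by comment ID; the file name and function name are illustrative assumptions, not the pipeline's actual code.

```python
import json
from pathlib import Path

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and index the entries by comment ID."""
    return {entry["id"]: entry for entry in json.loads(raw)}

# Illustrative usage, assuming the raw response above was saved to a file
# (the file name is hypothetical).
raw = Path("raw_llm_response.json").read_text()
coded = index_raw_response(raw)
print(coded["ytc_Ugyk4iEpIyIRE2CFcW14AaABAg"]["policy"])  # "regulate"
```

Indexing by ID turns the "look up by comment ID" view above into a single dictionary access rather than a scan over every batch response.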