Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
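As a minimal sketch of what such a lookup might do behind the scenes, assuming the raw responses are stored one JSON object per line (the file name `raw_responses.jsonl` and the helper `lookup_raw_response` are illustrative, not this app's actual code):

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Return the stored coding record for a comment ID, or None if absent.

    Assumes a hypothetical JSONL store with one {"id": ..., ...} object per line.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the record for the comment displayed below.
print(lookup_raw_response("ytc_UgxJgzi4OkQ7QPapltJ4AaABAg"))
```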
Random samples (click to inspect)

| Comment preview | Comment ID |
|---|---|
| AI bros also don't realise that stuff like the banana taped to a wall has a mean… | ytc_UgzwtnFjA… |
| It's funny how the first half of this interview goes really well up to the point… | ytc_UgzmvKB7K… |
| How awful. And what's the point of it? Investors put money into this only so that… *(translated from Russian)* | ytc_Ugz4mritb… |
| AI and factory automation has been taking jobs for decades. That "machines" wil… | ytc_Ugy5YlDo5… |
| America the truth about AI. AI is a tool but AI as with any program is only as g… | ytc_UgyCdWrE9… |
| Tesla’s Full Self-Driving (FSD) is safer than the average human driver on a per-… | ytc_UgxSzEh_O… |
| AI guys need to stop pretending like asking AI makes anyone smarter, its actuall… | ytc_UgzsdlGDo… |
| 17:09 yeah this is so dumb it's hilarious, one of my friends obsessed with the n… | ytc_UgxJfGeRf… |
Comment

> What I've learnt from this debate is that no one really knows anything about AI risk or safety, everyone is talking based on their own personal beliefs and fears and applying them to AI, So based on your understanding of society and moral leaning, that will determine your thoughts on AI safety. What we need is concrete research and experiments on AI risk and safety. All these probabilities and hypothesis should be tested in controlled environments and their results should be shared with the public, just like standard science. What we are doing now is just arguing our personal beliefs.

Platform: youtube · Topic: AI Governance · Posted: 2023-07-09T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
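To make these dimensions concrete, here is a hypothetical validation sketch in Python. The allowed value sets are inferred only from the labels visible on this page and in the raw response below; the project's actual codebook may define more.

```python
# Hypothetical validation sketch; value sets are inferred from labels
# visible on this page, not taken from the project's actual codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"fear", "outrage", "resignation", "indifference"},
}

def validate_coding(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if it looks valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The record shown in the table above passes cleanly.
print(validate_coding({"responsibility": "none", "reasoning": "mixed",
                       "policy": "liability", "emotion": "resignation"}))  # []
```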
Raw LLM Response
```json
[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugwu0fayEqNBHovgu2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJXnv95u_j7vvt3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzKWrogoupRqwRe8EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVZxBgODIUen5Phwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwaIsXG6vGzkg3o0V14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxF-JUIdpiLbjc_lUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuzAUM67Dn8MAFwxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZBZa5vsqXpN2YZ2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
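Since the model returns one JSON array per batch, each result has to be joined back to its comment by ID. A minimal parsing sketch, using a truncated stand-in for the raw string above:

```python
import json

# A truncated stand-in for the raw response shown above (two of the ten items).
raw = '''[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"}
]'''

# Index the batch by comment ID so each coding can be matched to its comment.
codings = {item["id"]: item for item in json.loads(raw)}
print(codings["ytc_UgxJgzi4OkQ7QPapltJ4AaABAg"]["emotion"])  # resignation
```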