Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "This is a really wonderful interview. I'm definitely TikTok-brained when it come…" (ytc_Ugx5FQXrM…)
- "I am a disable I use be a glass blower an now i make pixel art and 3d Voxel art.…" (ytc_UgzJYRNzq…)
- "ANOTHEr WAY to avoid these ai cameras is to live where I live A TRAILER PARK, ba…" (ytc_Ugxa3BUvR…)
- "I think what he was saying is that the only way to get the situation under contr…" (ytr_UgxZUIuQQ…)
- "Your content is just amazing. I teach algorithmic ethics and will be using this …" (ytc_UgxaYGWmC…)
- "The only place where AI will shine and be appreciated is in the gaming industry.…" (ytc_UgyxJ4aj1…)
- "Talked to a friend about this because I've tried to learn for years. it's someth…" (ytc_Ugyair-bc…)
- "not exactly, we've only recently had breakthroughs in attention mechanisms and o…" (ytr_UgwNtET9b…)
Comment
> Nobody has really answered the points raised by Teg Mark. To be an existential threat to us, AI does not need to match us in all possible aspects. Some psychopaths are clearly of less than average intelligence and yet pose significant risks. How about the question of abuse by bad actors? Finally, the possibility of the emergence of unanticipated emergent properties, inclusive of emergence of an instinct for self preservation, has not been addressed. Frankly, those arguing that AI does not pose an existential risk. seem to be driven by some ulterior agenda.

youtube · AI Governance · 2023-07-08T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJgzi4OkQ7QPapltJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"liability","emotion":"resignation"},
  {"id":"ytc_Ugwu0fayEqNBHovgu2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxJXnv95u_j7vvt3Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzKWrogoupRqwRe8EZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxVZxBgODIUen5Phwl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwaIsXG6vGzkg3o0V14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxF-JUIdpiLbjc_lUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwuzAUM67Dn8MAFwxZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzZBZa5vsqXpN2YZ2t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
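The raw response above is a JSON array of coding records, one per comment, each carrying the comment `id` plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and validated, assuming exactly that shape (the `parse_codings` helper and the abridged two-record sample are illustrative, not part of the tool):

```python
import json

# Abridged raw batch response in the same shape as the model output above
# (two records instead of ten, for brevity).
raw_response = """[
  {"id": "ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxJgzi4OkQ7QPapltJ4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "liability", "emotion": "resignation"}
]"""

# Every record must carry the comment id plus the four coding dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw batch response into a dict keyed by comment id.

    Raises ValueError if any record is missing a required field, so a
    malformed model response fails loudly instead of silently losing codes.
    """
    records = json.loads(text)
    codings = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
        codings[rec["id"]] = rec
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_Ugzy0RS8rCJsCo4XkwB4AaABAg"]["emotion"])  # fear
```

Keying the result by comment `id` makes it straightforward to join each coding back to its source comment, which is what the per-comment inspection view above relies on.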